BERT and RoBERTa: Summary of Key Points

Index of links to previous articles

BERT Recap

Overview

  • BERT (Bidirectional Encoder Representations from Transformers) uses a “masked language model” (MLM) objective: it randomly masks some tokens in the input and trains the model to predict the original vocabulary ids of the masked tokens (see the masking sketch after this list).
  • BERT shows that “pre-trained representations reduce the need for many heavily-engineered task-specific architectures”.
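To make the masking procedure concrete, here is a minimal sketch of masked-token selection in plain Python. The toy vocabulary and the `mask_tokens` helper are illustrative assumptions; the 15% masking rate and the 80/10/10 replacement split follow the BERT paper, but this is a sketch of the idea, not the actual pre-training code.

```python
import random

MASK = "[MASK]"
VOCAB = ["the", "cat", "sat", "on", "mat", "dog", "ran"]  # toy vocabulary (illustrative)

def mask_tokens(tokens, mask_prob=0.15):
    """Return (masked_tokens, labels). labels hold the original token at
    masked positions and None elsewhere (hypothetical helper)."""
    masked, labels = list(tokens), [None] * len(tokens)
    for i, tok in enumerate(tokens):
        if random.random() < mask_prob:      # select ~15% of positions
            labels[i] = tok                  # the model must predict the original token
            r = random.random()
            if r < 0.8:                      # 80%: replace with [MASK]
                masked[i] = MASK
            elif r < 0.9:                    # 10%: replace with a random token
                masked[i] = random.choice(VOCAB)
            # remaining 10%: keep the original token unchanged
    return masked, labels

print(mask_tokens(["the", "cat", "sat", "on", "the", "mat"]))
```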

BERT Specifics

There are two steps in the BERT framework: pre-training and fine-tuning.
  • During pre-training, the model is trained on unlabeled data over different pre-training tasks.

  • For fine-tuning, each downstream task gets its own separate model: each is first initialized with the pre-trained parameters, and then all parameters are fine-tuned on labeled data for that task (see the sketch after this list).
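As a concrete illustration of the two-step workflow, the sketch below loads a pre-trained checkpoint and attaches a fresh task-specific classification head for fine-tuning. It assumes the Hugging Face transformers library and the `bert-base-uncased` checkpoint; it shows the setup only, not the original training code.

```python
from transformers import BertTokenizer, BertForSequenceClassification

# Step 1 (pre-training) is already done: load the released pre-trained weights.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # new, randomly initialized task-specific head
)

# Step 2 (fine-tuning): all parameters, pre-trained and new, are then updated
# on labeled task data with an ordinary training loop (omitted here).
batch = tokenizer(["a great movie", "a dull movie"], padding=True, return_tensors="pt")
outputs = model(**batch)
print(outputs.logits.shape)  # (batch_size, num_labels)
```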

Input/Output Representations

  • In order to handle a variety of downstream tasks, the input representation must be able to encode both a single sentence and a pair of sentences in one token sequence.

  • The first token of every sequence is always a special classification token, [CLS].

  • The two sentences of a pair are separated by a special token, [SEP].

  • Learned segment embeddings are added to every token to indicate whether it belongs to sentence A or sentence B (see the tokenizer sketch after this list).
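To see how the special tokens and the sentence A/B (segment) ids look in practice, here is a small sketch using the Hugging Face tokenizer for the `bert-base-uncased` checkpoint; the library and checkpoint name are assumptions made for illustration.

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# Encode a sentence pair as a single sequence: [CLS] A [SEP] B [SEP]
enc = tokenizer("how are you", "i am fine")

print(tokenizer.convert_ids_to_tokens(enc["input_ids"]))
# ['[CLS]', 'how', 'are', 'you', '[SEP]', 'i', 'am', 'fine', '[SEP]']
print(enc["token_type_ids"])  # 0s for sentence A, 1s for sentence B (segment embeddings)
```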
