Readings of Classic Papers in the BERT Family

[1] BERT
Paper: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
Notes: Paper notes – BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

[2] RoBERTa
Paper: RoBERTa: A Robustly Optimized BERT Pretraining Approach
Notes: Paper notes – RoBERTa: A Robustly Optimized BERT Pretraining Approach

[3] SBERT
Paper: Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks
Notes: Paper notes – Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks

[4] ERNIE
Paper: ERNIE: Enhanced Representation through Knowledge Integration
Notes: Paper notes – ERNIE: Enhanced Representation through Knowledge Integration

[5] ERNIE 2.0
Paper: ERNIE 2.0: A Continual Pre-Training Framework for Language Understanding
Notes: Paper notes – ERNIE 2.0: A Continual Pre-Training Framework for Language Understanding

[6] ERNIE 3.0
Paper: ERNIE 3.0: Large-scale Knowledge Enhanced Pre-training for Language Understanding and Generation
Notes: Paper notes – ERNIE 3.0: Large-scale Knowledge Enhanced Pre-training for Language Understanding and Generation

[7] ALBERT
Paper: ALBERT: A Lite BERT for Self-supervised Learning of Language Representations
Notes: Paper notes – ALBERT: A Lite BERT for Self-supervised Learning of Language Representations

[8] XLNet
Paper: XLNet: Generalized Autoregressive Pretraining for Language Understanding
Notes: Paper notes – XLNet: Generalized Autoregressive Pretraining for Language Understanding

[9] XLM
Paper: Cross-lingual Language Model Pretraining
Notes: Paper notes – Cross-lingual Language Model Pretraining

[10] PanGu-α
Paper: PanGu-α: Large-scale Autoregressive Pretrained Chinese Language Models with Auto-parallel Computation
Notes: Paper notes – PanGu-α

[11] SimCSE
Paper: SimCSE: Simple Contrastive Learning of Sentence Embeddings
Notes: Paper notes – SimCSE: Simple Contrastive Learning of Sentence Embeddings

[12] StructBERT
Paper: StructBERT: Incorporating Language Structures into Pre-training for Deep Language Understanding
Notes: Paper notes – StructBERT: Incorporating Language Structures into Pre-training for Deep Language Understanding

[13] BERT-flow
Paper: On the Sentence Embeddings from Pre-trained Language Models
Notes: Paper notes – On the Sentence Embeddings from Pre-trained Language Models

[14] DistilBERT
Paper: DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter
Notes: Paper notes – DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter

[15] TinyBERT
Paper: TinyBERT: Distilling BERT for Natural Language Understanding
Notes: Paper notes – TinyBERT: Distilling BERT for Natural Language Understanding

[16] ERNIE (Tsinghua version)
Paper: ERNIE: Enhanced Language Representation with Informative Entities
Notes: Paper notes – ERNIE: Enhanced Language Representation with Informative Entities
