How to Fine-Tune BERT for Text Classification?

The paper investigates different fine-tuning methods of BERT on the text classification task and provides a general solution for BERT fine-tuning. Its main experimental findings are:

1) The top layer of BERT is more useful for text classification;

2) With an appropriate layer-wise decreasing learning rate, BERT can overcome the catastrophic forgetting problem (see the sketch after this list);

3) Within-task and in-domain further pre-training can significantly boost its performance;

4) A preceding multi-task fine-tuning is also helpful to the single-task fine-tuning, but its benefit is smaller than further pre-training;

5) BERT can improve performance on tasks with small-size data.
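Finding 2 is the most directly actionable. Below is a minimal sketch of layer-wise learning-rate decay for BERT fine-tuning; it assumes the Hugging Face `transformers` BERT implementation, and the decay factor (0.95) and base learning rate (2e-5) follow the values used in the paper, so they may need tuning for other tasks.

```python
# Minimal sketch: layer-wise learning-rate decay for BERT fine-tuning.
# Assumes the Hugging Face `transformers` BERT model; decay factor and
# base learning rate follow the paper's settings (0.95, 2e-5).
import torch
from transformers import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

base_lr = 2e-5  # learning rate for the top encoder layer and classifier head
decay = 0.95    # each lower layer gets lr * decay relative to the layer above

param_groups = [
    # Classifier head and pooler are trained with the full base learning rate.
    {"params": model.classifier.parameters(), "lr": base_lr},
    {"params": model.bert.pooler.parameters(), "lr": base_lr},
]

# Encoder layers: the top layer gets base_lr, the one below gets base_lr * decay, etc.
num_layers = len(model.bert.encoder.layer)  # 12 for bert-base
for i, layer in enumerate(model.bert.encoder.layer):
    lr = base_lr * (decay ** (num_layers - 1 - i))
    param_groups.append({"params": layer.parameters(), "lr": lr})

# Embeddings sit below the lowest encoder layer, so they get the smallest rate.
param_groups.append(
    {"params": model.bert.embeddings.parameters(),
     "lr": base_lr * (decay ** num_layers)}
)

optimizer = torch.optim.AdamW(param_groups, weight_decay=0.01)
```

Each encoder layer gets its own optimizer parameter group, so the layers near the output (which carry the more task-specific features, per finding 1) are updated faster than the lower, more general layers, which helps avoid catastrophic forgetting.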

Drawing heavily on the ideas of ULMFiT, the paper designs a series of further pre-training and fine-tuning strategies, which can be grouped by the scope of the corpus used (a sketch of the further pre-training step follows the list):
(1) Direct task-specific fine-tuning
(2) In-domain further pre-training + fine-tuning
(3) In-domain further pre-training + multi-task fine-tuning
(4) In-domain plus out-of-domain (cross-domain) further pre-training + fine-tuning
(5) In-domain plus out-of-domain further pre-training + multi-task fine-tuning
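Strategies (2) through (5) all begin with a further pre-training step on unlabeled in-domain (or cross-domain) text before the task-specific fine-tuning. A minimal sketch of that step, assuming the Hugging Face `transformers` Trainer API and a hypothetical `domain_texts` list of unlabeled in-domain sentences, could look like this:

```python
# Minimal sketch: masked-LM "further pre-training" on in-domain text,
# assuming the Hugging Face Trainer API. `domain_texts` is a hypothetical
# placeholder for your unlabeled in-domain corpus.
import torch
from transformers import (
    BertForMaskedLM,
    BertTokenizerFast,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

domain_texts = ["example in-domain sentence one.", "example in-domain sentence two."]
encodings = tokenizer(domain_texts, truncation=True, max_length=128)

class TextDataset(torch.utils.data.Dataset):
    """Wraps tokenized unlabeled text for the Trainer."""
    def __init__(self, enc):
        self.enc = enc
    def __len__(self):
        return len(self.enc["input_ids"])
    def __getitem__(self, idx):
        return {k: v[idx] for k, v in self.enc.items()}

# The collator randomly masks 15% of tokens, as in the original BERT MLM objective.
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="further-pretrained-bert",
        num_train_epochs=1,
        per_device_train_batch_size=8,
    ),
    train_dataset=TextDataset(encodings),
    data_collator=collator,
)
trainer.train()
trainer.save_model("further-pretrained-bert")
```

The saved checkpoint can then be loaded into `BertForSequenceClassification` for the single-task or multi-task fine-tuning stage.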

Reference

https://blog.csdn.net/guofei_fly/article/details/105409440
