Google researcher Nan Ding presents | CausalLM is not optimal for in-context learning


AI TIME welcomes every AI enthusiast!


Bilibili livestream: scan the QR code to follow the official AI TIME Bilibili account and reserve a spot for the live session.

November 16, 20:00–21:00

Bio

Nan Ding

Dr. Nan Ding is a research scientist at Google Research. He received his Bachelor's degree in Electrical Engineering from Tsinghua University in 2008 and his Ph.D. in Computer Science from Purdue University in 2013. He has published over 40 papers in machine learning and quantum computation, appearing in top conferences and journals including NeurIPS, ICML, CVPR, ICCV, ECCV, ACL, and Nature Physics.

Title

CausalLM is not optimal for in-context learning

Abstract

Recent empirical evidence indicates that transformer-based in-context learning performs better with a prefix language model (prefixLM), in which all in-context samples can attend to each other, than with a causal language model (causalLM), whose auto-regressive attention prohibits in-context samples from attending to future samples. While this result is intuitive, it is not understood from a theoretical perspective. In this paper we take a theoretical approach and analyze the convergence behavior of prefixLM and causalLM under a certain parameter construction. Our analysis shows that both LM types converge to their stationary points at a linear rate, but that while prefixLM converges to the optimal solution of linear regression, the convergence dynamics of causalLM follows that of an online gradient descent algorithm, which is not guaranteed to be optimal even as the number of samples grows to infinity. We supplement our theoretical claims with empirical experiments on synthetic and real tasks, using various types of transformers. Our experiments verify that causalLM consistently underperforms prefixLM in all settings.
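The attention-pattern distinction the abstract describes can be sketched with boolean masks, where entry (i, j) means token i may attend to token j. This is an illustrative NumPy sketch, not the paper's code; the function names and the `prefix_len` parameter (marking the in-context segment) are our own.

```python
import numpy as np

def causal_mask(seq_len: int) -> np.ndarray:
    """CausalLM: token i attends only to tokens j <= i (lower-triangular)."""
    return np.tril(np.ones((seq_len, seq_len), dtype=bool))

def prefix_mask(seq_len: int, prefix_len: int) -> np.ndarray:
    """PrefixLM: the first `prefix_len` tokens (the in-context samples)
    attend to each other bidirectionally; later tokens remain causal."""
    mask = np.tril(np.ones((seq_len, seq_len), dtype=bool))
    mask[:prefix_len, :prefix_len] = True  # full attention within the prefix
    return mask

# With 3 in-context tokens in a length-5 sequence, token 0 can attend to
# token 2 under prefixLM but not under causalLM.
c = causal_mask(5)
p = prefix_mask(5, 3)
assert not c[0, 2] and p[0, 2]
```

The abstract's claim is that this one structural difference, letting early in-context samples see later ones, is what separates convergence to the optimal linear-regression solution (prefixLM) from online-gradient-descent-like dynamics (causalLM).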

Add the AI TIME assistant on WeChat (ID: AITIME_HY) and reply "大模型" to be added to the "AI TIME Large Model" discussion group!


About AI TIME

Founded in 2019, AI TIME aims to promote the spirit of scientific debate. It invites people from all walks of life to explore fundamental questions in AI theory, algorithms, and applications, encourages the exchange of ideas, and connects AI scholars, industry experts, and enthusiasts worldwide, using debate to examine the tensions between artificial intelligence and humanity's future and to explore where the field is headed.

To date, AI TIME has hosted more than 1,400 speakers from China and abroad across over 600 events, drawing more than 6 million views.

