AI TIME welcomes every AI enthusiast to join!
Bilibili live stream channel
Scan the QR code to follow the official AITIME Bilibili account and reserve the live stream
November 16
20:00-21:00
Bio
Nan Ding:
Dr. Nan Ding is a research scientist at Google Research. He obtained his Bachelor's degree in Electrical Engineering from Tsinghua University in 2008 and his Ph.D. in Computer Science from Purdue University in 2013. As a researcher, he has published over 40 papers in the fields of machine learning and quantum computation, appearing in top conferences and journals including NeurIPS, ICML, CVPR, ICCV, ECCV, ACL, and Nature Physics.
Title
CausalLM is not optimal for in-context learning
Abstract
Recent empirical evidence indicates that transformer-based in-context learning performs better with a prefix language model (prefixLM), in which all in-context samples can attend to each other, than with a causal language model (causalLM), whose auto-regressive attention prohibits in-context samples from attending to future samples. While this result is intuitive, it is not understood from a theoretical perspective. In this paper we take a theoretical approach and analyze the convergence behavior of prefixLM and causalLM under a certain parameter construction. Our analysis shows that both LM types converge to their stationary points at a linear rate, but that while prefixLM converges to the optimal solution of linear regression, the convergence dynamics of causalLM follow those of an online gradient descent algorithm, which is not guaranteed to be optimal even as the number of samples grows to infinity. We supplement our theoretical claims with empirical experiments on synthetic and real tasks, using various types of transformers. Our experiments verify that causalLM consistently underperforms prefixLM in all settings.
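To make the two attention patterns contrasted in the abstract concrete, here is a minimal NumPy sketch (illustrative only, not code from the paper; the function name and shapes are assumptions) that builds the two boolean attention masks: a fully auto-regressive causalLM mask, and a prefixLM mask whose prefix block, holding the in-context samples, is bidirectional.

```python
import numpy as np

def attention_mask(num_prefix: int, seq_len: int, causal: bool) -> np.ndarray:
    """mask[i, j] == True means position i may attend to position j.

    causal=True  -> causalLM: every position attends only to itself and earlier positions.
    causal=False -> prefixLM: the first `num_prefix` positions (the in-context samples)
                    attend to each other bidirectionally; later positions stay causal.
    """
    mask = np.tril(np.ones((seq_len, seq_len), dtype=bool))  # auto-regressive lower triangle
    if not causal:
        mask[:num_prefix, :num_prefix] = True  # prefix block is fully visible to itself
    return mask

# Example: 3 in-context samples followed by 2 query positions.
print(attention_mask(num_prefix=3, seq_len=5, causal=True).astype(int))   # causalLM mask
print(attention_mask(num_prefix=3, seq_len=5, causal=False).astype(int))  # prefixLM mask
```

The only difference between the two masks is the extra upper-triangular block inside the prefix; per the abstract, this is what allows prefixLM in-context learning to converge to the optimal linear-regression solution, whereas the strictly causal mask makes causalLM's dynamics resemble online gradient descent.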
Add the AI TIME assistant on WeChat (ID: AITIME_HY) and reply "大模型" to be added to the "AI TIME Large Model Discussion Group"!
About AI TIME
AI TIME was founded in 2019 with the aim of promoting the spirit of scientific inquiry and debate. It invites people from all fields to explore the fundamental questions of AI theory, algorithms, and applications, encourages the exchange and collision of ideas, and connects AI scholars, industry experts, and enthusiasts around the world. Through debate, it examines the tensions between artificial intelligence and the future of humanity and explores the future of the AI field.
To date, AI TIME has invited more than 1,400 speakers from China and abroad, held over 600 events, and drawn more than 6 million views.
Click "Read the original article" to reserve the live stream!