Which excellent works in AI were rejected on their first submission? Rejected by what venue? How are they doing now, and what were the rejection reasons?

Source: https://www.zhihu.com/question/356973658

Editor: 深度学习与计算机视觉 (Deep Learning and Computer Vision)

Disclaimer: shared for academic purposes only; will be removed upon request.

Author: 郭沛
https://www.zhihu.com/question/356973658/answer/905749977

Computer vision has the SIFT feature, which dominated the field before deep learning, yet its author David Lowe has personally admitted that the original manuscript was rejected twice, by CVPR and ICCV:

I did submit papers on earlier versions of SIFT to both ICCV and CVPR (around 1997/98) and both were rejected. I then added more of a systems flavor and the paper was published at ICCV 1999, but just as a poster. By then I had decided the computer vision community was not interested, so I applied for a patent and intended to promote it just for industrial applications.

Another recent example is Rob Fergus's tiny images paper, which never did appear in a conference, but already has had a strong impact. I'm sure there are hundreds of other examples.

Another example comes from Alan Yuille: his mediocre papers get accepted as orals, while the work he actually cares about gets rejected repeatedly. He believes the paper-review system has broken down, because so many papers are submitted to top conferences each year that there are not enough reviewers to go around. These conferences encourage incremental innovation and short-term impact, and even reward papers that look good on the surface but have serious underlying flaws:

At present, my mediocre papers get accepted with oral presentations, while my interesting novel work gets rejected several times. By contrast, my journal reviewers are a lot slower but give much better comments and feedback. [....]

I think the current system is breaking down badly due to the enormous number of papers submitted to these meetings (NIPS, ICML, CVPR, ICCV, ECCV) and the impossibility of getting papers reviewed properly. The system encourages the wrong type of papers and encourages attention on short term results and minor variations of existing methods. Even worse it rewards papers which look superficially good but which fail to survive the more serious reviewing done by good journals (there have been serious flaws in some of the recent prize-winning computer vision papers).

All of the examples above were cited by Yann LeCun to argue for the importance of ICLR's open review. While there is no evidence that ICLR's reviewing quality is any better than other venues', at least the reviews are made public so that everyone can see the reality for themselves.

Author: Zhihu user
https://www.zhihu.com/question/356973658/answer/905604980

Layer Norm: nothing remarkable in terms of technical difficulty, but hugely influential on later research, and it has never been accepted at a conference. It has been cited over a thousand times since 2016.
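The method itself really is simple. As a hedged illustration (my own minimal NumPy sketch, not the paper's reference code): normalize each sample over its feature dimension, then apply a learned scale and shift.

```python
import numpy as np

def layer_norm(x, gamma, beta, eps=1e-5):
    # Per-sample statistics over the last (feature) axis -- unlike
    # batch norm, nothing depends on the batch dimension.
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    x_hat = (x - mean) / np.sqrt(var + eps)
    # Learned elementwise scale (gamma) and shift (beta).
    return gamma * x_hat + beta
```

With `gamma = 1` and `beta = 0`, each row of the output has zero mean and (approximately) unit variance, independently of the other samples in the batch.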

As for RMSProp, it is unclear whether it was never submitted or never accepted. Adam is only a modest refinement of RMSProp, and in many cases RMSProp performs no worse than Adam, yet to this day the method is described only in Hinton's lecture slides. Given that Adam has over thirty thousand citations, RMSProp can be considered an extremely high-impact result.
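To make the "modest refinement" concrete, here is a hedged sketch (my own single-step NumPy implementations, with hyperparameter defaults as assumptions) showing how close the two updates are: Adam adds a first-moment (momentum) average and bias correction on top of RMSProp's second-moment scaling.

```python
import numpy as np

def rmsprop_step(w, grad, v, lr=1e-3, rho=0.9, eps=1e-8):
    # RMSProp (from Hinton's lecture slides): scale the gradient by a
    # running RMS of recent gradient magnitudes.
    v = rho * v + (1 - rho) * grad**2
    w = w - lr * grad / (np.sqrt(v) + eps)
    return w, v

def adam_step(w, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    # Adam = RMSProp's second-moment scaling
    #      + an exponential average of the gradient itself (momentum)
    #      + bias correction for both running averages (t starts at 1).
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad**2
    m_hat = m / (1 - b1**t)
    v_hat = v / (1 - b2**t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v
```

Run side by side on a toy quadratic, both drive the parameter toward the minimum; the difference is literally the two extra lines.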

Author: LinT
https://www.zhihu.com/question/356973658/answer/904605994

Hinton's pioneering work on knowledge distillation: Distilling the Knowledge in a Neural Network.

It was rejected by NIPS 2014: https://twitter.com/OriolVinyalsML/status/1129420305246629899

(Oriol Vinyals is one of the paper's authors.)

This is an excellent paper, and personally one of my favorites. A simple idea that inspired a great deal of follow-up work, effectively opening a new research direction. As everyone knows, whenever neural-network model compression comes up today, the three standard methods are pruning, quantization, and distillation, which shows the significance of this work.
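The core idea above fits in a few lines: soften the teacher's logits with a temperature, and train the student on a mix of the soft teacher targets and the hard ground-truth labels. A hedged NumPy sketch of that loss (function names and the `alpha` mixing weight are my own illustration; the `T**2` scaling of the soft term follows the paper):

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    # Soft targets: cross-entropy between the teacher's and the student's
    # temperature-softened distributions, scaled by T**2 so gradient
    # magnitudes stay comparable as T changes.
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    soft = -(p_t * np.log(p_s + 1e-12)).sum(axis=-1).mean() * T**2
    # Hard targets: ordinary cross-entropy with the ground-truth labels.
    q = softmax(student_logits)
    hard = -np.log(q[np.arange(len(labels)), labels] + 1e-12).mean()
    return alpha * soft + (1 - alpha) * hard
```

A student whose logits match the teacher's incurs a lower loss than one that disagrees, which is what drives the transfer.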

As for the rejection reasons, I don't know; readers who do are welcome to chime in.

P.S. Good work getting rejected is actually quite common: some work is too far ahead of its time, some doesn't match the conference reviewers' taste, and some is rejected for even stranger reasons. Hinton has complained about this before, too (screenshot taken from the tutorial video: https://fcrc.acm.org/turing-lecture-at-fcrc-2019):
