A 0.5% Selection Rate! Exclusive Interviews with Authors of Hot NeurIPS 2019 Papers

NeurIPS 2019 drew more than 13,000 attendees, with a paper acceptance rate of 21.17% and only 36 papers earning a 15-minute oral presentation. Robin.ly interviewed 20 of these authors, including the authors of three hot MIT papers on object recognition, the robustness of principal component regression, and kernel instrumental variable regression. Together the works cover brain-inspired object recognition, handling noisy and missing data in principal component regression, and causal inference for nonlinear relationships.


Robin.ly is a Silicon Valley-based video content platform serving engineers and researchers worldwide. Through in-depth conversations and live events with leading AI scientists, entrepreneurs, investors, and executives, it shares industry trends and business skills to help build well-rounded, competitive talent.

On December 8, 2019, NeurIPS (the Conference on Neural Information Processing Systems), the largest top-tier conference in neural computation and machine learning, opened in Vancouver, Canada. Registration demand was so high this year that tickets were allocated by lottery. According to official conference statistics, total attendance exceeded 13,000, up nearly 50% from 2018.

Paper submissions also hit a record high: of 6,743 valid submissions, 1,428 were accepted, an acceptance rate of 21.17%. Only 36 of those papers earned a 15-minute oral presentation, a selection rate of just 0.53%! On site, Robin.ly invited 20 of these authors to give quick rundowns of their papers' highlights and application scenarios.

Today we begin with three hot papers from MIT, each in a different area:

  1. Brain-Like Object Recognition with High-Performing Shallow Recurrent ANNs

  2. On Robustness of Principal Component Regression

  3. Kernel Instrumental Variable Regression

We also sat down for in-depth conversations with Turing Award winner and deep learning pioneer Yoshua Bengio, the authors of the conference's two best papers, and several other leading AI researchers. More highlights are coming soon; follow our account, Robinly, for updates!


1

  Brain-Like Object Recognition

Paper: Brain-Like Object Recognition with High-Performing Shallow Recurrent ANNs

Authors: Jonas Kubilius, Martin Schrimpf, et al.

Affiliations: MIT, Bay Labs, New York University, Columbia University, Stanford University

Paper link:

https://arxiv.org/abs/1909.06161

Martin Schrimpf is a PhD student at MIT advised by Professor James J. DiCarlo, working at the intersection of neuroscience and machine learning. In the paper, the authors propose Brain-Score, a benchmark for quantitatively comparing artificial models against the brain, and, guided by how the primate brain works, design CORnet-S, a shallow artificial neural network with recurrent connections that achieves object recognition behavior resembling the brain's visual system.

Abstract: Deep convolutional artificial neural networks (ANNs) are the leading class of candidate models of the mechanisms of visual processing in the primate ventral stream. While initially inspired by brain anatomy, over the past years, these ANNs have evolved from a simple eight-layer architecture in AlexNet to extremely deep and branching architectures, demonstrating increasingly better object categorization performance, yet bringing into question how brain-like they still are. In particular, typical deep models from the machine learning community are often hard to map onto the brain's anatomy due to their vast number of layers and missing biologically-important connections, such as recurrence. Here we demonstrate that better anatomical alignment to the brain and high performance on machine learning as well as neuroscience measures do not have to be in contradiction. We developed CORnet-S, a shallow ANN with four anatomically mapped areas and recurrent connectivity, guided by Brain-Score, a new large-scale composite of neural and behavioral benchmarks for quantifying the functional fidelity of models of the primate ventral visual stream. Despite being significantly shallower than most models, CORnet-S is the top model on Brain-Score and outperforms similarly compact models on ImageNet. Moreover, our extensive analyses of CORnet-S circuitry variants reveal that recurrence is the main predictive factor of both Brain-Score and ImageNet top-1 performance. Finally, we report that the temporal evolution of the CORnet-S "IT" neural population resembles the actual monkey IT population dynamics. Taken together, these results establish CORnet-S, a compact, recurrent ANN, as the current best model of the primate ventral visual stream.

Image credit: Martin Schrimpf
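To make the idea concrete, here is a minimal PyTorch-style sketch of a shallow network whose "areas" reuse the same convolutional weights over several time steps. It is our illustration under assumed layer sizes and step counts, not the authors' released CORnet-S implementation; the module name `RecurrentArea` and all hyperparameters are hypothetical.

```python
# A minimal sketch (NOT the authors' CORnet-S code) of weight-shared
# recurrence: each "area" reprocesses its own output for a few time steps
# before passing it on, keeping the overall network shallow.
import torch
import torch.nn as nn

class RecurrentArea(nn.Module):
    """One anatomically inspired 'area': a conv block unrolled in time."""
    def __init__(self, in_ch, out_ch, steps=2):
        super().__init__()
        self.input_conv = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        # The same conv weights are applied at every time step (recurrence).
        self.recur_conv = nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1)
        self.norm = nn.BatchNorm2d(out_ch)
        self.steps = steps

    def forward(self, x):
        h = torch.relu(self.input_conv(x))
        for _ in range(self.steps):  # unrolled recurrent computation
            h = torch.relu(self.norm(self.recur_conv(h)))
        return h

# Four areas loosely mirroring the ventral stream V1 -> V2 -> V4 -> IT,
# followed by a linear readout, e.g. over 1000 ImageNet classes.
model = nn.Sequential(
    RecurrentArea(3, 64), nn.MaxPool2d(2),
    RecurrentArea(64, 128), nn.MaxPool2d(2),
    RecurrentArea(128, 256), nn.MaxPool2d(2),
    RecurrentArea(256, 512), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(512, 1000),
)
print(model(torch.randn(1, 3, 64, 64)).shape)  # torch.Size([1, 1000])
```

Per the paper's ablations, it is exactly this recurrence that best predicts both Brain-Score and ImageNet top-1 performance.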

2

  Robustness of Principal Component Regression

Paper: On Robustness of Principal Component Regression

Authors: Anish Agarwal, Devavrat Shah, Dennis Shen, Dogyoon Song

Affiliation: MIT

Paper link:

https://arxiv.org/abs/1902.10920

Anish Agarwal is a third-year PhD student in Electrical Engineering and Computer Science at MIT, working on high-dimensional statistics and the design of data marketplaces. Through rigorous theoretical analysis, this paper establishes that principal component regression (PCR), a method widely used in industry, is highly robust to noisy, mixed (discrete and continuous), and missing covariates, outperforming ordinary linear regression in such settings. As the growing demand for user privacy produces ever more incomplete data, methods of this kind are becoming increasingly important.

Abstract: Consider the setting of Linear Regression where the observed response variables, in expectation, are linear functions of the p-dimensional covariates. Then to achieve vanishing prediction error, the number of required samples scales faster than pσ², where σ² is a bound on the noise variance. In a high-dimensional setting where p is large but the covariates admit a low-dimensional representation (say r ≪ p), then Principal Component Regression (PCR), cf. [36], is an effective approach; here, the response variables are regressed with respect to the principal components of the covariates. The resulting number of required samples to achieve vanishing prediction error now scales faster than rσ² (≪ pσ²). Despite the tremendous utility of PCR, its ability to handle settings with noisy, missing, and mixed (discrete and continuous) valued covariates is not understood and remains an important open challenge, cf. [24]. As the main contribution of this work, we address this challenge by rigorously establishing that PCR is robust to noisy, sparse, and possibly mixed valued covariates. Specifically, under PCR, vanishing prediction error is achieved with the number of samples scaling as r·max(σ², ρ⁻⁴ log⁵(p)), where ρ denotes the fraction of observed (noisy) covariates. We establish generalization error bounds on the performance of PCR, which provides a systematic approach in selecting the correct number of components r in a data-driven manner. The key to our result is a simple, but powerful equivalence between (i) PCR and (ii) Linear Regression with covariate pre-processing via Hard Singular Value Thresholding (HSVT). From a technical standpoint, this work advances the state-of-the-art analysis for HSVT by establishing stronger guarantees with respect to the ∥·∥_{2,∞}-error for the estimated matrix rather than the Frobenius norm/mean-squared error (MSE) as is commonly done in the matrix estimation / completion literature.

Image credit: Anish Agarwal
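The PCR-HSVT equivalence at the heart of the paper is easy to see numerically. The NumPy sketch below is our illustration on assumed synthetic data (low-rank covariates, additive noise, a fraction ρ of entries observed); the rescaling by ρ and all constants are illustrative choices, not the paper's code.

```python
# A minimal sketch of the paper's key equivalence: PCR is linear regression
# after denoising the covariates by Hard Singular Value Thresholding (HSVT).
import numpy as np

rng = np.random.default_rng(0)
n, p, r = 500, 100, 5  # samples, ambient dimension, latent rank
X_true = rng.normal(size=(n, r)) @ rng.normal(size=(r, p))  # low-rank covariates
beta = rng.normal(size=p)
y = X_true @ beta + 0.1 * rng.normal(size=n)

# Observe each entry independently with probability rho, plus noise;
# dividing by rho makes the observed matrix unbiased for X_true.
rho = 0.7
mask = rng.random((n, p)) < rho
X_obs = np.where(mask, X_true + 0.5 * rng.normal(size=(n, p)), 0.0) / rho

# HSVT: keep only the top-r singular directions of the observed matrix.
U, s, Vt = np.linalg.svd(X_obs, full_matrices=False)
X_hat = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

# Ordinary least squares on the denoised covariates == PCR with r components.
beta_hat = np.linalg.lstsq(X_hat, y, rcond=None)[0]
print("in-sample RMSE:", np.sqrt(np.mean((X_hat @ beta_hat - y) ** 2)))
```

In the paper, the number of retained components r is chosen in a data-driven way via the generalization bounds; here it is fixed for simplicity.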

3

  Kernel Instrumental Variable Regression

Paper: Kernel Instrumental Variable Regression

Authors: Rahul Singh, Maneesh Sahani, Arthur Gretton

Affiliations: MIT, University College London

Paper link:

https://arxiv.org/abs/1906.00232

Rahul Singh is a third-year PhD student in economics and statistics at MIT, focusing on causal inference and statistical learning theory. Combining tools from econometrics and machine learning, the paper proposes kernel instrumental variable regression (KIV), a method for estimating nonlinear causal relationships. The algorithm can be implemented in just three lines of code, yet has potential applications to confounded data, such as market demand analysis and clinical trials with imperfect compliance.

Abstract: Instrumental variable (IV) regression is a strategy for learning causal relationships in observational data. If measurements of input X and output Y are confounded, the causal relationship can nonetheless be identified if an instrumental variable Z is available that influences X directly, but is conditionally independent of Y given X and the unmeasured confounder. The classic two-stage least squares algorithm (2SLS) simplifies the estimation problem by modeling all relationships as linear functions. We propose kernel instrumental variable regression (KIV), a nonparametric generalization of 2SLS, modeling relations among X, Y, and Z as nonlinear functions in reproducing kernel Hilbert spaces (RKHSs). We prove the consistency of KIV under mild assumptions, and derive conditions under which convergence occurs at the minimax optimal rate for unconfounded, single-stage RKHS regression. In doing so, we obtain an efficient ratio between training sample sizes used in the algorithm's first and second stages. In experiments, KIV outperforms state of the art alternatives for nonparametric IV regression.

Image credit: Rahul Singh
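The "three lines" amount to two regularized kernel least-squares solves plus a prediction. Below is a minimal NumPy sketch of that two-stage structure, under assumptions we introduce purely for illustration: a toy confounded data-generating process, RBF kernels, hand-picked regularizers lam1/lam2, and (unlike the paper's sample-split analysis) the same sample reused in both stages.

```python
# A minimal sketch (our paraphrase, not the authors' code) of kernel IV:
# stage 1 learns E[features of X | Z] by kernel ridge regression on the
# instrument Z; stage 2 regresses y on those fitted features.
import numpy as np

def rbf(A, B, s=1.0):
    """Gaussian (RBF) kernel matrix between rows of A and rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * s**2))

rng = np.random.default_rng(0)
n = 300
e = rng.normal(size=n)                        # unobserved confounder
Z = rng.uniform(-3, 3, size=(n, 1))           # instrument: moves X, not y
X = Z + 0.5 * e[:, None] + 0.1 * rng.normal(size=(n, 1))
y = np.sin(X[:, 0]) + e + 0.1 * rng.normal(size=n)  # confounded outcome

lam1, lam2 = 1e-3, 1e-3                       # regularizers (illustrative)
Kxx, Kzz = rbf(X, X), rbf(Z, Z)
# Stage 1: conditional mean embedding of X given Z (kernel ridge in Z).
W = Kxx @ np.linalg.solve(Kzz + n * lam1 * np.eye(n), Kzz)
# Stage 2: ridge-regress y on the stage-1 fitted features.
alpha = np.linalg.solve(W @ W.T + n * lam2 * Kxx, W @ y)

x_test = np.linspace(-3, 3, 5)[:, None]
print(np.round(rbf(x_test, X) @ alpha, 2))    # estimated structural f(x)
print(np.round(np.sin(x_test[:, 0]), 2))      # ground truth for comparison
```

The paper additionally derives how large the stage-1 sample should be relative to stage 2 for the estimator to attain the minimax optimal rate.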

Follow the Robin.ly “Leaders In AI” Podcast to hear the full English interviews.


Related reading

Stanford AI Lab Director Chris Manning: My First CVPR

Authors from Fei-Fei Li's team, the Weinberger group at Cornell, and the University of Michigan discuss their latest hot CVPR papers

Interview with the CVPR 2019 Best Paper winners: a theory of Fermat paths for non-line-of-sight shape reconstruction

CVPR 2019 hit-paper authors explain on site: vision-and-language navigation, depth prediction from motion video, 6D pose estimation
