How does deep residual learning work?


The Deep Residual Learning network is an intriguing architecture developed by researchers at Microsoft Research. The results are impressive: it took first place in the ILSVRC 2015 image classification competition. The winning network had 152 layers, an impressive 8 times deeper than a comparable VGG network. This is a snapshot from the paper (http://arxiv.org/pdf/1512.03385v...) comparing their network with a similarly constructed VGG convolutional network:
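To make the mechanism concrete, here is a minimal sketch of a residual block using plain NumPy with fully connected layers. The real network uses convolutional layers, and the weight shapes, random initialization, and exact ReLU placement here are simplified assumptions for illustration:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, W1, W2):
    # The stacked layers learn a residual F(x); the block outputs
    # relu(F(x) + x), with the identity shortcut carrying x forward.
    f = relu(x @ W1)    # first weight layer + nonlinearity
    f = f @ W2          # second weight layer
    return relu(f + x)  # add the shortcut, then the final nonlinearity

# Toy usage with random weights (illustration only):
rng = np.random.default_rng(0)
x = rng.standard_normal(64)
W1 = rng.standard_normal((64, 64)) * 0.1
W2 = rng.standard_normal((64, 64)) * 0.1
y = residual_block(x, W1, W2)
```

The key point is that the stacked layers only need to learn a correction F(x) to their input, while the identity shortcut gives the signal (and its gradient) an unobstructed path through the whole stack.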

Jürgen Schmidhuber, however, claims that it is the same thing as an LSTM without gates (see: Microsoft Wins ImageNet 2015 through Feedforward LSTM without Gates). This does seem accurate if you take a look at what an LSTM node looks like:

In other words, the inputs of a lower layer are made available to a node in a higher layer. The difference, of course, is that Microsoft's Residual Network, when applied to image classification tasks, employs convolutional processing layers in its construction. Schmidhuber's research group has published results for "Highway Networks" (http://arxiv.org/pdf/1507.06228v...) with depths up to 100 layers.
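The relationship is easiest to see side by side. Below is a hedged sketch, again with dense layers rather than the convolutional layers the papers actually use: a highway layer interpolates between the transformed signal H(x) and the raw input x via a learned gate T(x), and dropping the gate leaves exactly the residual form y = H(x) + x:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def highway_layer(x, Wh, Wt, bt):
    # Highway layer: y = H(x) * T(x) + x * (1 - T(x)).
    # The learned gate T decides, per unit, how much transformed
    # signal versus raw input flows through.
    H = np.tanh(x @ Wh)
    T = sigmoid(x @ Wt + bt)
    return H * T + x * (1.0 - T)

def ungated_layer(x, Wh):
    # Remove the gate entirely and you get y = H(x) + x,
    # i.e. the residual connection Schmidhuber is pointing at.
    return np.tanh(x @ Wh) + x
```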

However, despite the similarities of LSTMs and Highway Networks to the Residual Network, the results are impressive in that they show state-of-the-art accuracy from a very deep neural network of 152 layers. A recent paper from the Weizmann Institute of Science (http://arxiv.org/pdf/1512.03965....) gives a mathematical proof of the utility of deeper networks over wider ones. The implication of these three results is that future Deep Learning progress will lead to the development of even deeper networks.

Google's GoogLeNet, published in late 2014, has 22 layers. Two generations later, Google mentioned its Inception 7 network, which had 50+ layers.

In all of the Residual, Highway, and Inception networks, you will notice that the same inputs travel along multiple paths through different numbers of layers.
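This observation can be made precise with an "unraveled view" of the residual case: since each block computes y = x + F(x), a copy of the input can either skip or enter every block, so n blocks contain 2^n implicit paths of varying length. A tiny sketch that enumerates the path lengths (a toy illustration, not code from the papers above):

```python
from itertools import product

def path_lengths(n_blocks):
    # Each copy of the input either skips a block (0) or passes
    # through its residual function (1), so n blocks induce 2**n
    # distinct paths whose lengths range from 0 to n.
    return sorted(sum(p) for p in product((0, 1), repeat=n_blocks))

print(path_lengths(3))  # [0, 1, 1, 1, 2, 2, 2, 3]
```

This is also the picture behind the ensemble interpretation mentioned in the update below.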

The trend is pretty clear: deeper neural networks are not only more accurate, they also require fewer weights.

Update: Two recent papers have shown (1) Residual Nets being equivalent to RNNs and (2) Residual Nets acting more like ensembles across several layers.


Original source: https://www.quora.com/How-does-deep-residual-learning-work
