AlexNet Paper Reading Notes 06

7 Discussion

“7 Discussion” (Krizhevsky et al., 2017, p. 8)

“Our results show that a large, deep convolutional neural network is capable of achieving record-breaking results on a highly challenging dataset using purely supervised learning. It is notable that our network’s performance degrades if a single convolutional layer is removed. For example, removing any of the middle layers results in a loss of about 2% for the top-1 performance of the network. So the depth really is important for achieving our results.” (Krizhevsky et al., 2017, p. 8)

“To simplify our experiments, we did not use any unsupervised pre-training even though we expect that it will help, especially if we obtain enough computational power to significantly increase the size of the network without obtaining a corresponding increase in the amount of labeled data. Thus far, our results have improved as we have made our network larger and trained it longer but we still have many orders of magnitude to go in order to match the infero-temporal pathway of the human visual system. Ultimately we would like to use very large and deep convolutional nets on video sequences where the temporal structure provides very helpful information that is missing or far less obvious in static images.” (Krizhevsky et al., 2017, p. 8)

Interpretation

(1) Network depth matters: removing any single middle convolutional layer costs about 2% in top-1 performance.

(2) The network was trained purely with supervised learning, without any unsupervised pre-training.

(3) The authors ultimately hope to apply even larger, deeper convolutional networks to video sequences, where temporal structure provides information that is missing or far less obvious in static images.
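To make the depth-ablation point in (1) concrete, here is a minimal sketch that tallies approximate weight counts for AlexNet's five convolutional layers and simulates removing one middle layer. This is illustrative only, not the paper's code: the layer shapes follow the paper's architecture, but the counts ignore the two-GPU split described there, and the rule for rewiring the next layer's input channels after a removal is an assumption of this sketch.

```python
# Illustrative sketch (not the paper's code): approximate per-layer weight
# counts for AlexNet's five conv layers, ignoring the two-GPU split.
CONV_LAYERS = [
    # (name, in_channels, out_channels, kernel_size)
    ("conv1",   3,  96, 11),
    ("conv2",  96, 256,  5),
    ("conv3", 256, 384,  3),
    ("conv4", 384, 384,  3),
    ("conv5", 384, 256,  3),
]

def conv_weights(in_ch, out_ch, k):
    """Weights plus biases of one k x k convolutional layer."""
    return out_ch * in_ch * k * k + out_ch

def total_params(layers):
    """Total conv parameters for a list of layer configs."""
    return sum(conv_weights(i, o, k) for _, i, o, k in layers)

def remove_layer(layers, name):
    """Drop one layer; rewire the next layer's input channels (an
    assumption of this sketch) so the stack stays connected."""
    idx = next(i for i, layer in enumerate(layers) if layer[0] == name)
    new = layers[:idx] + layers[idx + 1:]
    if idx < len(new):
        n, _, o, k = new[idx]
        new[idx] = (n, layers[idx][1], o, k)
    return new

print(total_params(CONV_LAYERS))                         # 3747200
print(total_params(remove_layer(CONV_LAYERS, "conv3")))  # 2419712
```

The paper's observation is that the ~2% top-1 loss from removing any such middle layer is not explained by the lost capacity alone; the depth itself matters.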
