Improving neural networks by preventing co-adaptation of feature detectors (Translation)

F Convolutional Neural Networks

Second, each neuron in a CNN applies its filter only to a small local neighborhood of the input, and the neurons are arranged topographically. This is sensible for datasets such as natural images, where we expect the dependence between input dimensions (pixels) to be a decreasing function of the distance between them. In particular, we expect that useful clues to the identity of the object in an input image can be found by examining small local neighborhoods of the image. Third, all neurons in a group apply the same filter, but, as just mentioned, they apply it at different positions in the input image. This is sensible for datasets with roughly stationary statistics, such as natural images: we expect the same kinds of structure to appear at all positions in an input image, so it is reasonable to treat all positions equally by filtering them in the same way. Viewed this way, a group of neurons in a CNN applies a convolution operation to its input. A single layer of a CNN typically has multiple groups of neurons, each performing a convolution with a different filter; these groups form the distinct input channels of the next layer. The distance, in pixels, between the receptive-field boundaries of neighboring neurons in a convolutional group determines the stride of the convolution; a larger stride means fewer neurons per group. Our models use a stride of one pixel unless otherwise noted.
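The scheme described above — one shared filter per group of neurons, slid across the input with a configurable stride, one output channel per filter — can be sketched as follows. This is a minimal NumPy illustration of the idea, not the paper's actual implementation; the function name `conv2d` and the toy sizes are ours:

```python
import numpy as np

def conv2d(image, filters, stride=1):
    """Naive valid convolution: each filter is slid over the image,
    producing one output channel per filter group."""
    fh, fw = filters.shape[1], filters.shape[2]
    oh = (image.shape[0] - fh) // stride + 1
    ow = (image.shape[1] - fw) // stride + 1
    out = np.zeros((filters.shape[0], oh, ow))
    for k, f in enumerate(filters):          # one group of neurons per filter
        for i in range(oh):
            for j in range(ow):
                patch = image[i*stride:i*stride+fh, j*stride:j*stride+fw]
                out[k, i, j] = np.sum(patch * f)  # same filter at every position
    return out

image = np.arange(25, dtype=float).reshape(5, 5)
filters = np.stack([np.ones((3, 3)), np.eye(3)])   # two filter groups -> two channels
print(conv2d(image, filters, stride=1).shape)  # (2, 3, 3)
print(conv2d(image, filters, stride=2).shape)  # (2, 2, 2): larger stride, fewer neurons
```

Note how the stride only changes how densely the filter is applied, not the number of parameters: both calls use the same two 3×3 filters.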

An important consequence of this convolutional, shared-filter structure is a drastic reduction in the number of parameters relative to a network in which every neuron applies its own filter. This reduces the network's representational capacity, but it also reduces its capacity to overfit, so dropout is far less advantageous in convolutional layers.
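The size of this parameter saving is easy to make concrete. The arithmetic below is our own illustration with hypothetical sizes, not figures from the paper:

```python
# Hypothetical sizes chosen for illustration only.
input_h, input_w = 32, 32
filt_h, filt_w = 5, 5
out_h, out_w = input_h - filt_h + 1, input_w - filt_w + 1  # 28 x 28 map, stride 1

# Every output neuron with its own local 5x5 filter:
per_neuron = out_h * out_w * filt_h * filt_w   # 28 * 28 * 25 = 19600 parameters
# All neurons in the group sharing one 5x5 filter:
shared = filt_h * filt_w                       # 25 parameters
print(per_neuron, shared)  # 19600 25
```

Even for this small input, filter sharing cuts the per-group parameter count by a factor of several hundred, which is why convolutional layers overfit far less than fully connected ones.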

F.1 Pooling

CNNs typically also feature "pooling" layers, which summarize the activities of local patches of neurons in the convolutional layers. Essentially, a pooling layer takes the output of a convolutional layer as input and subsamples it. A pooling layer consists of many pooling units, laid out topographically, each connected to a local neighborhood of the outputs of one group of convolutional units. Each pooling unit then computes some function of that neighborhood's outputs; typical choices are the maximum and the average, and pooling layers with such units are called max-pooling and average-pooling layers, respectively. Pooling units are usually spaced at least several pixels apart, so there are fewer pooling units in total than convolutional-unit outputs in the layer below. Making this spacing smaller than the size of the neighborhood that a pooling unit summarizes produces overlapping pooling. This variant makes the pooling layer produce a coarser coding of the convolutional-unit outputs, which we have found helps generalization in our experiments. We refer to this spacing as the stride between pooling units, analogous to the stride between convolutional units. Pooling layers introduce local translation invariance into the network, which improves generalization. They are the analogue of the complex cells of the mammalian visual cortex, which pool the activities of multiple simple cells; complex cells are known to exhibit similar phase-invariance properties.
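The pooling operation above, including the overlapping variant (stride smaller than the neighborhood size), can be sketched like this. A minimal NumPy illustration with our own function name and toy sizes:

```python
import numpy as np

def max_pool(fmap, size=3, stride=2):
    """Max pooling: each pooling unit outputs the maximum over a
    size x size neighborhood; stride < size gives overlapping pooling."""
    oh = (fmap.shape[0] - size) // stride + 1
    ow = (fmap.shape[1] - size) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = fmap[i*stride:i*stride+size, j*stride:j*stride+size].max()
    return out

fmap = np.arange(16, dtype=float).reshape(4, 4)
print(max_pool(fmap, size=2, stride=2))  # non-overlapping: [[5, 7], [13, 15]]
print(max_pool(fmap, size=3, stride=1))  # overlapping: neighborhoods share pixels
```

Replacing `.max()` with `.mean()` turns this into average pooling; everything else is unchanged.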

F.2 Local response normalization

Our networks also include response-normalization layers. This type of layer encourages competition for large activations among neurons belonging to different groups.
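One common form of such cross-channel competition divides each activation by a term that grows with the squared activations of neighboring channels at the same spatial position. The sketch below follows the normalization scheme later popularized by Krizhevsky et al.; the paper does not give these exact constants, so `n`, `k`, `alpha`, and `beta` are illustrative assumptions:

```python
import numpy as np

def local_response_norm(acts, n=2, k=1.0, alpha=1e-4, beta=0.75):
    """Cross-channel response normalization (illustrative constants):
    each activation is damped by the squared activations of up to n
    neighboring channels on each side, so channels compete."""
    C = acts.shape[0]
    out = np.empty_like(acts)
    for c in range(C):
        lo, hi = max(0, c - n), min(C, c + n + 1)
        denom = (k + alpha * np.sum(acts[lo:hi] ** 2, axis=0)) ** beta
        out[c] = acts[c] / denom
    return out

acts = np.ones((4, 3, 3))          # 4 channels of 3x3 feature maps
normed = local_response_norm(acts)
print(normed.shape)  # (4, 3, 3): same shape, slightly damped values
```

Because the denominator is always greater than `k` whenever any neighboring channel is active, a unit's output shrinks when its neighbors fire strongly, which is exactly the competition described above.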

