Convolutional Neural Networks (CNNs / ConvNets): Translation, Part 2

Translated on January 3, 2017, 10:15:40

Architecture Overview


Recall: Regular Neural Nets. As we saw in the previous chapter, Neural Networks receive an input (a single vector), and transform it through a series of hidden layers. Each hidden layer is made up of a set of neurons, where each neuron is fully connected to all neurons in the previous layer, and where neurons in a single layer function completely independently and do not share any connections. The last fully-connected layer is called the “output layer” and in classification settings it represents the class scores.
Regular neural networks (recap): As we saw in the previous chapter, a neural network receives an input (a single vector) and transforms it through a series of hidden layers. Each hidden layer consists of a set of neurons, each of which is fully connected to all neurons in the previous layer, while neurons within the same layer operate completely independently and share no connections. The last fully-connected layer is called the "output layer", and in a classification setting it represents the class scores.
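As a rough illustrative sketch (not part of the original notes), the forward pass of such a fully-connected network over a flattened CIFAR-10 image might look like the following in numpy; the hidden-layer size of 100 and the weight scale are arbitrary placeholders:

```python
import numpy as np

def relu(x):
    # elementwise non-linearity applied after the hidden layer
    return np.maximum(0, x)

rng = np.random.default_rng(0)
# 3072-d input (a flattened 32x32x3 image) -> 100 hidden units -> 10 class scores
W1, b1 = 0.01 * rng.standard_normal((100, 3072)), np.zeros(100)
W2, b2 = 0.01 * rng.standard_normal((10, 100)), np.zeros(10)

x = rng.standard_normal(3072)   # stand-in for one flattened image
h = relu(W1 @ x + b1)           # every hidden neuron sees every input value
scores = W2 @ h + b2            # the output layer produces the 10 class scores
print(scores.shape)             # (10,)
```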

Regular Neural Nets don’t scale well to full images. In CIFAR-10, images are only of size 32x32x3 (32 wide, 32 high, 3 color channels), so a single fully-connected neuron in a first hidden layer of a regular Neural Network would have 32*32*3 = 3072 weights. This amount still seems manageable, but clearly this fully-connected structure does not scale to larger images. For example, an image of more respectable size, e.g. 200x200x3, would lead to neurons that have 200*200*3 = 120,000 weights. Moreover, we would almost certainly want to have several such neurons, so the parameters would add up quickly! Clearly, this full connectivity is wasteful and the huge number of parameters would quickly lead to overfitting.
Regular neural networks do not scale well to full images. In CIFAR-10, images are only 32x32x3 (32 wide, 32 high, 3 color channels), so a single fully-connected neuron in the first hidden layer of a regular neural network would have 32*32*3 = 3072 weights. This amount still seems manageable, but it is clear that this fully-connected structure does not scale to larger images. For example, an image of more respectable size, say 200x200x3, would lead to neurons that each have 200*200*3 = 120,000 weights, and we would almost certainly want several such neurons, so the parameter count would add up quickly. This full connectivity is therefore wasteful, and the huge number of parameters would quickly lead to overfitting.
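The weight counts quoted above are easy to verify; the small helper below is just a back-of-the-envelope check (one weight per input value for a single fully-connected neuron, biases ignored):

```python
def weights_per_neuron(width, height, channels):
    # one weight per input value for a single fully-connected neuron (bias not counted)
    return width * height * channels

print(weights_per_neuron(32, 32, 3))     # 3072   (a CIFAR-10 image)
print(weights_per_neuron(200, 200, 3))   # 120000 (a 200x200 RGB image)
```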
3D volumes of neurons. Convolutional Neural Networks take advantage of the fact that the input consists of images and they constrain the architecture in a more sensible way. In particular, unlike a regular Neural Network, the layers of a ConvNet have neurons arranged in 3 dimensions: width, height, depth. (Note that the word depth here refers to the third dimension of an activation volume, not to the depth of a full Neural Network, which can refer to the total number of layers in a network.) For example, the input images in CIFAR-10 are an input volume of activations, and the volume has dimensions 32x32x3 (width, height, depth respectively). As we will soon see, the neurons in a layer will only be connected to a small region of the layer before it, instead of all of the neurons in a fully-connected manner. Moreover, the final output layer would for CIFAR-10 have dimensions 1x1x10, because by the end of the ConvNet architecture we will reduce the full image into a single vector of class scores, arranged along the depth dimension. Here is a visualization:
3D volumes of neurons: Convolutional neural networks take advantage of the fact that the input consists of images, and they constrain the architecture in a more sensible way. In particular, unlike a regular neural network, the layers of a ConvNet have neurons arranged in three dimensions: width, height, and depth. (Note that "depth" here refers to the third dimension of an activation volume, not to the depth of the full network, which refers to the total number of layers.) For example, the input images in CIFAR-10 form an input volume of activations with dimensions 32x32x3 (width, height, and depth respectively). As we will soon see, the neurons in a layer are connected only to a small region of the layer before it, rather than to all neurons in a fully-connected manner. Moreover, the final output layer for CIFAR-10 would have dimensions 1x1x10, because by the end of the ConvNet architecture the full image is reduced to a single vector of class scores arranged along the depth dimension.
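A minimal sketch of the shapes involved, assuming a hypothetical intermediate conv/pool stage that is not prescribed by the notes, just to illustrate how activation volumes shrink from 32x32x3 toward a 1x1x10 output:

```python
import numpy as np

# Only the shapes matter here; the intermediate sizes (12 filters, 2x2 pooling)
# are hypothetical placeholders, not the architecture described in the notes.
input_volume  = np.zeros((32, 32, 3))   # CIFAR-10 input: width x height x depth
conv_volume   = np.zeros((32, 32, 12))  # a conv layer with 12 filters preserves width/height
pool_volume   = np.zeros((16, 16, 12))  # a 2x2 pooling layer halves width and height
output_volume = np.zeros((1, 1, 10))    # the final volume: 10 class scores along depth

for name, v in [("input", input_volume), ("conv", conv_volume),
                ("pool", pool_volume), ("output", output_volume)]:
    print(name, v.shape)
```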


In short, these paragraphs tell us that the architecture of a regular neural network is too bloated and has a natural disadvantage when processing images, whereas the architecture of a convolutional neural network is inherently well suited to images.
