Reviewing the old to learn the new; these notes have been put off for more than a month.
Without prior image-processing experience, the starting point of convolutional neural networks can be a bit confusing: using operators for edge detection did not first appear when neural networks were applied to images, and neither did convolution or padding. Watch the videos a few more times and it is not hard to understand.
1. The Starting Point of Convolutional Neural Networks
Edge Detection - Filter/Kernel: before deep networks, specially designed filters (e.g. the Sobel operator) were used to detect vertical or horizontal edges. In a deep network the filter does not need to be specified by hand; the filter itself can be learned.
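A minimal numpy sketch of the idea (a toy example in the spirit of the lecture's $6 \times 6$ image, not code from the course): sliding a hand-designed $3 \times 3$ vertical-edge filter over an image responds exactly where the vertical edge is.

```python
import numpy as np

# Hand-designed vertical-edge filter (a Sobel filter would use 2/-2 in the middle row)
vertical = np.array([[1, 0, -1],
                     [1, 0, -1],
                     [1, 0, -1]])

# 6x6 toy image: bright left half, dark right half -> a vertical edge in the middle
img = np.hstack([np.full((6, 3), 10.0), np.zeros((6, 3))])

# Valid "convolution": element-wise product and sum over every 3x3 window
out = np.zeros((4, 4))
for i in range(4):
    for j in range(4):
        out[i, j] = np.sum(img[i:i+3, j:j+3] * vertical)

print(out)  # 30 in the two middle columns (where the edge is), 0 elsewhere
```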
Notation:
$n$: image size; $n_h$, $n_w$, $n_c$ are the height, width, and number of channels (e.g. RGB), respectively
$f$: filter size ($f \times f \times n_c$); $f$ is usually odd, and $n_c$ must match the input being convolved
$p$: padding size
2. The Convolution Operation
- Convolution: not convolution in the strict mathematical sense (technically a cross-correlation); an element-wise product of filter and input, summed up. The input is not only an image but can also be the output of a previous layer.
- Padding: filling in the borders of the input.
  - an $n \times n$ input becomes $(n+2p-f+1) \times (n+2p-f+1)$
  - Valid: no padding, $p = 0$
  - Same: pad so that the output keeps the same size as the input, which requires $p = \frac{f-1}{2}$; this prevents the input from shrinking layer after layer in a deep network
- Stride: step size $s$; the output size becomes $\frac{n+2p-f}{s} + 1$ (rounded down when it is not an integer)
- Convolutions over volume: convolving a multi-channel input gives a 2D result per filter (see the sketch after this list):
  $n \times n \times n_c * f \times f \times n_c \rightarrow (n-f+1) \times (n-f+1) \times n_c'$
  where $n_c'$ is the number of filters applied
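The formulas above can be checked with a short naive implementation. This is only a sketch (the function name and argument layout are my own), treating "convolution" as the element-wise product and sum used in the course, with padding $p$ and stride $s$:

```python
import numpy as np

def conv2d(x, filters, p=0, s=1):
    """Naive multi-channel 'convolution' (really a cross-correlation).

    x:       (n_h, n_w, n_c)         input volume
    filters: (n_c_out, f, f, n_c)    n_c_out filters, each f x f x n_c
    returns: (out_h, out_w, n_c_out)
    """
    x = np.pad(x, ((p, p), (p, p), (0, 0)))      # zero-pad height and width only
    n_h, n_w, _ = x.shape
    n_c_out, f, _, _ = filters.shape
    out_h = (n_h - f) // s + 1                   # = floor((n + 2p - f) / s) + 1
    out_w = (n_w - f) // s + 1
    out = np.zeros((out_h, out_w, n_c_out))
    for k in range(n_c_out):                     # one 2D output slice per filter
        for i in range(out_h):
            for j in range(out_w):
                patch = x[i*s:i*s+f, j*s:j*s+f, :]
                out[i, j, k] = np.sum(patch * filters[k])
    return out

# 6x6x3 input, two 3x3x3 filters, "same" padding p=(f-1)/2=1, stride 1 -> 6x6x2
x = np.random.randn(6, 6, 3)
w = np.random.randn(2, 3, 3, 3)
print(conv2d(x, w, p=1, s=1).shape)              # (6, 6, 2)
```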
3. Basic Organization of a Convolutional Neural Network
3.1 One Layer of a Convolutional Network
Let layer $l$ be a convolutional layer, with the conventions:
$f^{[l]}$ = filter size; each filter is $f^{[l]} \times f^{[l]} \times n_c^{[l-1]}$
$p^{[l]}$ = padding
$s^{[l]}$ = stride
$n_H^{[l-1]} \times n_W^{[l-1]} \times n_c^{[l-1]}$ = input to this convolutional layer
Then,
- Number of parameters in the convolutional layer: $(f^{[l]} \times f^{[l]} \times n_c^{[l-1]} + 1) \times n_c^{[l]}$
- Output of the convolutional layer:
  $n_H^{[l]} = \lfloor \frac{n_H^{[l-1]} + 2p^{[l]} - f^{[l]}}{s^{[l]}} + 1 \rfloor$
  $n_W^{[l]} = \lfloor \frac{n_W^{[l-1]} + 2p^{[l]} - f^{[l]}}{s^{[l]}} + 1 \rfloor$
  $n_c^{[l]} = \text{number of filters}$
- Activation:
  - $a^{[l]} \rightarrow n_H^{[l]} \times n_W^{[l]} \times n_c^{[l]}$
  - vectorized over $m$ examples: $A^{[l]} \rightarrow m \times n_H^{[l]} \times n_W^{[l]} \times n_c^{[l]}$

Stacking multiple convolutional layers: in general, $n_H$ decreases and $n_c$ increases layer by layer.
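A quick way to sanity-check the parameter-count and output-size formulas for a single layer (the helper name and the $39 \times 39 \times 3$ numbers below are my own illustrative choices):

```python
def conv_layer_stats(n_h, n_w, n_c_prev, f, n_c, p=0, s=1):
    """Apply the layer-l formulas: output size floor((n + 2p - f)/s) + 1,
    parameter count (f*f*n_c_prev + 1) * n_c (the +1 is each filter's bias)."""
    out_h = (n_h + 2 * p - f) // s + 1
    out_w = (n_w + 2 * p - f) // s + 1
    params = (f * f * n_c_prev + 1) * n_c
    return (out_h, out_w, n_c), params

# e.g. a 39x39x3 input with 10 filters of size 3x3x3, p=0, s=1
print(conv_layer_stats(39, 39, 3, f=3, n_c=10))  # ((37, 37, 10), 280)
```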
3.2 Network Components
- Convolutional Layer (CONV)
- Pooling Layer (POOL)
- Fully Connected Layer (FC)
Max pooling is typically used in the pooling layer:
- For each channel, it works like a convolution, but takes only the maximum value of each region
- The output $n_c$ stays the same
It helps reduce computation, as well as helps make feature detectors more invariant to their position in the input.
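A minimal numpy sketch of max pooling (my own helper, not library code): each channel is pooled independently, so $n_c$ is unchanged while the spatial size shrinks, and there are no parameters to learn.

```python
import numpy as np

def max_pool(x, f=2, s=2):
    """Max pooling: take the max of each f x f window, per channel."""
    n_h, n_w, n_c = x.shape
    out_h = (n_h - f) // s + 1
    out_w = (n_w - f) // s + 1
    out = np.zeros((out_h, out_w, n_c))
    for c in range(n_c):                         # channels are pooled independently
        for i in range(out_h):
            for j in range(out_w):
                out[i, j, c] = np.max(x[i*s:i*s+f, j*s:j*s+f, c])
    return out

x = np.random.randn(28, 28, 8)
print(max_pool(x).shape)                         # (14, 14, 8): same n_c, half the size
```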
3.3 An Example Network for Image Classification
CONV1 $\rightarrow$ POOL1 $\rightarrow$ CONV2 $\rightarrow$ POOL2 $\rightarrow$ FC3 $\rightarrow$ FC4 $\rightarrow$ Softmax
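A sketch of how this pattern might look in code. PyTorch is my own choice of framework here, and the $3 \times 32 \times 32$ input size and layer widths are illustrative assumptions (LeNet-like), not numbers from the course:

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=5),   # CONV1: 3x32x32 -> 8x28x28
    nn.ReLU(),
    nn.MaxPool2d(2),                  # POOL1: 8x28x28 -> 8x14x14
    nn.Conv2d(8, 16, kernel_size=5),  # CONV2: 8x14x14 -> 16x10x10
    nn.ReLU(),
    nn.MaxPool2d(2),                  # POOL2: 16x10x10 -> 16x5x5
    nn.Flatten(),                     # 16*5*5 = 400
    nn.Linear(400, 120),              # FC3
    nn.ReLU(),
    nn.Linear(120, 84),               # FC4
    nn.ReLU(),
    nn.Linear(84, 10),                # class scores
    nn.Softmax(dim=1),                # Softmax (in training, usually folded into the loss)
)

x = torch.randn(1, 3, 32, 32)         # one RGB image
print(model(x).shape)                 # torch.Size([1, 10])
```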
4. Why Convolutions?
Parameter sharing: A feature detector (such as a vertical edge detector) that's useful in one part of the image is probably useful in another part of the image.
- For a $300 \times 300$ RGB image (a quick check of both counts follows below):
  - connecting it directly to a fully connected layer of 100 units gives $(300 \times 300 \times 3 + 1) \times 100 = 27{,}000{,}100$ parameters
  - connecting it to 100 filters of size $5 \times 5$ gives $(5 \times 5 \times 3 + 1) \times 100 = 7600$ parameters
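A one-line check of both counts (pure arithmetic, nothing assumed beyond the numbers above):

```python
fc_params   = (300 * 300 * 3 + 1) * 100   # fully connected: 27,000,100
conv_params = (5 * 5 * 3 + 1) * 100       # 100 filters of 5x5x3: 7,600
print(fc_params, conv_params)             # 27000100 7600
```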
Sparsity of connections: In each layer, each output value depends only on a small number of inputs. (Pixels in an image are only locally correlated.)