9. Convolutional Neural Networks
9.1 What Is Convolution
The defining formula of convolution:
(f ∗ g)(t) = ∫ f(τ) g(t − τ) dτ        (continuous case)
(f ∗ g)[n] = Σ_m f[m] g[n − m]          (discrete case)
Convolution can also be understood as a linear operation; the mask (filter) operations common in image processing are convolutions, and they are widely used in image filtering. The most important case of the convolution relationship is the convolution theorem in signals-and-systems and digital signal processing.
In a neural network, convolution means the kernel "slides" over the input data, multiplying element-wise and summing at each position.
For a detailed explanation, see:
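The "slide, multiply, sum" description above can be sketched in a few lines of NumPy. This is a minimal single-channel example (the function name `conv2d_single` is ours, not a library API); note that deep-learning frameworks actually compute cross-correlation, i.e. they slide the kernel without flipping it, but still call it convolution:

```python
import numpy as np

def conv2d_single(image, kernel):
    """Slide `kernel` over `image`; at each position, multiply
    element-wise and sum (cross-correlation, 'valid' padding)."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    oh, ow = ih - kh + 1, iw - kw + 1   # output shrinks with 'valid' padding
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.ones((3, 3)) / 9.0          # 3x3 mean (blur) filter
result = conv2d_single(image, kernel)
print(result.shape)                     # (2, 2)
```

With a 4x4 input and a 3x3 kernel, the kernel fits in 2x2 positions, so the output is 2x2.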
9.2 Convolutional Neural Networks
When the input has multiple layers (channels), the kernel's channel count automatically matches the number of input channels; each kernel produces one output feature map.
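This channel-matching rule can be made concrete in NumPy (a sketch; `conv2d_multichannel` is our own illustrative function, not a library call). Each of the `C_out` kernels spans all `C_in` input channels, and each produces exactly one output map:

```python
import numpy as np

def conv2d_multichannel(x, kernels):
    """x: (H, W, C_in); kernels: (kh, kw, C_in, C_out).
    Each kernel covers all input channels and yields one feature map."""
    h, w, c_in = x.shape
    kh, kw, kc, c_out = kernels.shape
    assert kc == c_in, "kernel channels must match input channels"
    oh, ow = h - kh + 1, w - kw + 1     # 'valid' padding
    out = np.zeros((oh, ow, c_out))
    for o in range(c_out):
        for i in range(oh):
            for j in range(ow):
                # sum over the spatial window AND over all input channels
                out[i, j, o] = np.sum(x[i:i + kh, j:j + kw, :] * kernels[:, :, :, o])
    return out

x = np.random.randn(8, 8, 3)            # e.g. one 8x8 RGB patch
k = np.random.randn(5, 5, 3, 4)         # 4 kernels, each 5x5x3
out = conv2d_multichannel(x, k)
print(out.shape)                        # (4, 4, 4): 4 output feature maps
```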
The Conv2D convolution operation
Code:
In [1]: import tensorflow as tf
In [2]: from tensorflow.keras import layers
In [3]: x = tf.random.normal([1, 32, 32, 3])  # one 32x32 RGB image
In [6]: layer = layers.Conv2D(4, kernel_size=5, strides=1, padding='valid')
In [8]: out = layer(x)
In [9]: out.shape
Out[9]: TensorShape([1, 28, 28, 4])
In [10]: layer = layers.Conv2D(4, kernel_size=5, strides=1, padding='same')
In [11]: out = layer(x)
In [12]: out.shape
Out[12]: TensorShape([1, 32, 32, 4])
In [13]: layer = layers.Conv2D(4, kernel_size=5, strides=2, padding='same')
In [14]: out = layer(x)
In [15]: out.shape
Out[15]: TensorShape([1, 16, 16, 4])
In [16]: layer.call(x).shape
Out[16]: TensorShape([1, 16, 16, 4])
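The three shapes above follow the usual output-size formula: with input size n, kernel size k, stride s, and padding p per side, the output size is floor((n + 2p − k)/s) + 1. 'valid' means p = 0, while 'same' pads so that the output size is ceil(n/s). A quick check against the cases above (input 32, kernel 5):

```python
import math

def conv_out_size(n, k, s=1, p=0):
    # floor((n + 2p - k) / s) + 1
    return (n + 2 * p - k) // s + 1

print(conv_out_size(32, 5, s=1, p=0))   # 'valid'          -> 28
print(conv_out_size(32, 5, s=1, p=2))   # 'same', stride 1 -> 32
print(math.ceil(32 / 2))                # 'same', stride 2 -> 16
```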
weight & bias
In [17]: layer.kernel
Out[17]: <tf.Variable 'conv2d_3/kernel:0' shape=(5, 5, 3, 4) dtype=float32, numpy=
array([[[[-0.16160963,  0.04107726, -0.09828208, -0.00601757],
         [-0.02003701,  0.01415607, -0.07604317, -0.12557343],
         [-0.11157566,  0.1328298 ,  0.14624669, -0.04775226]], ...
In [18]: layer.bias
Out[18]: <tf.Variable 'conv2d_3/bias:0' shape=(4,) dtype=float32, numpy=array([0.,
0., 0., 0.], dtype=float32)>
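The kernel shape (5, 5, 3, 4) and bias shape (4,) fully determine the layer's parameter count: k · k · C_in · C_out weights plus one bias per output channel:

```python
k, c_in, c_out = 5, 3, 4           # kernel size, input channels, output channels
n_weights = k * k * c_in * c_out   # 5*5*3*4 = 300 weights
n_bias = c_out                     # one bias per output channel = 4
print(n_weights + n_bias)          # 304 trainable parameters
```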
tf.nn.conv2d:
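The notes leave this part blank; here is a minimal sketch of the functional interface. Unlike `layers.Conv2D`, `tf.nn.conv2d` does not create the kernel for you: you pass the kernel tensor explicitly, in the same (kh, kw, C_in, C_out) layout as `layer.kernel` above (shapes chosen to match the earlier examples):

```python
import tensorflow as tf

x = tf.random.normal([1, 32, 32, 3])
# kernel shape (kh, kw, C_in, C_out), same layout as layer.kernel
w = tf.random.normal([5, 5, 3, 4])
# note: tf.nn.conv2d expects the padding mode in uppercase
out = tf.nn.conv2d(x, w, strides=1, padding='SAME')
print(out.shape)  # (1, 32, 32, 4)
```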
9.3 Pooling and Sampling
Pooling is also known as downsampling.
Max/Avg pooling (max pooling / average pooling):
In [36]: x = tf.random.normal([1, 14, 14, 4])  # TensorShape([1, 14, 14, 4])
# First way: the Keras layer
In [37]: pool = layers.MaxPool2D(2, strides=2)
In [38]: out = pool(x)
In [39]: out.shape
Out[39]: TensorShape([1, 7, 7, 4])
In [40]: pool = layers.MaxPool2D(3, strides=2)
In [41]: out = pool(x)
In [42]: out.shape
Out[42]: TensorShape([1, 6, 6, 4])
# Second way: the functional interface
In [44]: out = tf.nn.max_pool2d(x, 2, strides=2, padding='VALID')
In [45]: out.shape
Out[45]: TensorShape([1, 7, 7, 4])
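The heading above also mentions Avg pooling; average pooling uses the same two interfaces, replacing the window maximum with the window mean (a sketch mirroring the MaxPool2D example):

```python
import tensorflow as tf
from tensorflow.keras import layers

x = tf.random.normal([1, 14, 14, 4])

# Keras layer form
pool = layers.AveragePooling2D(pool_size=2, strides=2)
out = pool(x)
print(out.shape)   # (1, 7, 7, 4)

# functional form, analogous to tf.nn.max_pool2d
out2 = tf.nn.avg_pool2d(x, 2, strides=2, padding='VALID')
print(out2.shape)  # (1, 7, 7, 4)
```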
upsample (upsampling):
In [47]: x = tf.random.normal([1, 7, 7, 4])
In [48]: layer = layers.UpSampling2D(size=3)
In [49]: out = layer(x)
In [50]: out.shape
Out[50]: TensorShape([1, 21, 21, 4])
In [51]: layer = layers.UpSampling2D(size=2)
In [52]: out = layer(x)
In [53]: out.shape
Out[53]: TensorShape([1, 14, 14, 4])
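By default, UpSampling2D simply repeats each row and column (nearest-neighbor interpolation); it also accepts `interpolation='bilinear'` for smoother upsampling. A sketch:

```python
import tensorflow as tf
from tensorflow.keras import layers

x = tf.random.normal([1, 7, 7, 4])

# smoother than the default nearest-neighbor row/column repetition
layer = layers.UpSampling2D(size=2, interpolation='bilinear')
out = layer(x)
print(out.shape)  # (1, 14, 14, 4)
```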