《深度学习的数学》 (*The Mathematics of Deep Learning*) inspired me greatly: the author's exposition of the ideas and mathematical foundations of neural networks taught me a lot. But the book uses Excel to demonstrate its neural networks, which is a headache for someone like me who is not proficient in Excel, so I decided to implement one of the book's simple convolutional neural network models in Python: the model that recognizes the digits 1, 2 and 3.
This network is extremely simple, simpler even than the handwritten-digit model touted as the "Hello World" of machine learning, and it has essentially no practical value. I still hold it in high regard, because it strips away the complexity of neural networks and exposes what is most basic and most fundamental about them.
The model has four layers: the first is the input layer, the second a convolution layer, the third a max-pooling layer, and the fourth the output layer. The input is 96 single-valued two-color (binary) images of 6×6 pixels.
The convolution layer's filters contain 3×9+3=30 parameters and the output layer contains 12×3+3=39 parameters, so the model has 69 parameters in total.
The principle is the same as in the previous article; only some of the mathematics changes. The details follow.
First, the weighted input and the output of a convolution-layer neuron:
$$z_{ij}^{Fk}=w_{11}^{Fk}x_{ij}+w_{12}^{Fk}x_{i\,j+1}+w_{13}^{Fk}x_{i\,j+2}+w_{21}^{Fk}x_{i+1\,j}+w_{22}^{Fk}x_{i+1\,j+1}+w_{23}^{Fk}x_{i+1\,j+2}+w_{31}^{Fk}x_{i+2\,j}+w_{32}^{Fk}x_{i+2\,j+1}+w_{33}^{Fk}x_{i+2\,j+2}+b^{Fk}$$
$$a_{ij}^{Fk}=a(z_{ij}^{Fk})$$
where $k$ is the index of the convolution sublayer, $i$, $j$ ($i,j=1,2,3,4$) are the starting row and column of the scanning window, and $a(z)$ is the activation function.
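As a quick illustration of the convolution formula above, here is a minimal NumPy sketch of my own (not the book's code): one hypothetical 3×3 filter is scanned over a 6×6 input, and a sigmoid is used as the activation $a(z)$, matching the rest of this article. The filter, bias, and input values are made up for illustration.

```python
import numpy as np

def sigmoid(z):
    # the activation function a(z) used throughout the article
    return 1.0 / (1.0 + np.exp(-z))

# hypothetical data: a 6x6 "image", one 3x3 filter w, one bias b
x = np.arange(36, dtype=float).reshape(6, 6) / 36.0
w = np.full((3, 3), 1.0 / 9.0)   # an averaging filter, chosen arbitrarily
b = -0.5

# scanning a 3x3 window over a 6x6 image yields a 4x4 map of weighted inputs
z = np.zeros((4, 4))
for i in range(4):
    for j in range(4):
        z[i, j] = np.sum(w * x[i:i + 3, j:j + 3]) + b
a_out = sigmoid(z)               # the 4x4 activations a_ij^Fk
```

With three such filters this produces the 3×4×4 convolution layer used below.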
Next, the weighted input and the output of a pooling-layer neuron:
$$z_{ij}^{Pk}=\mathrm{Max}\left(a_{2i-1\,2j-1}^{Fk},a_{2i-1\,2j}^{Fk},a_{2i\,2j-1}^{Fk},a_{2i\,2j}^{Fk}\right)$$
$$a_{ij}^{Pk}=z_{ij}^{Pk}$$
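The pooling step can be sketched in a few lines; the 4×4 activation map here is made-up data, standing in for one convolution sublayer:

```python
import numpy as np

# hypothetical 4x4 activation map of one convolution sublayer
fa = np.arange(16, dtype=float).reshape(4, 4)

# max pooling over non-overlapping 2x2 blocks halves each dimension: 4x4 -> 2x2
pooled = np.zeros((2, 2))
for i in range(2):
    for j in range(2):
        pooled[i, j] = np.max(fa[2 * i:2 * i + 2, 2 * j:2 * j + 2])
```

Since the pooling unit's output equals its input, `pooled` is both $z^{Pk}$ and $a^{Pk}$.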
Then the weighted input and the output of an output-layer neuron:
$$z_n^O=w_{1-11}^{On}a_{11}^{P1}+w_{1-12}^{On}a_{12}^{P1}+w_{1-21}^{On}a_{21}^{P1}+w_{1-22}^{On}a_{22}^{P1}+w_{2-11}^{On}a_{11}^{P2}+w_{2-12}^{On}a_{12}^{P2}+w_{2-21}^{On}a_{21}^{P2}+w_{2-22}^{On}a_{22}^{P2}+w_{3-11}^{On}a_{11}^{P3}+w_{3-12}^{On}a_{12}^{P3}+w_{3-21}^{On}a_{21}^{P3}+w_{3-22}^{On}a_{22}^{P3}+b_n^O$$
$$a_n^O=a(z_n^O)$$
where $n$ is the index of the output-layer neuron ($n=1,2,3$).
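A minimal sketch of the output-layer computation, assuming sigmoid activation. Each output unit sees all 12 pooled values (3 sublayers of 2×2); the pooled values, weights, and biases below are arbitrary illustrative numbers:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# hypothetical pooled outputs: 3 sublayers of 2x2 = 12 inputs per output unit
pooled = np.linspace(0.0, 1.0, 12).reshape(3, 2, 2)
W = np.zeros((3, 3, 2, 2))       # W[n] holds the 12 weights of output unit n
W[0] = 1.0                       # for illustration, unit 0 simply sums its inputs
bias = np.zeros(3)

z_out = np.array([np.sum(W[m] * pooled) + bias[m] for m in range(3)])
a_out = sigmoid(z_out)           # a_n^O = a(z_n^O)
```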
The squared error:
$$C=\frac{1}{2}\left((t_1-a_1^O)^2+(t_2-a_2^O)^2+(t_3-a_3^O)^2\right)$$
| | Meaning | Image is 1 | Image is 2 | Image is 3 |
|---|---|---|---|---|
| $t_1$ | correct-answer variable for 1 | 1 | 0 | 0 |
| $t_2$ | correct-answer variable for 2 | 0 | 1 | 0 |
| $t_3$ | correct-answer variable for 3 | 0 | 0 | 1 |
| | Image is 1 | Image is 2 | Image is 3 |
|---|---|---|---|
| $a_1^O$ | close to 1 | close to 0 | close to 0 |
| $a_2^O$ | close to 0 | close to 1 | close to 0 |
| $a_3^O$ | close to 0 | close to 0 | close to 1 |
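For example, with made-up numbers, the squared error for one image whose correct answer is 1:

```python
import numpy as np

# hypothetical output for an image whose correct answer is "1"
t = np.array([1.0, 0.0, 0.0])      # teacher variables t_1, t_2, t_3
a_o = np.array([0.9, 0.2, 0.1])    # output activations a_n^O
C = 0.5 * np.sum((t - a_o) ** 2)   # squared error of this one image
```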
Next, the neuron errors $\delta$ of each layer, as used in backpropagation.
The neuron error of the output layer:
$$\delta_n^O=(a_n^O-t_n)\,a'(z_n^O)$$
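A small sketch of this formula, assuming the sigmoid activation used throughout; the weighted inputs and labels are made up:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    # a'(z) for the sigmoid: a(z)(1 - a(z))
    return sigmoid(z) * (1.0 - sigmoid(z))

# hypothetical weighted inputs of the output layer and teacher labels
z_o = np.array([2.0, -1.0, 0.0])
t = np.array([1.0, 0.0, 0.0])
delta_o = (sigmoid(z_o) - t) * sigmoid_prime(z_o)   # δ_n^O = (a_n^O - t_n)a'(z_n^O)
```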
The backward recurrence relation for the neuron errors of the convolution layer:
$$\delta_{ij}^{Fk}=\left(\delta_1^Ow_{k-i'j'}^{O1}+\delta_2^Ow_{k-i'j'}^{O2}+\delta_3^Ow_{k-i'j'}^{O3}\right)\times\left(\text{1 if }a_{ij}^{Fk}\text{ is the largest in its block, else 0}\right)\times a'(z_{ij}^{Fk})$$
where $i'j'$ is the position of the pooling-layer neuron connected to the convolution-layer neuron at row $i$, column $j$.
Below are the partial derivatives of the squared error with respect to the filters (the relations are for a 6×6 image and a 3×3 filter).
Convolution layer:
With respect to the weights:
$$\frac{\partial C}{\partial w_{ij}^{Fk}}=\delta_{11}^{Fk}x_{ij}+\delta_{12}^{Fk}x_{i\,j+1}+\dots+\delta_{44}^{Fk}x_{i+3\,j+3}$$
With respect to the bias:
$$\frac{\partial C}{\partial b^{Fk}}=\delta_{11}^{Fk}+\delta_{12}^{Fk}+\dots+\delta_{44}^{Fk}$$
Output layer:
$$\frac{\partial C}{\partial w_{k-ij}^{On}}=\delta_n^O\,a_{ij}^{Pk},\qquad\frac{\partial C}{\partial b_n^O}=\delta_n^O$$
The cost function (loss function):
$$C_T=\sum_{k=1}^{m}C_k$$
where $m$ is the total number of training examples.
The basic formula of gradient descent:
$$\left(\Delta w_{11}^{F1},\dots,\Delta w_{1-11}^{O1},\dots,\Delta b^{F1},\dots,\Delta b_1^O,\dots\right)=-\eta\left(\frac{\partial C_T}{\partial w_{11}^{F1}},\dots,\frac{\partial C_T}{\partial w_{1-11}^{O1}},\dots,\frac{\partial C_T}{\partial b^{F1}},\dots,\frac{\partial C_T}{\partial b_1^O}\right)$$
where the positive constant $\eta$ is called the learning rate.
The new position:
$$\left(w_{11}^{F1}+\Delta w_{11}^{F1},\dots,w_{1-11}^{O1}+\Delta w_{1-11}^{O1},\dots,b^{F1}+\Delta b^{F1},\dots,b_1^O+\Delta b_1^O,\dots\right)$$
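The update rule above boils down to one line per parameter; a tiny sketch with made-up numbers:

```python
import numpy as np

eta = 0.2                          # learning rate η (the program below calls it n)
w = np.array([0.5, -0.3])          # current parameter values, chosen arbitrarily
grad = np.array([1.0, -2.0])       # gradient ∂C_T/∂w at the current position
delta_w = -eta * grad              # displacement given by the basic formula
w_new = w + delta_w                # the new position
```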
Based on the mathematics above, I wrote the following Python program.
Note: the initial values and the learning rate have a large effect on the final result. Randomly drawn initial values are not always suitable; if training fails to converge, re-initialize and try again.
import numpy as np
import math
#_inputs has shape (96,6,6)
_inputs = np.array([[[0,0,0,1,0,0],[0,0,0,1,0,0],[0,0,0,1,0,0],[0,0,0,1,0,0],[0,0,0,1,0,0],[0,0,0,1,0,0]],
[[0,0,0,1,0,0],[0,0,1,1,0,0],[0,0,0,1,0,0],[0,0,0,1,0,0],[0,0,0,1,0,0],[0,0,0,1,0,0]],
[[0,0,1,1,0,0],[0,0,0,1,0,0],[0,0,0,1,0,0],[0,0,0,1,0,0],[0,0,0,1,0,0],[0,0,0,1,0,0]],
[[0,0,0,1,0,0],[0,0,0,1,0,0],[0,0,0,1,0,0],[0,0,0,1,0,0],[0,0,0,1,0,0],[0,0,1,1,0,0]],
[[0,0,0,1,0,0],[0,0,0,1,0,0],[0,0,0,1,0,0],[0,0,0,1,0,0],[0,0,0,1,0,0],[0,0,1,1,1,0]],
[[0,0,1,1,0,0],[0,0,0,1,0,0],[0,0,0,1,0,0],[0,0,0,1,0,0],[0,0,0,1,0,0],[0,0,1,1,1,0]],
[[0,0,1,1,0,0],[0,0,1,1,0,0],[0,0,1,1,0,0],[0,0,1,1,0,0],[0,0,1,1,0,0],[0,0,1,1,0,0]],
[[0,0,1,1,0,0],[0,0,1,1,0,0],[0,0,1,1,0,0],[0,0,1,1,0,0],[0,0,1,1,0,0],[0,1,1,1,0,0]],
[[0,1,1,1,0,0],[0,0,1,1,0,0],[0,0,1,1,0,0],[0,0,1,1,0,0],[0,0,1,1,0,0],[0,0,1,1,0,0]],
[[0,0,1,1,0,0],[0,1,1,1,0,0],[0,0,1,1,0,0],[0,0,1,1,0,0],[0,0,1,1,0,0],[0,0,1,1,0,0]],
#10
[[0,0,1,1,0,0],[0,0,1,1,0,0],[0,0,1,1,0,0],[0,0,1,1,0,0],[0,0,1,1,0,0],[0,1,1,1,0,0]],
[[0,0,1,1,0,0],[0,0,1,1,0,0],[0,0,1,1,0,0],[0,0,1,1,0,0],[0,0,1,1,0,0],[0,0,1,1,1,0]],
[[0,0,0,1,0,0],[0,0,1,1,0,0],[0,0,1,1,0,0],[0,0,1,1,0,0],[0,0,1,1,0,0],[0,0,0,1,0,0]],
[[0,0,1,1,0,0],[0,0,1,0,0,0],[0,0,1,0,0,0],[0,0,1,0,0,0],[0,0,1,0,0,0],[0,0,1,1,0,0]],
[[0,0,0,0,0,0],[0,0,1,0,0,0],[0,0,1,0,0,0],[0,0,1,0,0,0],[0,0,1,0,0,0],[0,0,1,0,0,0]],
[[0,0,1,0,0,0],[0,0,1,0,0,0],[0,0,1,0,0,0],[0,0,1,0,0,0],[0,0,1,0,0,0],[0,0,0,0,0,0]],
[[0,0,1,0,0,0],[0,0,1,1,0,0],[0,0,1,1,0,0],[0,0,1,1,0,0],[0,0,1,1,0,0],[0,0,0,1,0,0]],
[[0,0,0,0,0,0],[0,0,1,1,0,0],[0,0,1,1,0,0],[0,0,1,1,0,0],[0,0,1,1,0,0],[0,0,1,1,0,0]],
[[0,0,0,0,0,0],[0,0,1,1,0,0],[0,0,0,1,0,0],[0,0,0,1,0,0],[0,0,0,1,0,0],[0,0,0,1,0,0]],
[[0,0,0,0,0,0],[0,0,1,1,0,0],[0,0,1,0,0,0],[0,0,1,0,0,0],[0,0,1,0,0,0],[0,0,1,1,0,0]],
#20
[[0,0,0,0,0,0],[0,0,1,1,0,0],[0,0,1,1,0,0],[0,0,0,1,0,0],[0,0,0,1,0,0],[0,0,0,1,0,0]],
[[0,0,1,0,0,0],[0,1,1,0,0,0],[0,1,1,0,0,0],[0,0,1,0,0,0],[0,0,1,0,0,0],[0,0,1,0,0,0]],
[[0,0,0,0,1,0],[0,0,0,1,1,0],[0,0,0,1,1,0],[0,0,0,0,1,0],[0,0,0,0,1,0],[0,0,0,1,1,0]],
[[0,0,0,1,0,0],[0,0,0,1,0,0],[0,0,0,1,0,0],[0,0,1,0,0,0],[0,0,1,0,0,0],[0,0,1,0,0,0]],
[[0,0,0,0,1,0],[0,0,0,0,1,0],[0,0,0,1,0,0],[0,0,0,1,0,0],[0,0,1,0,0,0],[0,0,1,0,0,0]],
[[0,1,0,0,0,0],[0,1,0,0,0,0],[0,0,1,0,0,0],[0,0,1,0,0,0],[0,0,0,1,0,0],[0,0,0,1,0,0]],
[[0,1,0,0,0,0],[0,0,1,0,0,0],[0,0,1,0,0,0],[0,0,1,0,0,0],[0,0,0,1,0,0],[0,0,0,0,0,0]],
[[0,0,0,1,0,0],[0,0,0,1,0,0],[0,0,1,0,0,0],[0,0,1,0,0,0],[0,1,0,0,0,0],[0,0,0,0,0,0]],
[[0,0,0,0,0,0],[0,0,0,1,0,0],[0,0,0,1,0,0],[0,0,1,0,0,0],[0,0,1,0,0,0],[0,0,1,0,0,0]],
[[0,0,0,0,0,0],[0,0,0,0,1,0],[0,0,0,0,1,0],[0,0,0,0,1,0],[0,0,0,0,1,0],[0,0,0,0,0,0]],
#30
[[0,0,0,0,0,0],[0,0,0,1,0,0],[0,0,1,0,0,0],[0,1,0,0,0,0],[0,1,0,0,0,0],[0,0,0,0,0,0]],
[[0,0,0,0,0,0],[0,0,1,1,0,0],[0,0,0,1,0,0],[0,0,0,1,0,0],[0,0,0,1,0,0],[0,0,0,0,0,0]],
[[0,0,1,1,0,0],[0,1,0,0,1,0],[0,0,0,0,1,0],[0,0,0,1,0,0],[0,0,1,0,0,0],[0,1,1,1,1,0]],
[[0,1,1,1,0,0],[0,1,0,0,1,0],[0,0,0,0,1,0],[0,0,0,1,0,0],[0,0,1,0,0,0],[0,1,1,1,1,0]],
[[0,0,1,1,1,0],[0,1,0,0,1,0],[0,0,0,0,1,0],[0,0,0,1,0,0],[0,0,1,0,0,0],[0,1,1,1,1,0]],
[[0,0,1,1,0,0],[0,1,0,0,1,0],[0,0,0,0,1,0],[0,0,0,1,0,0],[0,0,1,1,0,0],[0,1,1,1,1,0]],
[[0,0,1,1,0,0],[0,1,0,0,1,0],[0,0,0,0,1,0],[0,0,0,1,1,0],[0,0,1,0,0,0],[0,1,1,1,1,0]],
[[0,0,1,1,0,0],[0,1,0,0,1,0],[0,0,0,0,1,0],[0,0,0,1,0,0],[0,1,1,0,0,0],[0,1,1,1,1,0]],
[[0,1,1,1,0,0],[0,1,0,0,1,0],[0,0,0,0,1,0],[0,0,0,1,0,0],[0,1,1,0,0,0],[0,1,1,1,1,0]],
[[0,1,1,1,1,0],[0,1,0,0,1,0],[0,0,0,0,1,0],[0,0,0,1,0,0],[0,0,1,0,0,0],[0,1,1,1,1,0]],
#40
[[0,0,1,1,0,0],[0,1,0,0,1,0],[0,0,0,0,1,0],[0,0,0,1,0,0],[0,0,1,0,0,0],[0,1,1,1,1,0]],
[[0,1,1,1,0,0],[0,1,0,0,1,0],[0,0,0,0,1,0],[0,0,0,1,1,0],[0,1,1,0,0,0],[0,1,1,1,1,0]],
[[0,0,1,1,1,0],[0,1,0,0,1,0],[0,0,0,0,1,0],[0,0,0,1,1,0],[0,0,1,0,0,0],[0,1,1,1,1,0]],
[[0,1,1,1,1,0],[0,1,0,0,1,0],[0,0,0,0,1,0],[0,0,1,1,0,0],[0,1,1,0,0,0],[0,1,1,1,1,0]],
[[0,0,1,1,0,0],[0,1,0,0,1,0],[0,0,0,0,1,0],[0,0,0,1,0,0],[0,0,1,0,0,0],[1,1,1,1,1,0]],
[[0,0,1,1,0,0],[0,1,0,0,1,0],[0,0,0,0,1,0],[0,0,0,1,0,0],[0,0,1,0,0,0],[0,1,1,1,1,1]],
[[0,0,1,1,0,0],[0,1,0,0,1,0],[0,0,0,0,1,0],[0,0,0,1,0,0],[0,0,1,0,0,0],[0,1,1,1,1,0]],
[[0,0,1,1,0,0],[0,1,0,0,1,0],[1,0,0,0,1,0],[0,0,0,1,0,0],[0,0,1,0,0,0],[0,1,1,1,1,0]],
[[0,1,1,1,0,0],[0,1,0,0,1,0],[1,0,0,0,1,0],[0,0,0,1,0,0],[0,0,1,0,0,0],[0,1,1,1,1,0]],
[[0,1,1,1,0,0],[0,1,0,0,1,0],[1,0,0,0,1,0],[0,0,0,1,0,0],[0,1,1,0,0,0],[0,1,1,1,1,0]],
#50
[[0,1,1,1,0,0],[0,1,0,0,1,0],[0,0,0,0,1,0],[0,0,0,1,0,0],[0,0,1,1,0,0],[1,1,1,1,1,0]],
[[0,0,1,1,0,0],[0,1,0,0,1,0],[0,0,0,0,1,0],[0,0,0,1,0,0],[0,0,1,0,0,0],[0,1,1,1,1,1]],
[[0,1,1,1,0,0],[0,1,0,0,1,0],[0,0,0,0,1,0],[0,0,0,1,0,0],[0,0,1,0,0,0],[0,1,1,1,1,1]],
[[0,0,1,1,0,0],[0,1,0,0,1,1],[0,0,0,0,1,1],[0,0,0,1,1,0],[0,0,1,0,0,0],[0,1,1,1,1,0]],
[[0,0,1,1,0,0],[0,1,0,0,1,0],[0,0,0,0,1,0],[0,0,0,1,0,0],[0,0,1,0,0,0],[0,1,1,1,1,0]],
[[0,0,1,1,0,0],[0,1,1,1,1,0],[0,0,0,0,1,0],[0,0,0,1,0,0],[0,0,1,0,0,0],[0,1,1,1,1,0]],
[[0,0,1,1,0,0],[0,1,1,0,1,0],[0,0,0,0,1,0],[0,0,0,1,0,0],[0,0,1,0,0,0],[0,1,1,1,1,0]],
[[0,0,1,1,0,0],[0,1,1,0,1,0],[0,1,0,0,1,0],[0,0,0,1,0,0],[0,0,1,0,0,0],[0,1,1,1,1,0]],
[[0,0,1,1,0,0],[0,1,0,0,1,0],[0,0,0,0,1,0],[0,0,1,1,0,0],[0,1,1,0,0,0],[1,1,1,1,1,0]],
[[0,0,1,1,1,0],[0,1,0,0,1,1],[0,0,0,0,1,1],[0,0,0,1,0,0],[0,0,1,0,0,0],[0,1,1,1,1,1]],
#60
[[0,1,1,1,0,0],[0,1,0,0,1,0],[0,0,0,1,1,0],[0,0,0,1,0,0],[0,0,1,1,0,0],[0,1,1,1,1,0]],
[[0,1,1,1,1,0],[0,1,0,0,1,0],[0,0,0,0,1,0],[0,0,0,1,1,0],[0,0,1,1,0,0],[1,1,1,1,1,0]],
[[0,1,1,1,0,0],[0,1,0,1,0,0],[0,0,0,1,0,0],[0,0,1,1,0,0],[0,1,1,0,0,0],[0,1,1,1,1,0]],
[[0,0,1,1,0,0],[0,1,0,1,1,0],[0,0,0,1,1,0],[0,0,0,1,0,0],[0,0,1,1,0,0],[0,1,1,1,1,0]],
[[0,0,1,1,0,0],[0,1,0,0,1,0],[0,0,0,0,1,0],[0,0,0,1,1,0],[0,1,0,0,1,0],[0,0,1,1,0,0]],
[[0,0,1,1,0,0],[0,1,0,0,1,0],[0,0,0,1,1,0],[0,0,0,1,1,0],[0,1,0,0,1,0],[0,0,1,1,0,0]],
[[0,0,1,1,0,0],[0,1,0,0,1,0],[0,0,0,1,1,1],[0,0,0,0,1,1],[0,1,0,0,1,0],[0,0,1,1,0,0]],
[[0,1,1,1,0,0],[0,1,0,0,1,0],[0,0,0,1,1,1],[0,0,0,1,1,0],[0,1,0,0,1,0],[0,0,1,1,0,0]],
[[0,0,1,1,0,0],[0,1,0,0,1,0],[0,0,0,1,1,1],[0,0,0,1,1,0],[0,1,0,0,1,0],[0,1,1,1,0,0]],
[[0,0,1,1,0,0],[0,1,0,0,1,0],[0,0,0,1,1,1],[0,0,0,1,1,0],[0,1,0,0,1,0],[0,0,1,1,1,0]],
#70
[[0,0,1,1,1,0],[0,1,0,0,1,0],[0,0,0,1,1,1],[0,0,0,1,1,0],[0,1,0,0,1,0],[0,0,1,1,0,0]],
[[0,1,1,1,0,0],[0,1,0,0,1,0],[0,0,0,1,1,1],[0,0,0,1,1,0],[0,1,0,0,1,0],[0,0,1,1,0,0]],
[[0,0,1,1,0,0],[0,1,0,0,1,0],[0,0,0,1,1,1],[0,0,0,1,1,0],[0,1,0,0,1,0],[0,1,1,1,0,0]],
[[0,0,1,1,0,0],[0,1,0,0,1,0],[0,0,0,0,0,1],[0,0,0,1,1,1],[0,1,0,0,1,0],[0,0,1,1,0,0]],
[[0,0,1,1,0,0],[0,1,0,0,1,0],[0,0,0,1,1,1],[0,0,0,0,1,1],[0,1,0,0,1,0],[0,0,1,1,0,0]],
[[0,0,1,1,1,0],[0,1,0,0,1,0],[0,0,0,0,0,1],[0,0,0,1,1,1],[0,1,0,0,1,0],[0,0,1,1,0,0]],
[[0,0,1,1,1,0],[0,1,0,0,1,0],[0,0,0,1,1,1],[0,0,0,0,1,1],[0,1,0,0,0,1],[0,0,1,1,1,0]],
[[0,0,1,1,0,0],[0,1,0,0,1,0],[0,0,0,0,1,0],[0,0,0,1,1,0],[0,1,0,0,1,0],[0,0,1,1,0,0]],
[[0,0,1,1,0,0],[0,1,0,0,1,0],[0,0,0,0,1,0],[0,0,0,1,1,0],[1,0,0,0,1,0],[0,1,1,1,0,0]],
[[0,1,1,1,0,0],[1,0,0,0,1,0],[0,0,0,0,1,0],[0,0,0,1,1,0],[0,1,0,0,1,0],[0,0,1,1,0,0]],
#80
[[0,1,1,1,0,0],[1,1,0,0,1,0],[0,0,0,0,1,0],[0,0,0,1,1,0],[1,1,0,0,1,0],[0,1,1,1,0,0]],
[[0,1,1,1,0,0],[1,1,0,0,1,0],[0,0,0,0,1,0],[0,0,0,1,1,0],[0,1,0,0,1,0],[0,0,1,1,0,0]],
[[0,0,1,1,0,0],[0,1,0,0,1,0],[0,0,0,0,1,0],[0,0,0,1,1,0],[1,1,0,0,1,0],[0,1,1,1,0,0]],
[[0,0,1,1,0,0],[0,1,0,0,1,1],[0,0,0,1,1,0],[0,0,0,1,1,0],[0,1,0,0,1,0],[0,1,1,1,0,0]],
[[0,0,1,1,0,0],[0,1,0,0,1,0],[0,0,0,1,1,0],[0,0,0,1,1,0],[0,1,0,0,0,1],[0,0,1,1,1,0]],
[[0,1,1,1,0,0],[0,1,0,0,1,0],[0,0,0,1,1,0],[0,0,0,1,1,0],[1,1,0,0,1,0],[0,0,1,1,0,0]],
[[0,0,1,1,0,0],[1,1,0,0,1,0],[0,0,1,1,1,0],[0,0,0,1,1,0],[1,1,0,0,1,0],[0,1,1,1,0,0]],
[[0,1,1,1,0,0],[1,0,0,0,1,0],[0,0,1,1,1,0],[0,0,1,1,1,0],[0,0,0,0,1,0],[1,1,1,1,0,0]],
[[1,1,1,1,0,0],[0,0,0,0,1,0],[0,0,1,1,1,0],[0,0,0,1,1,0],[0,1,0,0,1,0],[0,1,1,1,0,0]],
[[0,0,1,1,0,0],[0,1,0,0,1,1],[0,0,0,0,1,0],[0,0,0,1,1,0],[1,1,0,0,1,1],[0,1,1,1,1,0]],
#90
[[0,1,1,1,0,0],[0,1,1,0,1,0],[0,0,0,0,1,0],[0,0,0,1,1,0],[0,1,0,0,1,0],[0,1,1,1,0,0]],
[[0,1,1,1,0,0],[0,1,0,0,1,0],[0,0,0,1,1,0],[0,0,0,0,1,0],[0,1,1,0,1,0],[0,0,1,1,0,0]],
[[0,0,1,1,0,0],[0,1,0,0,1,1],[0,0,0,0,1,1],[0,0,0,1,1,0],[1,1,0,0,1,0],[0,1,1,1,1,0]],
[[1,1,1,1,1,0],[0,1,0,0,1,0],[0,0,0,0,1,0],[0,0,0,1,1,0],[1,1,0,0,1,0],[0,1,1,1,0,0]],
[[1,1,1,1,0,0],[1,1,0,0,1,0],[0,0,0,0,1,0],[0,0,0,1,1,0],[1,1,0,0,1,0],[1,1,1,1,0,0]],
[[0,0,1,1,1,0],[0,1,0,0,1,1],[0,0,0,1,1,0],[0,0,0,0,1,0],[0,1,0,0,1,1],[0,0,1,1,1,0]],])
#the corresponding labels
#shape = (96,3)
labels = np.array([[1,0,0],[1,0,0],[1,0,0],[1,0,0],[1,0,0],[1,0,0],[1,0,0],[1,0,0],[1,0,0],[1,0,0],
[1,0,0],[1,0,0],[1,0,0],[1,0,0],[1,0,0],[1,0,0],[1,0,0],[1,0,0],[1,0,0],[1,0,0],
[1,0,0],[1,0,0],[1,0,0],[1,0,0],[1,0,0],[1,0,0],[1,0,0],[1,0,0],[1,0,0],[1,0,0],
[1,0,0],[1,0,0],
[0,1,0],[0,1,0],[0,1,0],[0,1,0],[0,1,0],[0,1,0],[0,1,0],[0,1,0],[0,1,0],[0,1,0],
[0,1,0],[0,1,0],[0,1,0],[0,1,0],[0,1,0],[0,1,0],[0,1,0],[0,1,0],[0,1,0],[0,1,0],
[0,1,0],[0,1,0],[0,1,0],[0,1,0],[0,1,0],[0,1,0],[0,1,0],[0,1,0],[0,1,0],[0,1,0],
[0,1,0],[0,1,0],
[0,0,1],[0,0,1],[0,0,1],[0,0,1],[0,0,1],[0,0,1],[0,0,1],[0,0,1],[0,0,1],[0,0,1],
[0,0,1],[0,0,1],[0,0,1],[0,0,1],[0,0,1],[0,0,1],[0,0,1],[0,0,1],[0,0,1],[0,0,1],
[0,0,1],[0,0,1],[0,0,1],[0,0,1],[0,0,1],[0,0,1],[0,0,1],[0,0,1],[0,0,1],[0,0,1],
[0,0,1],[0,0,1]])
#activation function
def a(x):
    return 1.0 / (1 + math.exp(-x))
#derivative of the activation function
def aa(x):
    return a(x) * (1 - a(x))
#learning rate n
n = 0.2
#initialize the weights and biases; the network has 69 parameters in total
#initial values are drawn from a normal distribution
#convolution layer
F = np.random.randn(3,3,3)
#output layer
O = np.random.randn(3,3,2,2)
#6 biases
FB = np.random.randn(3)
OB = np.random.randn(3)
#given row i, column j in the convolution layer, return the position of the pooling-layer unit it connects to
def Pij(i,j):
    x = 0 if i <= 1 else 1
    y = 0 if j <= 1 else 1
    return (x,y)
#process a single image: forward pass, then accumulate the gradients
def proprecessing(_inputs,t):
    #C accumulates the squared error over all 96 images;
    #Wf,Wo,Bf,Bo accumulate the partial derivatives of C with respect to the weights and biases
    global F,O,FB,OB,C,Wf,Wo,Bf,Bo
    #convolution layer: 48 weighted inputs and 48 unit outputs (3 sublayers of 4x4)
    Z = np.zeros((3,4,4))
    Fa = np.zeros((3,4,4))
    for k in range(3):
        w = F[k].flatten()
        for i in range(4):
            for j in range(4):
                Z[k][i][j] = np.sum(w * _inputs[i:i + 3,j:j + 3].flatten()) + FB[k]
                Fa[k][i][j] = a(Z[k][i][j])
    #pooling layer: 12 weighted inputs and 12 unit outputs
    #a pooling unit's output equals its input
    Zp = np.zeros((3,2,2))
    for k in range(3):
        for i in range(2):
            for j in range(2):
                Zp[k][i][j] = np.max([Fa[k][2 * i][2 * j],Fa[k][2 * i][2 * j + 1],Fa[k][2 * i + 1][2 * j],Fa[k][2 * i + 1][2 * j + 1]])
    #output layer: 3 weighted inputs and 3 unit outputs
    Zo = np.zeros((3,1))
    Oa = np.zeros((3,1))
    for k in range(3):
        w = O[k].flatten()
        Zo[k] = np.sum(w * Zp.flatten()) + OB[k]
        Oa[k] = a(Zo[k])
    #squared error of this image, added to the running total
    C += 1.0 / 2 * ((t[0] - Oa[0]) ** 2 + (t[1] - Oa[1]) ** 2 + (t[2] - Oa[2]) ** 2)
    #neuron errors of the output layer (3)
    Do = np.zeros((3,1))
    for k in range(3):
        Do[k] = (Oa[k] - t[k]) * aa(Zo[k])
    #neuron errors of the convolution layer (48)
    Df = np.zeros((3,4,4))
    for k in range(3):
        for i in range(4):
            for j in range(4):
                l = Pij(i,j)
                #1 if this unit was the maximum of its pooling block, otherwise 0
                zeroOrone = 1 if Fa[k][i][j] == Zp[k][l[0]][l[1]] else 0
                Df[k][i][j] = np.sum(Do.flatten() * O[:,k,l[0],l[1]].flatten()) * zeroOrone * aa(Z[k][i][j])
    #partial derivatives of the squared error with respect to the 27 filter weights
    for k in range(3):
        for i in range(3):
            for j in range(3):
                Wf[k][i][j] += np.sum(Df[k].flatten() * _inputs[i:i + 4,j:j + 4].flatten())
    #filter biases (3)
    for k in range(3):
        Bf[k] += np.sum(Df[k].flatten())
    #partial derivatives with respect to the output-layer weights
    for m in range(3):
        for k in range(3):
            for i in range(2):
                for j in range(2):
                    Wo[m][k][i][j] += Do[m] * Zp[k][i][j]
    #output-layer biases (3)
    Bo += Do
#batch gradient descent: 50 passes over all 96 images
for k in range(50):
    C = 0.0
    Wf = np.zeros((3,3,3))
    Wo = np.zeros((3,3,2,2))
    Bf = np.zeros((3,1))
    Bo = np.zeros((3,1))  #must be a fresh array, not an alias of Bf
    for i in range(96):
        proprecessing(_inputs[i],labels[i])
    #update the weights and biases
    F += -1 * n * Wf
    O += -1 * n * Wo
    FB += -1 * n * Bf.flatten()
    OB += -1 * n * Bo.flatten()
    print('pass {0}: total squared error of the network: {1}'.format(k + 1,C))
print(F.tolist())
print(O.tolist())
print(FB.tolist())
print(OB.tolist())
I will not paste the resulting weights here. The test program:
import numpy as np
import math
#the preset test image is a 3
_inputs = np.array([[0,1,1,1,1,0],[0,0,0,0,1,0],[0,0,1,1,0,0],[0,0,0,0,1,0],[0,0,0,0,1,0],[0,1,1,1,0,0]])
#activation function
def a(x):
    return 1.0 / (1 + math.exp(-x))
#derivative of the activation function
def aa(x):
    return a(x) * (1 - a(x))
#weights and biases: the network's 69 trained parameters
#paste in the values obtained from the training program above
#convolution layer
F = np.array([])   #fill in with the trained values
#output layer
O = np.array([])   #fill in with the trained values
#6 biases
FB = np.array([])  #fill in with the trained values
OB = np.array([])  #fill in with the trained values
#given row i, column j in the convolution layer, return the position of the pooling-layer unit it connects to
def Pij(i,j):
    x = 0 if i <= 1 else 1
    y = 0 if j <= 1 else 1
    return (x,y)
#run the forward pass on a single image and print the result
def getResult(_inputs):
    #convolution layer: 48 weighted inputs and 48 unit outputs
    Z = np.zeros((3,4,4))
    Fa = np.zeros((3,4,4))
    for k in range(3):
        w = F[k].flatten()
        for i in range(4):
            for j in range(4):
                Z[k][i][j] = np.sum(w * _inputs[i:i + 3,j:j + 3].flatten()) + FB[k]
                Fa[k][i][j] = a(Z[k][i][j])
    #pooling layer: 12 weighted inputs and 12 unit outputs
    #a pooling unit's output equals its input
    Zp = np.zeros((3,2,2))
    for k in range(3):
        for i in range(2):
            for j in range(2):
                Zp[k][i][j] = np.max([Fa[k][2 * i][2 * j],Fa[k][2 * i][2 * j + 1],Fa[k][2 * i + 1][2 * j],Fa[k][2 * i + 1][2 * j + 1]])
    #output layer: 3 weighted inputs and 3 unit outputs
    Zo = np.zeros((3,1))
    Oa = np.zeros((3,1))
    for k in range(3):
        w = O[k].flatten()
        Zo[k] = np.sum(w * Zp.flatten()) + OB[k]
        Oa[k] = a(Zo[k])
    print('probability the image is a 1: {0}, a 2: {1}, a 3: {2}'.format(Oa[0],Oa[1],Oa[2]))
getResult(_inputs)
And that is all.