Neural Networks: Learning
Cost function
Logistic regression cost function:
$$
J(\theta)=-\frac{1}{m}\sum_{i=1}^{m}\left[y^{(i)}\log\left(h_\theta(x^{(i)})\right)+\left(1-y^{(i)}\right)\log\left(1-h_\theta(x^{(i)})\right)\right]+\frac{\lambda}{2m}\sum_{j=1}^{n}\theta_j^2
$$
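As a concrete reference, here is a minimal NumPy sketch of this cost; the names `sigmoid` and `logistic_cost` and the argument layout are illustrative, not from the original post. Note that $\theta_0$ is excluded from regularization, matching the sum starting at $j=1$.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_cost(theta, X, y, lam):
    """Regularized logistic regression cost J(theta).

    X: (m, n+1) design matrix with a leading column of ones,
    y: (m,) labels in {0, 1}, lam: regularization strength lambda.
    """
    m = X.shape[0]
    h = sigmoid(X @ theta)                            # h_theta(x^(i)) for all i
    cross_entropy = -(y @ np.log(h) + (1 - y) @ np.log(1 - h)) / m
    reg = (lam / (2 * m)) * np.sum(theta[1:] ** 2)    # theta_0 is not regularized
    return cross_entropy + reg
```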
Neural network cost function:
$$
J(\Theta)=-\frac{1}{m}\sum_{i=1}^{m}\sum_{k=1}^{K}\left[y_k^{(i)}\log\left(h_\Theta(x^{(i)})\right)_k+\left(1-y_k^{(i)}\right)\log\left(1-h_\Theta(x^{(i)})\right)_k\right]+\frac{\lambda}{2m}\sum_{l=1}^{L-1}\sum_{i=1}^{s_l}\sum_{j=1}^{s_{l+1}}\left(\Theta_{ji}^{(l)}\right)^2
$$

where $L$ is the total number of layers, $s_l$ is the number of units in layer $l$ (not counting the bias unit), and $K$ is the number of output units.
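A minimal sketch of this cost in NumPy, assuming the forward pass has already produced the output matrix `H` and the labels have been one-hot encoded into `Y` (all names here are illustrative):

```python
import numpy as np

def nn_cost(H, Y, Thetas, lam):
    """Neural-network cost J(Theta) for K output units.

    H:      (m, K) network outputs, H[i, k] = h_Theta(x^(i))_k,
    Y:      (m, K) one-hot encoded labels, Y[i, k] = y_k^(i),
    Thetas: list of weight matrices Theta^(1), ..., Theta^(L-1),
    lam:    regularization strength lambda.
    """
    m = Y.shape[0]
    cross_entropy = -np.sum(Y * np.log(H) + (1 - Y) * np.log(1 - H)) / m
    # The bias column (first column) of each Theta is not regularized.
    reg = (lam / (2 * m)) * sum(np.sum(T[:, 1:] ** 2) for T in Thetas)
    return cross_entropy + reg
```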
Backpropagation algorithm
Backpropagation:
Intuition: $\delta_j^{(l)}$ = the "error" of node $j$ in layer $l$.
Computation:
For each output unit (taking layer 4 as the output layer):
$$
\delta_j^{(4)}=a_j^{(4)}-y_j
$$
or, in vectorized form:
$$
\delta^{(4)}=a^{(4)}-y
$$
Then compute $\delta^{(l)},\ \delta^{(l-1)},\ \dots,\ \delta^{(2)}$ layer by layer, working backward from the output layer (there is no $\delta^{(1)}$ term, since the input layer is observed data):
$$
\delta^{(3)}=(\Theta^{(3)})^T\delta^{(4)}\,.\!*\;g'(z^{(3)}),\qquad g'(z^{(3)})=a^{(3)}\,.\!*\,(1-a^{(3)})
$$
$$
\delta^{(2)}=(\Theta^{(2)})^T\delta^{(3)}\,.\!*\;g'(z^{(2)}),\qquad g'(z^{(2)})=a^{(2)}\,.\!*\,(1-a^{(2)})
$$

(where $.*$ denotes element-wise multiplication).
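A per-example sketch of these two steps in NumPy (function and variable names are my own). The `[1:]` slice drops the component corresponding to the bias unit, an implementation detail the slide formula glosses over:

```python
import numpy as np  # all arguments below are 1-D NumPy arrays

def backward_deltas(a2, a3, a4, y, Theta2, Theta3):
    """Delta terms for one example in a 4-layer network.

    a2, a3:         hidden-layer activations WITHOUT their bias units,
    a4:             output activations, y: label vector,
    Theta2, Theta3: weight matrices whose first column multiplies the bias.
    """
    delta4 = a4 - y                                   # output-layer error
    delta3 = (Theta3.T @ delta4)[1:] * a3 * (1 - a3)  # g'(z3) = a3 .* (1 - a3)
    delta2 = (Theta2.T @ delta3)[1:] * a2 * (1 - a2)  # g'(z2) = a2 .* (1 - a2)
    return delta2, delta3, delta4
```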
It can be shown that (ignoring regularization, i.e. with $\lambda = 0$):
$$
\frac{\partial}{\partial\Theta_{ij}^{(l)}}J(\Theta)=a_j^{(l)}\delta_i^{(l+1)}
$$
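In code this per-example gradient is an outer product; a sketch under the assumption that `a_l` is the layer-$l$ activation vector without its bias unit (hypothetical names):

```python
import numpy as np

def layer_gradient(delta_next, a_l):
    """Unregularized per-example gradient: delta^(l+1) times (a^(l))^T."""
    a_with_bias = np.concatenate(([1.0], a_l))  # prepend the bias unit a_0 = 1
    return np.outer(delta_next, a_with_bias)    # shape: s_{l+1} x (s_l + 1)
```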
Original post: https://blog.csdn.net/qq_29317617/article/details/86312154
Understanding backpropagation: Backpropagation intuition
The process in detail:
Forward propagation:
In other words:
$$
\delta_j^{(l)}=\frac{\partial}{\partial z_j^{(l)}}\operatorname{cost}(i)\qquad\text{for } j\geq 0,
$$
where
$$
\operatorname{cost}(i)=y^{(i)}\log\left(h_\theta(x^{(i)})\right)+\left(1-y^{(i)}\right)\log\left(1-h_\theta(x^{(i)})\right)
$$
The $\delta$ terms are the partial derivatives of the cost function with respect to these intermediate quantities $z_j^{(l)}$; they measure how much a change in a node's weighted input would affect the network's output, and hence the cost.
Implementation note: Unrolling parameters
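A sketch of the idea in NumPy: optimizers such as `scipy.optimize.minimize` expect a single flat parameter vector, so the weight matrices are unrolled before optimization and reshaped back inside the cost function (the shapes below are illustrative):

```python
import numpy as np

Theta1 = np.random.rand(10, 11)   # hypothetical 10 x 11 weight matrix
Theta2 = np.random.rand(1, 11)    # hypothetical  1 x 11 weight matrix

theta_vec = np.concatenate([Theta1.ravel(), Theta2.ravel()])   # unroll

# Inside the cost function, recover the matrices from the flat vector:
Theta1_back = theta_vec[:10 * 11].reshape(10, 11)
Theta2_back = theta_vec[10 * 11:].reshape(1, 11)
```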
Gradient checking
Implementation notes:
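A minimal sketch of the two-sided finite-difference check in NumPy (`numerical_gradient` is an illustrative name; `J` is any function mapping a parameter vector to a scalar cost):

```python
import numpy as np

def numerical_gradient(J, theta, eps=1e-4):
    """Approximate each dJ/dtheta_i by (J(theta + eps*e_i) - J(theta - eps*e_i)) / (2*eps)."""
    grad = np.zeros_like(theta, dtype=float)
    for i in range(theta.size):
        e = np.zeros_like(theta, dtype=float)
        e[i] = eps
        grad[i] = (J(theta + e) - J(theta - e)) / (2 * eps)
    return grad
```

The result should agree with the backpropagation gradient to several decimal places; because the check is very slow, it should be disabled before actual training.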
Random initialization
Zero initialization: if all parameters start at zero, then after each update the parameters corresponding to the inputs going into each of two hidden units remain identical, so every hidden unit keeps computing the same function.
Random initialization: initialize each $\Theta_{ij}^{(l)}$ to a small random value to break this symmetry.
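A sketch of symmetry-breaking initialization in NumPy (`random_init` is an illustrative name; `epsilon_init = 0.12` is a commonly used value from the course exercises, not a universal constant):

```python
import numpy as np

def random_init(l_out, l_in, epsilon_init=0.12):
    """Weight matrix of shape (l_out, 1 + l_in) with entries uniform in
    [-epsilon_init, epsilon_init]; the +1 column handles the bias unit."""
    return np.random.rand(l_out, 1 + l_in) * 2 * epsilon_init - epsilon_init
```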
Putting it together