Mathematical Derivation of the Backpropagation Algorithm
Video: [双语字幕]吴恩达深度学习deeplearning.ai (Bilibili)
This article walks through the derivation for the following network.
Assume the network has three layers in total: an input layer with $n^{[0]}$ neurons, a hidden layer with $n^{[1]}$ neurons, and an output layer with $n^{[2]}$ neurons.
For now, assume $x$ is a single sample; once the result is derived we will extend it to the vectorized form.
The dimensions of the parameters are listed here to aid understanding during the derivation:
| Parameter | Dimensions |
|---|---|
| $x$ | $(n^{[0]}, 1)$ |
| $W^{[1]}$ | $(n^{[1]}, n^{[0]})$ |
| $b^{[1]}$ | $(n^{[1]}, 1)$ |
| $z^{[1]}$ | $(n^{[1]}, 1)$ |
| $a^{[1]}$ | $(n^{[1]}, 1)$ |
| $W^{[2]}$ | $(n^{[2]}, n^{[1]})$ |
| $b^{[2]}$ | $(n^{[2]}, 1)$ |
| $z^{[2]}$ | $(n^{[2]}, 1)$ |
| $a^{[2]}$ | $(n^{[2]}, 1)$ |
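As a sanity check on the table above, here is a minimal NumPy forward-pass sketch that reproduces these shapes. The concrete sizes ($n^{[0]}=4$, $n^{[1]}=3$, $n^{[2]}=1$) and all variable names are illustrative assumptions, not part of the original derivation:

```python
import numpy as np

# Illustrative sizes: 4 inputs, 3 hidden units, 1 output unit.
n0, n1, n2 = 4, 3, 1
rng = np.random.default_rng(0)

x = rng.standard_normal((n0, 1))    # (n0, 1) single-sample input
W1 = rng.standard_normal((n1, n0))  # (n1, n0)
b1 = np.zeros((n1, 1))              # (n1, 1)
W2 = rng.standard_normal((n2, n1))  # (n2, n1)
b2 = np.zeros((n2, 1))              # (n2, 1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Forward pass; each shape matches the table row for that quantity.
z1 = W1 @ x + b1   # (n1, 1)
a1 = sigmoid(z1)   # (n1, 1)
z2 = W2 @ a1 + b2  # (n2, 1)
a2 = sigmoid(z2)   # (n2, 1)

print(z1.shape, a1.shape, z2.shape, a2.shape)
```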
We assume this is a binary classification problem, so the loss function can be taken as $L(a,y) = -y\log(a) - (1-y)\log(1-a)$.
Backpropagation computes the partial derivatives of this loss with respect to the model's weight matrices and biases; we can obtain them step by step via the chain rule.
First we need two basic derivatives:
$$\begin{aligned} \frac{\partial L}{\partial a} &= -\frac{y}{a} + \frac{1-y}{1-a}\\ &= \frac{a-y}{a(1-a)} \end{aligned} \tag{1}$$
$$\begin{aligned} \sigma'(z) &= \frac{e^{-z}}{(1+e^{-z})^2}\\ &= \frac{1}{1+e^{-z}} \times \frac{e^{-z}}{1+e^{-z}}\\ &= a(1-a) \end{aligned} \tag{2}$$

where $a = \sigma(z) = \frac{1}{1+e^{-z}}$ is the sigmoid activation.
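Equation (2) can be verified numerically. The sketch below (all names are illustrative) compares the analytic derivative $\sigma'(z) = a(1-a)$ against a central finite difference:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z = np.linspace(-4.0, 4.0, 9)
a = sigmoid(z)

# Analytic derivative from equation (2): sigma'(z) = a * (1 - a)
analytic = a * (1.0 - a)

# Central finite difference as an independent check
h = 1e-5
numeric = (sigmoid(z + h) - sigmoid(z - h)) / (2.0 * h)

# The two agree up to the O(h^2) truncation error of the difference quotient.
print(np.max(np.abs(analytic - numeric)))
```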
With these two basic derivatives in hand, we can apply the chain rule to differentiate with respect to each parameter in turn.
To simplify the formulas, we adopt the convention that $da$ denotes the partial derivative of the loss $L$ with respect to $a$.
In the derivation below:
- $\cdot$ denotes ordinary matrix multiplication
- $*$ denotes element-wise multiplication
$$\begin{aligned} &da^{[2]}=\frac{\partial L}{\partial a^{[2]}}=\frac{a^{[2]}-y}{a^{[2]}(1-a^{[2]})} \qquad & (n^{[2]},1) \\ &dz^{[2]}=\frac{\partial L}{\partial z^{[2]}}=\frac{\partial L}{\partial a^{[2]}}\cdot \frac{da^{[2]}}{dz^{[2]}}=da^{[2]} * \sigma'(z^{[2]}) = a^{[2]}-y \qquad & (n^{[2]},1) \\ &dW^{[2]}=\frac{\partial L}{\partial W^{[2]}}=\frac{\partial L}{\partial z^{[2]}}\cdot \frac{\partial z^{[2]}}{\partial W^{[2]}} = dz^{[2]}\cdot a^{[1]^T} \qquad &(n^{[2]},1) \cdot (n^{[1]},1)^T=(n^{[2]},n^{[1]}) \\ &db^{[2]}=\frac{\partial L}{\partial b^{[2]}}=\frac{\partial L}{\partial z^{[2]}}\cdot \frac{\partial z^{[2]}}{\partial b^{[2]}} = dz^{[2]} \qquad &(n^{[2]},1) \\ &da^{[1]}=\frac{\partial L}{\partial a^{[1]}}=\frac{\partial L}{\partial z^{[2]}}\cdot \frac{\partial z^{[2]}}{\partial a^{[1]}}=W^{[2]^T}\cdot dz^{[2]} \qquad &(n^{[2]},n^{[1]})^T\cdot (n^{[2]},1)= (n^{[1]},1) \\ &dz^{[1]}=\frac{\partial L}{\partial z^{[1]}} = \frac{\partial L}{\partial a^{[1]}}\cdot \frac{da^{[1]}}{dz^{[1]}}=da^{[1]} * \sigma'(z^{[1]}) = W^{[2]^T}\cdot dz^{[2]} * \sigma'(z^{[1]}) \qquad &(n^{[1]},1) * (n^{[1]},1)= (n^{[1]},1) \\ &dW^{[1]}=\frac{\partial L}{\partial W^{[1]}} = \frac{\partial L}{\partial z^{[1]}}\cdot \frac{\partial z^{[1]}}{\partial W^{[1]}}=dz^{[1]}\cdot x^T \qquad & (n^{[1]},1) \cdot (n^{[0]},1)^T=(n^{[1]},n^{[0]}) \\ &db^{[1]}=\frac{\partial L}{\partial b^{[1]}} = \frac{\partial L}{\partial z^{[1]}}\cdot \frac{\partial z^{[1]}}{\partial b^{[1]}}=dz^{[1]} \qquad & (n^{[1]},1) \end{aligned}$$
If you have a calculus background, it is worth working through this derivation yourself, carrying the dimensions along at each step and checking that every result has the shape required by the table above.
For example, in $dW^{[2]}$ and $dW^{[1]}$ we multiply by $a^{[1]^T}$ and $x^T$ rather than by $a^{[1]}$ and $x$, precisely so that the results match the dimensions of $W^{[2]}$ and $W^{[1]}$.
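The single-sample derivation above can be sketched directly in NumPy. Beyond asserting the shapes, the sketch checks one entry of $dW^{[2]}$ against a finite difference of the loss; all sizes and names here are illustrative assumptions:

```python
import numpy as np

n0, n1, n2 = 4, 3, 1  # illustrative layer sizes
rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = rng.standard_normal((n0, 1))
y = np.array([[1.0]])
W1 = rng.standard_normal((n1, n0)); b1 = np.zeros((n1, 1))
W2 = rng.standard_normal((n2, n1)); b2 = np.zeros((n2, 1))

# Forward pass
z1 = W1 @ x + b1; a1 = sigmoid(z1)
z2 = W2 @ a1 + b2; a2 = sigmoid(z2)

# Backward pass, mirroring the derivation line by line
dz2 = a2 - y               # (n2, 1)
dW2 = dz2 @ a1.T           # (n2, 1) @ (n1, 1).T -> (n2, n1)
db2 = dz2                  # (n2, 1)
da1 = W2.T @ dz2           # (n1, 1)
dz1 = da1 * a1 * (1 - a1)  # (n1, 1), element-wise
dW1 = dz1 @ x.T            # (n1, 1) @ (n0, 1).T -> (n1, n0)
db1 = dz1                  # (n1, 1)

# Finite-difference check on one entry of W2
def loss(W2_):
    a2_ = sigmoid(W2_ @ a1 + b2)
    return (-y * np.log(a2_) - (1 - y) * np.log(1 - a2_))[0, 0]

eps = 1e-6
W2p = W2.copy(); W2p[0, 0] += eps
W2m = W2.copy(); W2m[0, 0] -= eps
num_grad = (loss(W2p) - loss(W2m)) / (2 * eps)
print(abs(num_grad - dW2[0, 0]))  # should be close to zero
```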
Next we show the vectorized form, which is the computation actually used in deep learning code.
Here `np.sum(dZ[2], axis=1, keepdims=True)` is NumPy's summation function: `axis=1` sums across the $m$ samples, and `keepdims=True` keeps the result a column vector.
$$\begin{aligned} &dZ^{[2]}=A^{[2]}-Y\\ &dW^{[2]}=\frac{1}{m}dZ^{[2]}A^{[1]^T}\\ &db^{[2]}=\frac{1}{m}*\text{np.sum}(dZ^{[2]}, \text{axis}=1, \text{keepdims=True})\\ &dZ^{[1]}=W^{[2]^T}dZ^{[2]}*g^{[1]\prime}(Z^{[1]})\\ &dW^{[1]}=\frac{1}{m}dZ^{[1]}X^T\\ &db^{[1]}=\frac{1}{m}*\text{np.sum}(dZ^{[1]}, \text{axis}=1, \text{keepdims=True}) \end{aligned}$$
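A minimal sketch of these vectorized formulas, assuming $m=5$ samples stored as the columns of $X$ (all sizes and names are illustrative, and the bias add relies on NumPy broadcasting):

```python
import numpy as np

n0, n1, n2, m = 4, 3, 1, 5  # illustrative sizes; m = number of samples
rng = np.random.default_rng(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = rng.standard_normal((n0, m))                   # samples as columns
Y = rng.integers(0, 2, size=(n2, m)).astype(float)  # binary labels
W1 = rng.standard_normal((n1, n0)); b1 = np.zeros((n1, 1))
W2 = rng.standard_normal((n2, n1)); b2 = np.zeros((n2, 1))

# Vectorized forward pass; broadcasting adds b to every column.
Z1 = W1 @ X + b1; A1 = sigmoid(Z1)
Z2 = W2 @ A1 + b2; A2 = sigmoid(Z2)

# Vectorized backward pass, averaging gradients over the m samples.
dZ2 = A2 - Y
dW2 = (1 / m) * dZ2 @ A1.T
db2 = (1 / m) * np.sum(dZ2, axis=1, keepdims=True)
dZ1 = (W2.T @ dZ2) * A1 * (1 - A1)  # * is element-wise
dW1 = (1 / m) * dZ1 @ X.T
db1 = (1 / m) * np.sum(dZ1, axis=1, keepdims=True)

print(dW2.shape, db2.shape, dW1.shape, db1.shape)
```

Note that every gradient has the same shape as the parameter it updates, so a gradient-descent step like `W1 -= lr * dW1` is well defined.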