A Machine Learning Newcomer's Non-Rigorous Mathematical Derivation of RNN Principles (3)

—— SKYWALKER2099@CSDN 20230410

Before everything:
When I wanted to truly understand how an RNN works, writing one by hand was clearly the best approach. But most of the pure-numpy derivation code online either lacks a thorough accompanying explanation, or presents the mathematics in a way that seems clear enough, yet leaves you with no idea how to actually implement it from scratch.
While digging through references I found the rnn_lstm_from_scratch code, which looked quite detailed, but again I could not quite follow its mathematics. [Note: in that code the meanings of V and W are swapped relative to most diagrams and to this article, so read carefully.] So, combining ideas from various sources, I try here to lay its mathematics out clearly.
What follows is therefore a non-rigorous account of the basic mathematical logic of the RNN, i.e. how it runs under the hood. (The mathematical notation is not very formal; it leans toward conveying a line of thought.)
————

The loss function (ignoring any regularization term) is:
$$L = \sum_{t=1}^{T} L^{(t)} \tag{1}$$
$$L^{(t)} = -\sum_{i=1}^{C} p_i \log(q_i) \tag{2}$$
where $C$ is the number of classes, $p_i$ is the true distribution and $q_i$ the prediction.
For example:
TRUE: [0, 1, 0, 0, 0, 0, 0, 0, 0, 0], PRED: [0.1, 0.6, 0.3, 0, 0, 0, 0, 0, 0, 0]
gives a cross-entropy of $-\ln(0.6) \approx 0.51$.
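As a quick numeric check of Eq. (2), here is a minimal numpy sketch; the two arrays are just the TRUE/PRED vectors above:

```python
import numpy as np

# True one-hot distribution p and predicted distribution q from the example.
p = np.array([0, 1, 0, 0, 0, 0, 0, 0, 0, 0], dtype=float)
q = np.array([0.1, 0.6, 0.3, 0, 0, 0, 0, 0, 0, 0])

# Eq. (2); restrict to entries with p_i > 0 so the zeros never hit log(0).
loss = -np.sum(p[p > 0] * np.log(q[p > 0]))
print(loss)  # ~0.5108, i.e. -ln(0.6)
```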

II. Backward pass

2. Trying to compute $\frac{\partial L}{\partial W}$

Reference link 1 (I think it contains an error):【浅析循环神经网络(RNN)的反向求导过程】
Reference link 2:【浅析循环神经网络(RNN)的反向求导过程】
Reference link 3:【RNN前向传播、反向传播与并行计算(非常详细)】
Reference link 4:【Recurrent Neural Networks Tutorial, Part 3 – Backpropagation Through Time and Vanishing Gradients】
Reference link 5, the most useful for the programming side:【RNN的反向传播推导与numpy实现】
$W$ is the recurrent weight matrix feeding back into the hidden layer. First, recall a few formulas:
$$HOUT^{(t)} = f(Ux^{(t)} + Ws^{(t-1)}) = f(Ux^{(t)} + W\,HOUT^{(t-1)}), \quad \text{where the activation } f \text{ is } \tanh$$
$$HINW^{(t)} = W \cdot HOUT^{(t-1)} \tag{3.9}$$
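To keep the notation concrete, here is a minimal forward-pass sketch of these two formulas in numpy; the sizes (hidden_size = 3, input_size = 2, T = 4) and the random initialization are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
hidden_size, input_size, T = 3, 2, 4
U = rng.normal(scale=0.1, size=(hidden_size, input_size))
W = rng.normal(scale=0.1, size=(hidden_size, hidden_size))
xs = [rng.normal(size=(input_size, 1)) for _ in range(T)]  # x^(1)..x^(T), 0-indexed

HOUT = [np.zeros((hidden_size, 1))]  # HOUT^(0): the all-zero initial state
HIN = [None]                         # pad index 0 so HIN[t] matches t = 1..T
for t in range(1, T + 1):
    HIN.append(U @ xs[t - 1] + W @ HOUT[t - 1])  # HIN^(t) = U x^(t) + W HOUT^(t-1)
    HOUT.append(np.tanh(HIN[t]))                 # HOUT^(t) = f(HIN^(t)), f = tanh
```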

$$\frac{\partial L}{\partial W} = \frac{\partial \sum_t L^{(t)}}{\partial W} \overset{\text{swap sum and derivative}}{=} \sum_{t=1}^{T=4} \frac{\partial L^{(t)}}{\partial a^{(t)}} \frac{\partial a^{(t)}}{\partial HOUT^{(t)}} \frac{\partial HOUT^{(t)}}{\partial W} \tag{46}$$
Meanwhile:
$$HOUT^{(t)} = f(Ux^{(t)} + W\,HOUT^{(t-1)})$$
$$= f\big(Ux^{(t)} + W\,f\big(Ux^{(t-1)} + W\,f\big(Ux^{(t-2)} + \cdots + W\,f(Ux^{(1)} + W\,HOUT^{(0)})\cdots\big)\big)\big) \tag{47}$$
so $HOUT^{(t)}$ can be written as:
$$HOUT^{(t)} = HOUT^{(t)}(HOUT^{(t)}, HOUT^{(t-1)}, HOUT^{(t-2)}, HOUT^{(t-3)}, \dots, HOUT^{(1)}) \tag{48}$$
i.e. a function of many variables, so the last factor in (46) involves all of the hidden states.
So by the chain rule this can be written out in parts (which, I believe, is where the error in Eq. (8) of reference link 1 originates):
Reference link 1 writes it as:
$$\frac{\partial HOUT^{(t)}}{\partial W} = \sum_{k=1}^{t} \frac{\partial HOUT^{(t)}}{\partial HOUT^{(k)}} \frac{\partial HOUT^{(k)}}{\partial W} \tag{49.1}$$
$$\frac{\partial HOUT^{(t)}}{\partial HOUT^{(k)}} = \frac{\partial HOUT^{(t)}}{\partial HOUT^{(t-1)}} \frac{\partial HOUT^{(t-1)}}{\partial HOUT^{(t-2)}} \cdots \frac{\partial HOUT^{(k+1)}}{\partial HOUT^{(k)}} \tag{49.1}$$
I believe it should instead be:
$$\frac{\partial HOUT^{(t)}}{\partial W} = \frac{\partial f(Ux^{(t)} + W\,HOUT^{(t-1)})}{\partial W} \quad [\text{the argument of } f \text{ is } HIN^{(t)}] \tag{49.2}$$
$$= \frac{\partial f(Ux^{(t)} + W\,f(Ux^{(t-1)} + W\,HOUT^{(t-2)}))}{\partial W}$$
$$\cdots$$
$$= \frac{\partial f\big(Ux^{(t)} + W\,f\big(Ux^{(t-1)} + W\,f(\cdots f(Ux^{(1)} + W\,HOUT^{(0)}[\text{the initial state}])\cdots)\big)\big)}{\partial W}$$
which looks very complicated,
$$= \frac{\partial HOUT^{(t)}}{\partial HOUT^{(t)}} \frac{\partial HOUT^{(t)}}{\partial W} + \frac{\partial HOUT^{(t)}}{\partial HOUT^{(t-1)}} \frac{\partial HOUT^{(t-1)}}{\partial W} + \frac{\partial HOUT^{(t)}}{\partial HOUT^{(t-2)}} \frac{\partial HOUT^{(t-2)}}{\partial W} + \dots + \frac{\partial HOUT^{(t)}}{\partial HOUT^{(1)}} \frac{\partial HOUT^{(1)}}{\partial W}$$
that is:
$$\sum_{k=1}^{t} \frac{\partial HOUT^{(t)}}{\partial HOUT^{(k)}} \frac{\partial HOUT^{(k)}}{\partial W} \tag{49.2}$$
(where each $\frac{\partial HOUT^{(k)}}{\partial W}$ denotes only the direct dependence of $HOUT^{(k)}$ on $W$, with $HOUT^{(k-1)}$ treated as constant).

Putting this together:
$$\frac{\partial HOUT^{(t)}}{\partial W} = \sum_{k=1}^{t} \frac{\partial HOUT^{(t)}}{\partial HOUT^{(k)}} \frac{\partial HOUT^{(k)}}{\partial W} \tag{50}$$
Substituting (50) into (46) gives:
$$\frac{\partial L}{\partial W} = \frac{\partial \sum_t L^{(t)}}{\partial W} \overset{\text{swap sum and derivative}}{=} \sum_{t=1}^{T=4} \frac{\partial L^{(t)}}{\partial a^{(t)}} \frac{\partial a^{(t)}}{\partial HOUT^{(t)}} \frac{\partial HOUT^{(t)}}{\partial W}$$
$$= \sum_{t=1}^{T=4} \frac{\partial L^{(t)}}{\partial a^{(t)}} \frac{\partial a^{(t)}}{\partial HOUT^{(t)}} \left(\sum_{k=1}^{t} \frac{\partial HOUT^{(t)}}{\partial HOUT^{(k)}} \frac{\partial HOUT^{(k)}}{\partial W}\right) \tag{51}$$
The first factor was already computed earlier.
The second factor follows from (4.1):
$$a^{(t)} = V \cdot HOUT^{(t)}$$
and the usual differentiation rule:
$$\frac{\partial a^{(t)}}{\partial HOUT^{(t)}} = V \tag{52}$$

The remaining factor is very messy, so let us expand it and look for a pattern, assuming $t$ starts from 1.
At $t=1$:
$$= \frac{\partial L^{(1)}}{\partial a^{(1)}} \cdot V \cdot \left\{\frac{\partial HOUT^{(1)}}{\partial HOUT^{(1)}} \frac{\partial HOUT^{(1)}}{\partial W}\right\} = [0], \text{ since the initial state } HOUT^{(0)} \text{ is } 0$$
At $t=2$:
$$= \frac{\partial L^{(2)}}{\partial a^{(2)}} \cdot V \cdot \left\{\frac{\partial HOUT^{(2)}}{\partial HOUT^{(2)}} \frac{\partial HOUT^{(2)}}{\partial W} + \frac{\partial HOUT^{(2)}}{\partial HOUT^{(1)}} \frac{\partial HOUT^{(1)}}{\partial W}\right\}$$
At $t=3$:
$$= \frac{\partial L^{(3)}}{\partial a^{(3)}} \cdot V \cdot \left\{\frac{\partial HOUT^{(3)}}{\partial HOUT^{(3)}} \frac{\partial HOUT^{(3)}}{\partial W} + \frac{\partial HOUT^{(3)}}{\partial HOUT^{(2)}} \frac{\partial HOUT^{(2)}}{\partial W} + \frac{\partial HOUT^{(3)}}{\partial HOUT^{(1)}} \frac{\partial HOUT^{(1)}}{\partial W}\right\}$$
At $t=4$ (the last time step in this example; if the index range were 0 to 3 instead, $t=3$ would be the last step):
$$= \frac{\partial L^{(4)}}{\partial a^{(4)}} \cdot V \cdot \left\{\frac{\partial HOUT^{(4)}}{\partial HOUT^{(4)}} \frac{\partial HOUT^{(4)}}{\partial W} + \frac{\partial HOUT^{(4)}}{\partial HOUT^{(3)}} \frac{\partial HOUT^{(3)}}{\partial W} + \frac{\partial HOUT^{(4)}}{\partial HOUT^{(2)}} \frac{\partial HOUT^{(2)}}{\partial W} + \frac{\partial HOUT^{(4)}}{\partial HOUT^{(1)}} \frac{\partial HOUT^{(1)}}{\partial W}\right\}$$
The final result is the sum of these four expressions.

From:
$$HOUT^{(t)} = f(Ux^{(t)} + W\,HOUT^{(t-1)})$$
and:
$$\tanh'(x) = 1 - \tanh^{2}(x)$$
we get:
$$\frac{\partial HOUT^{(t)}}{\partial HOUT^{(t-1)}} = (1 - (HOUT^{(t)})^{2}) \cdot W$$
$$\frac{\partial HOUT^{(t)}}{\partial W} = (1 - (HOUT^{(t)})^{2}) \cdot HOUT^{(t-1)}$$
If we try to evaluate the expression above directly, we immediately find it is far too complicated to be practical. Moreover, each $\frac{\partial HOUT^{(t)}}{\partial HOUT^{(k)}}$ is itself another chain, $\prod_{i=k+1}^{t}\frac{\partial HOUT^{(i)}}{\partial HOUT^{(i-1)}}$ (this product is exactly where the so-called vanishing and exploding gradients arise; see the small demonstration below). This complexity does nothing to help us understand how to program it, so we must approach the problem from a different angle.
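A tiny numpy illustration of the two regimes. To keep it exact, the product is evaluated at the zero hidden state, where $\tanh' = 1$ and each factor $diag(1-(HOUT^{(i)})^{2})\,W$ reduces to $W$ itself; the sizes and the two weight scales are assumptions chosen only for illustration (away from zero, the extra $diag(1-(HOUT)^{2})$ factor only damps the product further):

```python
import numpy as np

rng = np.random.default_rng(0)
hidden_size, steps = 3, 30

# At HOUT = 0 each Jacobian dHOUT^(i)/dHOUT^(i-1) is just W,
# so the chained product over `steps` time steps is W^steps.
for scale in (0.1, 2.0):
    W = rng.normal(scale=scale, size=(hidden_size, hidden_size))
    J = np.linalg.matrix_power(W, steps)
    print(scale, np.linalg.norm(J))  # vanishes for small W, explodes for large W
```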
The original approach was as in (46):
$$\frac{\partial L}{\partial W} = \frac{\partial \sum_t L^{(t)}}{\partial W} = \sum_{t=1}^{T=4} \frac{\partial L^{(t)}}{\partial a^{(t)}} \frac{\partial a^{(t)}}{\partial HOUT^{(t)}} \frac{\partial HOUT^{(t)}}{\partial W} \tag{53}$$
We now change it to:
$$\frac{\partial L}{\partial W} = \sum_{t=1}^{T=4} \frac{\partial L}{\partial HIN^{(t)}} \left(\frac{\partial HIN^{(t)}}{\partial W}\right)^{T} = \sum_{t=1}^{T=4} \frac{\partial L}{\partial HIN^{(t)}} (HOUT^{(t-1)})^{T} \tag{55}$$
The transposes here follow reference link 5; since these are matrices, the transposes are necessary, whereas the earlier sketchy derivations paid no attention to this. Also, because a later $L^{(t)}$ may still depend on an earlier $HOUT^{(t-1)}$, the per-step losses $L^{(t)}$ cannot be pulled apart and summed independently, which is why the numerator is the total loss $L$.
Define:
$$\delta^{(t)} = \frac{\partial L}{\partial HIN^{(t)}} = \left(\frac{\partial HOUT^{(t)}}{\partial HIN^{(t)}}\right)^{T} \left(\frac{\partial L}{\partial HOUT^{(t)}}\right) \tag{56}$$
Note: this $\delta^{(t)}$ is exactly the `d_f` in the final program.
From:
$$\frac{\partial HOUT^{(t)}}{\partial HIN^{(t)}} = (1 - (HOUT^{(t)})^{2}) = \tanh'(HIN^{(t)}) \tag{57}$$
Both $HOUT^{(t)}$ and $HIN^{(t)}$ are matrices of size $hidden\_size \times 1$, so differentiating one with respect to the other yields a Jacobian matrix:

$$\frac{\partial HOUT^{(t)}}{\partial HIN^{(t)}} = \begin{pmatrix} \frac{\partial HOUT_{1}^{(t)}}{\partial HIN_{1}^{(t)}} & \frac{\partial HOUT_{1}^{(t)}}{\partial HIN_{2}^{(t)}} & \cdots & \frac{\partial HOUT_{1}^{(t)}}{\partial HIN_{hidden\_size}^{(t)}} \\ \vdots & & \ddots & \vdots \\ \frac{\partial HOUT_{hidden\_size}^{(t)}}{\partial HIN_{1}^{(t)}} & \frac{\partial HOUT_{hidden\_size}^{(t)}}{\partial HIN_{2}^{(t)}} & \cdots & \frac{\partial HOUT_{hidden\_size}^{(t)}}{\partial HIN_{hidden\_size}^{(t)}} \end{pmatrix} \tag{58}$$

Moreover, because of the elementwise dependence ($HOUT_{i}^{(t)}$ depends only on $HIN_{i}^{(t)}$), this Jacobian is a diagonal matrix: everything off the diagonal is 0.
So:

$$\left(\frac{\partial HOUT^{(t)}}{\partial HIN^{(t)}}\right)^{T} = diag(\tanh'(HIN^{(t)})) \tag{59}$$
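Because the Jacobian is diagonal, multiplying by $diag(\tanh'(HIN^{(t)}))$ is just an elementwise product, which is how the reference code gets away with a plain `*`. A quick numpy check of this equivalence (the vectors are random placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
hin = rng.normal(size=(3, 1))   # a stand-in HIN^(t), hidden_size = 3
v = rng.normal(size=(3, 1))     # any vector the Jacobian acts on

deriv = 1 - np.tanh(hin) ** 2   # tanh'(HIN^(t)) = 1 - tanh^2(HIN^(t)), Eq. (57)
assert np.allclose(np.diag(deriv.ravel()) @ v, deriv * v)
```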

Next comes the key point (this is what lets the derivation go through, and the main reason the earlier line of thought was not quite right):
During the forward pass, $HOUT^{(t)}$ influences $L$ at the current step through $a^{(t)}$, and it is also passed on to the next step, where it influences $L$ again. So the derivative has two parts: one directly from $a^{(t)}$, the other from $HIN^{(t+1)}$, since the computation of $HIN^{(t+1)}$ depends on $HOUT^{(t)}$. In other words, the gradient comes not only from the current output but also from the next time step. (reference link 5)

Therefore:
$$\frac{\partial L}{\partial HOUT^{(t)}} = \left(\frac{\partial a^{(t)}}{\partial HOUT^{(t)}}\right)^{T} \frac{\partial L}{\partial a^{(t)}} + \left(\frac{\partial HIN^{(t+1)}}{\partial HOUT^{(t)}}\right)^{T} \frac{\partial L}{\partial HIN^{(t+1)}} \tag{60}$$

$$= V^{T}(\hat{y}^{(t)} - y^{(t)}) + W^{T}\delta^{(t+1)}$$
where $\hat{y}^{(t)}$ is the prediction and $y^{(t)}$ the ground truth; the first term is the result derived in part 1. In this article's notation:

$$= V^{T}(\hat{o}^{(t)} - p^{(t)}) + W^{T}\delta^{(t+1)}$$
This exposes the recurrence relation, which is what makes it programmable; a minimal sketch follows.
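A minimal numpy sketch of this recursion (the names are my own; `o_hat[t]` and `p_true[t]` are assumed per-step prediction and one-hot target columns, and `HOUT` comes from a forward pass like the one sketched earlier):

```python
import numpy as np

def backward_deltas(HOUT, o_hat, p_true, V, W, T):
    """delta[t] = dL/dHIN^(t), computed backwards via Eqs. (56) and (60)."""
    delta = [None] * (T + 2)
    delta[T + 1] = np.zeros_like(HOUT[1])  # no L^(T+1) exists, so no future term
    for t in range(T, 0, -1):
        d_hout = V.T @ (o_hat[t] - p_true[t]) + W.T @ delta[t + 1]  # Eq. (60)
        delta[t] = (1 - HOUT[t] ** 2) * d_hout  # diag(tanh') as elementwise product
    return delta
```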
In summary:
$$\frac{\partial L}{\partial W} = \sum_{t=1}^{T=4} \frac{\partial L}{\partial HIN^{(t)}} \left(\frac{\partial HIN^{(t)}}{\partial W}\right)^{T} = \sum_{t=1}^{T=4} \delta^{(t)} (HOUT^{(t-1)})^{T}$$

$$= \sum_{t=1}^{T=4} \left(diag(\tanh'(HIN^{(t)}))\left(V^{T}(\hat{o}^{(t)} - p^{(t)}) + W^{T}\delta^{(t+1)}\right)\right)(HOUT^{(t-1)})^{T} \tag{61}$$

At last, back to the example. Take the time range to be $1 \le t \le 4$.
$t=4$:
$$\left(diag(\tanh'(HIN^{(4)}))\left(V^{T}(\hat{o}^{(4)} - p^{(4)}) + 0\right)\right)(HOUT^{(4-1)})^{T}$$
$$= \left(diag(1 - (HOUT^{(4)})^{2})\left(\begin{pmatrix} 0.1 & 0.3 & 0.2 \\ 0.5 & 0.8 & 0.2 \end{pmatrix}^{T}\begin{pmatrix} 0.33 - 1 \\ 0.67 \end{pmatrix} + 0\right)\right)\begin{pmatrix} 0.1 \\ 0.2 \\ 0.3 \end{pmatrix}^{T} \tag{62}$$

There is no $\delta^{(t+1)}$ term here because $HOUT^{(4)}$ has no later loss $L^{(5)}$ left to influence. The concrete values are not worked out here.
And so on.
$t=3$:
$$\left(diag(\tanh'(HIN^{(3)}))\left(V^{T}(\hat{o}^{(3)} - p^{(3)}) + W^{T}\delta^{(3+1)}\right)\right)(HOUT^{(3-1)})^{T} \tag{63}$$
$t=2$:
$$\left(diag(\tanh'(HIN^{(2)}))\left(V^{T}(\hat{o}^{(2)} - p^{(2)}) + W^{T}\delta^{(2+1)}\right)\right)(HOUT^{(2-1)})^{T} \tag{64}$$
$t=1$:
$$\left(diag(\tanh'(HIN^{(1)}))\left(V^{T}(\hat{o}^{(1)} - p^{(1)}) + W^{T}\delta^{(1+1)}\right)\right)(HOUT^{(0)})^{T} \tag{65}$$
Here $(HOUT^{(0)})^{T}$ is the initial state, which can generally be set to $[0]$.
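With the deltas in hand, summing the four terms above becomes a short loop; a sketch assuming the `backward_deltas` output and the 0-indexed `HOUT` list from the forward-pass sketch (`HOUT[0]` is the initial state):

```python
import numpy as np

def grad_W(delta, HOUT, T):
    """Eq. (61): dL/dW = sum_t delta^(t) (HOUT^(t-1))^T; the t = 1 term
    contributes nothing when HOUT[0] is the all-zero initial state."""
    return sum(delta[t] @ HOUT[t - 1].T for t in range(1, T + 1))
```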

3. Trying to compute $\frac{\partial L}{\partial U}$

Analogous to the derivative with respect to $W$, the idea is:
$$\frac{\partial L}{\partial U} = \sum_{t=1}^{T=4} \frac{\partial L}{\partial HIN^{(t)}} \left(\frac{\partial HIN^{(t)}}{\partial U}\right)^{T} = \sum_{t=1}^{T=4} \frac{\partial L}{\partial HIN^{(t)}} (x^{(t)})^{T} \tag{66}$$
By (56):
$$\delta^{(t)} = \frac{\partial L}{\partial HIN^{(t)}} = \left(\frac{\partial HOUT^{(t)}}{\partial HIN^{(t)}}\right)^{T} \left(\frac{\partial L}{\partial HOUT^{(t)}}\right) \tag{56}$$
so (66) becomes:
$$\frac{\partial L}{\partial U} = \sum_{t=1}^{T=4} \frac{\partial L}{\partial HIN^{(t)}} \left(\frac{\partial HIN^{(t)}}{\partial U}\right)^{T} = \sum_{t=1}^{T=4} \delta^{(t)} (x^{(t)})^{T} \tag{67}$$
$$= \sum_{t=1}^{T=4} \left(diag(\tanh'(HIN^{(t)}))\left(V^{T}(\hat{o}^{(t)} - p^{(t)}) + W^{T}\delta^{(t+1)}\right)\right)(x^{(t)})^{T} \tag{68}$$
The computation parallels (62) through (65), so it is not written out again; a short sketch follows instead.
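A matching sketch for (67)/(68), assuming the same `delta` list and the 0-indexed input list `xs` from the forward-pass sketch (so `xs[t - 1]` is $x^{(t)}$):

```python
import numpy as np

def grad_U(delta, xs, T):
    """Eq. (67): dL/dU = sum_t delta^(t) (x^(t))^T."""
    return sum(delta[t] @ xs[t - 1].T for t in range(1, T + 1))
```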

4. Trying to compute $\frac{\partial L}{\partial b}$

None of the expressions above mentioned the bias $b$. Suppose the earlier defining equations are changed to the following (each $b$, like $U$, $V$, $W$, is a constant matrix that does not vary with $t$ and is updated once per backward pass of gradient descent):
$$HOUT^{(t)} = f(Ux^{(t)} + Ws^{(t-1)} + b\_hidden)$$
$$o^{(t)} = Softmax(a^{(t)}) = Softmax(V \cdot HOUT^{(t)} + b\_out)$$
that is:
$$HIN^{(t)} = Ux^{(t)} + Ws^{(t-1)} + b\_hidden$$
$$a^{(t)} = V \cdot HOUT^{(t)} + b\_out$$
These differ from Eq. (4.1) and the related formulas.
Then a few more derivatives are needed:
4.1
$$\frac{\partial L}{\partial b\_out} = \sum_{t=1}^{T=4} \frac{\partial L}{\partial a^{(t)}} \frac{\partial a^{(t)}}{\partial b\_out} = \sum_{t=1}^{T=4} \frac{\partial L}{\partial a^{(t)}} \cdot 1 \tag{69}$$
The leading factor is the same result as obtained in (38) earlier.
4.2
$$\frac{\partial L}{\partial b\_hidden} = \sum_{t=1}^{T=4} \frac{\partial L}{\partial HIN^{(t)}} \frac{\partial HIN^{(t)}}{\partial b\_hidden} = \sum_{t=1}^{T=4} \frac{\partial L}{\partial HIN^{(t)}} \cdot 1 \tag{70}$$
Similar to Eq. (61), this is:
$$\sum_{t=1}^{T=4} \frac{\partial L}{\partial HIN^{(t)}} = \sum_{t=1}^{T=4} \delta^{(t)}$$
which in the program is:

```python
# Backpropagate through non-linearity: d_h is dL/dHOUT^(t),
# and d_f is delta^(t) = dL/dHIN^(t) from Eq. (56).
d_f = tanh(hidden_states[t], derivative=True) * d_h
d_b_hidden += d_f
```

5. With all the gradients in hand, start the descent

All that remains are the update rules:
$$V \leftarrow V - \lambda \frac{\partial L}{\partial V} \tag{71}$$
$$W \leftarrow W - \lambda \frac{\partial L}{\partial W} \tag{72}$$
$$U \leftarrow U - \lambda \frac{\partial L}{\partial U} \tag{73}$$
$$b\_hidden \leftarrow b\_hidden - \lambda \frac{\partial L}{\partial b\_hidden} \tag{74}$$
$$b\_out \leftarrow b\_out - \lambda \frac{\partial L}{\partial b\_out} \tag{75}$$
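As a sketch, (71) through (75) applied in numpy to a dict of parameters and their accumulated gradients; the dict layout and the learning rate are assumptions:

```python
def sgd_step(params, grads, lam=0.01):
    """One vanilla gradient-descent update: theta <- theta - lam * dL/dtheta."""
    for name in params:  # e.g. "V", "W", "U", "b_hidden", "b_out"
        params[name] -= lam * grads[name]
```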

6. Going deeper

We constantly hear about accelerating neural networks with GPUs and the like, but a hand-written network such as this one obviously will not invoke the GPU, and as a machine-learning beginner I could not understand how the optimized libraries, built on the same principles, differ from what I wrote in a way that lets them exploit the GPU. From an FPGA perspective, acceleration means parallel computation, yet the hand-written networks one finds online (there is no shortage of pure-numpy fully-connected network write-ups) show no trace of such ideas.
Then, while searching for material, I found reference link 3【RNN前向传播、反向传播与并行计算(非常详细)】.

It gives a brief account of how the computation is parallelized. I cannot say I fully understand it yet, but in short I now know this much: the actual internals of mature neural-network libraries are far more complex than I imagined, i.e. the way they structure their networks differs from these basic principles. More study is needed before I can understand them.
