Neural Networks and Deep Learning 2

This post takes a close look at how the backpropagation algorithm works, explaining the mathematics behind its four fundamental equations and proving them in two different presentations. For readers comfortable with matrix multiplication, the presentation based on conventional matrix multiplication may be the more intuitive of the two. The post also discusses how backpropagation applies to networks containing linear neurons and to mini-batches of data, emphasizing how matrix operations improve computational efficiency.

In this chapter my matrix multiplications follow the same conventions as the book, except that all of my vectors are column vectors.

Ch02 How the backpropagation algorithm works

Online book: http://neuralnetworksanddeeplearning.com/chap2.html

Contents

The four fundamental equations behind backpropagation
  • Alternate presentation of the equations of backpropagation: I’ve stated the equations of backpropagation (notably (BP1) and (BP2)) using the Hadamard product. This presentation may be disconcerting if you’re unused to the Hadamard product. There’s an alternative approach, based on conventional matrix multiplication, which some readers may find enlightening. (1) Show that (BP1) may be rewritten as
    $$\delta^L = \Sigma'(z^L)\,\nabla_a C,$$
    where $\Sigma'(z^L)$ is a square matrix whose diagonal entries are the values $\sigma'(z_j^L)$, and whose off-diagonal entries are zero. Note that this matrix acts on $\nabla_a C$ by conventional matrix multiplication. (2) Show that (BP2) may be rewritten as
    $$\delta^l = \Sigma'(z^l)(w^{l+1})^T\delta^{l+1}.$$
    (3) By combining observations (1) and (2) show that
    $$\delta^l = \Sigma'(z^l)(w^{l+1})^T \cdots \Sigma'(z^{L-1})(w^{L})^T\,\Sigma'(z^L)\,\nabla_a C.$$
    For readers comfortable with matrix multiplication this equation may be easier to understand than (BP1) and (BP2). The reason I’ve focused on (BP1) and (BP2) is because that approach turns out to be faster to implement numerically. (A small numerical check of both presentations follows the proofs below.)
    Proof of (BP1): Start from the Hadamard form $\delta^L = \nabla_a C \odot \sigma'(z^L)$ and write it out componentwise, with $k$ neurons in layer $L$:
    $$\delta^L = \left(\frac{\partial C}{\partial a_1^L}\sigma'(z_1^L),\ \frac{\partial C}{\partial a_2^L}\sigma'(z_2^L),\ \dots,\ \frac{\partial C}{\partial a_k^L}\sigma'(z_k^L)\right)^T.$$
    Multiplying $\nabla_a C$ by the diagonal matrix $\Sigma'(z^L)$ scales its $j$-th entry by $\sigma'(z_j^L)$, which produces exactly the same vector, so $\delta^L = \Sigma'(z^L)\nabla_a C$.
    Proof of (BP2):
    $$\delta^l = \left(\left(w^{l+1}\right)^T\delta^{l+1}\right)\odot \sigma'(z^l)$$
    $$= \begin{bmatrix}
    w_{11}^{l+1} & w_{12}^{l+1} & \cdots & w_{1k}^{l+1} \\
    w_{21}^{l+1} & w_{22}^{l+1} & \cdots & w_{2k}^{l+1} \\
    \vdots & \vdots & \ddots & \vdots \\
    w_{j1}^{l+1} & w_{j2}^{l+1} & \cdots & w_{jk}^{l+1}
    \end{bmatrix}^T
    \cdot \begin{bmatrix}
    \delta_1^{l+1} \\ \delta_2^{l+1} \\ \vdots \\ \delta_j^{l+1}
    \end{bmatrix} \odot \begin{bmatrix}
    \sigma'(z_1^l) \\ \sigma'(z_2^l) \\ \vdots \\ \sigma'(z_k^l)
    \end{bmatrix}$$
    $$= \begin{bmatrix}
    \sum_j \left(w_{j1}^{l+1}\delta_j^{l+1}\right) \cdot \sigma'(z_1^l) \\
    \sum_j \left(w_{j2}^{l+1}\delta_j^{l+1}\right) \cdot \sigma'(z_2^l) \\
    \vdots \\
    \sum_j \left(w_{jk}^{l+1}\delta_j^{l+1}\right) \cdot \sigma'(z_k^l)
    \end{bmatrix}$$
    On the other hand,
    $$\Sigma'(z^l)(w^{l+1})^T\delta^{l+1}$$
    $$= \begin{bmatrix}
    \sigma'(z_1^l) & 0 & \cdots & 0 \\
    0 & \sigma'(z_2^l) & \cdots & 0 \\
    \vdots & \vdots & \ddots & \vdots \\
    0 & 0 & \cdots & \sigma'(z_k^l)
    \end{bmatrix} \cdot \begin{bmatrix}
    w_{11}^{l+1} & w_{12}^{l+1} & \cdots & w_{1k}^{l+1} \\
    w_{21}^{l+1} & w_{22}^{l+1} & \cdots & w_{2k}^{l+1} \\
    \vdots & \vdots & \ddots & \vdots \\
    w_{j1}^{l+1} & w_{j2}^{l+1} & \cdots & w_{jk}^{l+1}
    \end{bmatrix}^T \cdot \begin{bmatrix}
    \delta_1^{l+1} \\ \delta_2^{l+1} \\ \vdots \\ \delta_j^{l+1}
    \end{bmatrix}$$
    $$= \begin{bmatrix}
    \sigma'(z_1^l) \cdot \sum_j \left(w_{j1}^{l+1} \delta_j^{l+1}\right) \\
    \sigma'(z_2^l) \cdot \sum_j \left(w_{j2}^{l+1} \delta_j^{l+1}\right) \\
    \vdots \\
    \sigma'(z_k^l) \cdot \sum_j \left(w_{jk}^{l+1} \delta_j^{l+1}\right)
    \end{bmatrix} = \delta^l.$$
    Here layer $l$ has $k$ neurons and layer $l+1$ has $j$ neurons, and $w_{jk}^{l+1}$ denotes the weight connecting the $k$-th neuron in layer $l$ to the $j$-th neuron in layer $l+1$.
    Proof of (3):
    $$\delta^l = \Sigma'(z^l)(w^{l+1})^T \cdots \Sigma'(z^{L-1})(w^{L})^T\,\Sigma'(z^L)\,\nabla_a C$$
    follows by repeatedly substituting the iteration $\delta^{l+1} = \Sigma'(z^{l+1})(w^{l+2})^T\delta^{l+2}$ into (2), terminating with $\delta^L = \Sigma'(z^L)\nabla_a C$ from (1).
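To make the two presentations concrete, here is a minimal NumPy sketch, my own check rather than anything from the book. It assumes sigmoid neurons and the quadratic cost $C=\frac{1}{2}\|a^L-y\|^2$ purely for illustration (the layer sizes are arbitrary), and verifies that the Hadamard form of (BP1)/(BP2) and the diagonal-matrix form compute the same errors $\delta^l$:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    return sigmoid(z) * (1.0 - sigmoid(z))

rng = np.random.default_rng(0)

# A tiny 3-layer network; all vectors are column vectors, matching the
# convention stated at the top of this post.
sizes = [4, 5, 3]
weights = [rng.standard_normal((m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [rng.standard_normal((m, 1)) for m in sizes[1:]]

# Forward pass, recording the weighted inputs z^l.
activations = [rng.standard_normal((sizes[0], 1))]
zs = []
for w, b in zip(weights, biases):
    zs.append(w @ activations[-1] + b)
    activations.append(sigmoid(zs[-1]))

# Quadratic cost C = 0.5 * ||a^L - y||^2, so grad_a C = a^L - y.
y = rng.standard_normal((sizes[-1], 1))
grad_a_C = activations[-1] - y

# Hadamard form of (BP1) and (BP2).
delta = grad_a_C * sigmoid_prime(zs[-1])
deltas_hadamard = [delta]
for l in range(2, len(sizes)):
    delta = (weights[-l + 1].T @ delta) * sigmoid_prime(zs[-l])
    deltas_hadamard.insert(0, delta)

# Matrix form: Sigma'(z^l) is the diagonal matrix with entries sigma'(z_j^l).
def Sigma_prime(z):
    return np.diagflat(sigmoid_prime(z))

delta = Sigma_prime(zs[-1]) @ grad_a_C                       # (BP1) rewritten
deltas_matrix = [delta]
for l in range(2, len(sizes)):
    delta = Sigma_prime(zs[-l]) @ weights[-l + 1].T @ delta  # (BP2) rewritten
    deltas_matrix.insert(0, delta)

for dh, dm in zip(deltas_hadamard, deltas_matrix):
    assert np.allclose(dh, dm)
print("Hadamard and matrix presentations agree.")
```

The Hadamard form is what you would implement in practice: elementwise multiplication by $\sigma'(z^l)$ costs $O(k)$, whereas materializing the $k \times k$ matrix $\Sigma'(z^l)$ and multiplying by it costs $O(k^2)$, which is the "faster to implement numerically" point above.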
Proof of the four fundamental equations (optional)
  • Prove Equations (BP3) and (BP4).
    Proof of (BP3): by the chain rule,
    $$\frac{\partial C}{\partial b_j^l} = \frac{\partial C}{\partial a_j^l}\frac{\partial a_j^l}{\partial z_j^l}\frac{\partial z_j^l}{\partial b_j^l}.$$
    Since $\frac{\partial C}{\partial a_j^l}\frac{\partial a_j^l}{\partial z_j^l}=\frac{\partial C}{\partial a_j^l}\sigma'(z_j^l) = \delta_j^l$, and $z_j^l = \sum_k w_{jk}^l a_k^{l-1} + b_j^l$ gives $\frac{\partial z_j^l}{\partial b_j^l}=1$, we get $\frac{\partial C}{\partial b_j^l} = \delta_j^l$, which is (BP3). Proof of (BP4): the same chain rule with $\frac{\partial z_j^l}{\partial w_{jk}^l} = a_k^{l-1}$ gives $\frac{\partial C}{\partial w_{jk}^l} = a_k^{l-1}\delta_j^l$.
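As a sanity check on (BP3) and (BP4), the sketch below compares the backpropagated gradients against central finite differences. It is again an illustration under assumed choices: sigmoid neurons, the quadratic cost, and made-up names and shapes for `W1`, `b1`, and so on, not code from the book:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    return sigmoid(z) * (1.0 - sigmoid(z))

rng = np.random.default_rng(1)

# Tiny two-layer network with column vectors throughout.
W1, b1 = rng.standard_normal((5, 4)), rng.standard_normal((5, 1))
W2, b2 = rng.standard_normal((3, 5)), rng.standard_normal((3, 1))
x, y = rng.standard_normal((4, 1)), rng.standard_normal((3, 1))

def cost(W1_, b1_):
    a1 = sigmoid(W1_ @ x + b1_)
    a2 = sigmoid(W2 @ a1 + b2)
    return 0.5 * np.sum((a2 - y) ** 2)

# Backpropagation: delta^2 from (BP1), delta^1 from (BP2).
z1 = W1 @ x + b1
a1 = sigmoid(z1)
z2 = W2 @ a1 + b2
a2 = sigmoid(z2)
delta2 = (a2 - y) * sigmoid_prime(z2)
delta1 = (W2.T @ delta2) * sigmoid_prime(z1)

eps = 1e-6

# (BP3): dC/db_j^l = delta_j^l, checked for every bias in layer 1.
for j in range(b1.shape[0]):
    e = np.zeros_like(b1)
    e[j] = eps
    numeric = (cost(W1, b1 + e) - cost(W1, b1 - e)) / (2 * eps)
    assert np.isclose(numeric, delta1[j, 0])

# (BP4): dC/dw_jk^l = a_k^{l-1} * delta_j^l, i.e. the outer product below.
grad_W1 = delta1 @ x.T          # entry (j, k) equals delta1_j * x_k
e = np.zeros_like(W1)
e[2, 1] = eps
numeric = (cost(W1 + e, b1) - cost(W1 - e, b1)) / (2 * eps)
assert np.isclose(numeric, grad_W1[2, 1])

print("(BP3) and (BP4) agree with finite differences.")
```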