Softmax backpropagation: a Python implementation

Concepts

Derivatives for backpropagation

Softmax is computed from the inputs of several neurons, so when taking derivatives for backpropagation we have to differentiate its output with respect to each of those inputs.
Two cases need to be considered:

  • the variable being differentiated appears in the numerator
  • the variable being differentiated appears only in the denominator

Writing the softmax output for N = 3 neurons (taking i = 1 as the example):

$$p_i = \mathrm{softmax}(z_i) = \frac{e^{z_i}}{\sum_{j=1}^{N} e^{z_j}} = \frac{e^{z_1}}{e^{z_1}+e^{z_2}+e^{z_3}}$$

When the variable being differentiated appears in the numerator:

$$\begin{aligned}
\frac{\partial p_i}{\partial z_i} &= \frac{\partial}{\partial z_1}\left(\frac{e^{z_1}}{e^{z_1}+e^{z_2}+e^{z_3}}\right) \\
&= \frac{e^{z_1}(e^{z_1}+e^{z_2}+e^{z_3})-e^{z_1}\cdot e^{z_1}}{(e^{z_1}+e^{z_2}+e^{z_3})^2} \\
&= p_i - p_i^2 \\
&= p_i(1-p_i)
\end{aligned}$$
When the variable being differentiated appears only in the denominator ($z_2$ and $z_3$ are symmetric here, so both give the same form):
$$\begin{aligned}
\frac{\partial p_i}{\partial z_j} &= \frac{\partial}{\partial z_2}\left(\frac{e^{z_1}}{e^{z_1}+e^{z_2}+e^{z_3}}\right) \\
&= \frac{0-e^{z_1}\cdot e^{z_2}}{(e^{z_1}+e^{z_2}+e^{z_3})^2} \\
&= -p_i \cdot p_j
\end{aligned}$$
![[attachments/softmax_求导结果.png]]
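The two cases combine into a single expression for the Jacobian, with $\delta_{ij} = 1$ if $i = j$ and $0$ otherwise:

$$\frac{\partial p_i}{\partial z_j} = p_i(\delta_{ij} - p_j), \qquad J = \operatorname{diag}(p) - p\,p^{\top}$$

This is the matrix that the my_softmax_grad function in the code below assembles row by row.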

Code

import torch
import math

def my_softmax(features):
    # Plain softmax: exponentiate each input and normalize by the sum.
    # (No max-subtraction for numerical stability, so this is only safe
    # for small inputs such as the random features used below.)
    _sum = 0
    for i in features:
        _sum += math.e ** i
    return torch.Tensor([ math.e ** i / _sum for i in features ])

def my_softmax_grad(outputs):
    # Build the softmax Jacobian row by row from the outputs p:
    #   dp_i/dz_i = p_i * (1 - p_i)   (diagonal)
    #   dp_i/dz_j = -p_i * p_j        (off-diagonal)
    n = len(outputs)
    grad = []
    for i in range(n):
        temp = []
        for j in range(n):
            if i == j:
                temp.append(outputs[i] * (1 - outputs[i]))
            else:
                temp.append(-outputs[j] * outputs[i])
        grad.append(torch.Tensor(temp))
    return grad

if __name__ == '__main__':

    features = torch.randn(10)
    features.requires_grad_()

    # Forward pass: compare our softmax with torch's implementation.
    torch_softmax = torch.nn.functional.softmax
    p1 = torch_softmax(features, dim=0)
    p2 = my_softmax(features)
    print(torch.allclose(p1, p2))

    # Backward pass: compare each row of the analytic Jacobian with the
    # gradient autograd computes for the corresponding output p1[i].
    n = len(p1)
    p2_grad = my_softmax_grad(p2)
    for i in range(n):
        p1_grad = torch.autograd.grad(p1[i], features, retain_graph=True)
        print(torch.allclose(p1_grad[0], p2_grad[i]))
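
As a minimal sketch, the full Jacobian can also be built in one shot with tensor operations and checked against torch.autograd.functional.jacobian; vectorized_softmax_grad here is just an illustrative helper name, not part of the code above:

import torch
import torch.nn.functional as F

def vectorized_softmax_grad(p):
    # J = diag(p) - p p^T, i.e. J[i, j] = p[i] * (delta_ij - p[j])
    return torch.diag(p) - torch.outer(p, p)

z = torch.randn(10, requires_grad=True)
p = F.softmax(z, dim=0)
J_manual = vectorized_softmax_grad(p.detach())
# Let autograd compute the same Jacobian for comparison.
J_autograd = torch.autograd.functional.jacobian(lambda x: F.softmax(x, dim=0), z)
print(torch.allclose(J_manual, J_autograd, atol=1e-6))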
        

Related material

https://zhuanlan.zhihu.com/p/105722023

https://www.cnblogs.com/gzyatcnblogs/articles/15937870.html
