Loss Functions and Their Gradients
I. Common Loss Function Types
- Mean squared error (MSE)
  - Basic form
    - $loss=\sum\left[y-f_\theta(x)\right]^2$
    - $L2\text{-}norm=\left\|y-f_\theta(x)\right\|_2$
    - $loss=\left\|y-f_\theta(x)\right\|_2^2$, i.e. the squared L2-norm of the residual
  - Derivative of MSE (the chain rule brings in a minus sign from the inner term)
    $\frac{\nabla loss}{\nabla\theta}=-2\sum\left[y-f_\theta(x)\right]\cdot\frac{\nabla f_\theta(x)}{\nabla\theta}$
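The derivative formula above can be checked against autograd. A minimal sketch, assuming a linear model $f_\theta(x)=w\cdot x$ with made-up data (the values of `x`, `y`, `w` are hypothetical):

```python
import torch

# Hypothetical data and a linear model f(x) = w * x (w is the only parameter)
x = torch.tensor([1.0, 2.0, 3.0])
y = torch.tensor([2.0, 4.0, 6.5])
w = torch.tensor(1.5, requires_grad=True)

loss = ((y - w * x) ** 2).sum()  # loss = sum([y - f(x)]^2)
loss.backward()

# Closed form: d(loss)/dw = -2 * sum([y - w*x] * x), since df/dw = x
manual = -2 * ((y - w.detach() * x) * x).sum()
print(w.grad, manual)  # both are -17.
```

Autograd and the hand-derived formula agree, including the minus sign.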
- Cross Entropy Loss
  - Applicable to both binary and multi-class classification problems
  - Usually used together with softmax
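A small sketch of the softmax + cross-entropy pairing, with made-up logits: `F.cross_entropy` applies `log_softmax` internally, so it takes raw logits, and it equals the negative log of the softmax probability of the true class.

```python
import torch
import torch.nn.functional as F

# Hypothetical 3-class logits for one sample; the true class index is 2
logits = torch.tensor([[1.0, 2.0, 3.0]], requires_grad=True)
target = torch.tensor([2])

# F.cross_entropy applies log_softmax internally -- pass raw logits,
# do NOT apply softmax yourself first
loss = F.cross_entropy(logits, target)

# Equivalent two-step form: softmax probabilities, then negative log-likelihood
probs = F.softmax(logits, dim=1)
manual = -torch.log(probs[0, 2])
print(loss, manual)  # the two values match
```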
II. Automatic Differentiation with PyTorch
1. Using torch.autograd.grad
`torch.autograd.grad(mse, [w])`
- `mse`: the computed loss value to differentiate
- `[w]`: the list of variables to differentiate with respect to; remember they must have `requires_grad=True`
  - This can be set afterwards with `w.requires_grad_(True)`
  - Or by passing `requires_grad=True` when the tensor is created
- Related code
import torch
import torch.nn.functional as F

# Example: f(x) = x*w + b
# x starts at 1, w is a dim-1 tensor with value 2, and b is assumed to be 0
#----------------------------------------------------------------------------------
x = torch.ones(1)
w = torch.full([1], 2.0)                # float fill value, so gradients are supported
mse = F.mse_loss(torch.ones(1), x*w)    # MSE between the label torch.ones(1) and the prediction x*w
w.requires_grad_(True)                  # tell PyTorch that w needs gradient information
mse = F.mse_loss(torch.ones(1), x*w)    # rebuild the graph: the first mse was recorded before w required grad
#----------------------------------------------------------------------------------
print("mse = ", mse)
print("grad = ", torch.autograd.grad(mse, [w]))
# Output: mse = tensor(1.)
# Output: grad = (tensor([2.]),)
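`torch.autograd.grad` also accepts several variables at once, returning one gradient per entry. A sketch extending the example above with a hypothetical bias `b` (not trained in the original note):

```python
import torch
import torch.nn.functional as F

x = torch.ones(1)
w = torch.full([1], 2.0, requires_grad=True)
b = torch.zeros(1, requires_grad=True)  # hypothetical bias term

mse = F.mse_loss(torch.ones(1), x * w + b)
grads = torch.autograd.grad(mse, [w, b])  # one gradient tensor per listed variable
print(grads)  # (tensor([2.]), tensor([2.]))
```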
2. Using loss.backward
`F.mse_loss(pred, label)`: computes the MSE value
- `pred`: the predicted value
- `label`: the ground-truth value
`mse.backward()`: backpropagates from the loss to compute the gradients (note: it is the loss tensor, not `w`, that calls `backward()`)
- Related code
# Example: f(x) = x*w + b
# x starts at 1, w is a dim-1 tensor with value 2, and b is assumed to be 0
#----------------------------------------------------------------------------------
x = torch.ones(1)
w = torch.full([1], 2.0, requires_grad=True)  # float fill value; requires_grad set at creation
mse = F.mse_loss(torch.ones(1), x*w)
mse.backward()                                # gradients are stored in each leaf tensor's .grad
grad = w.grad
#----------------------------------------------------------------------------------
print("mse = ", mse)
print("grad = ", grad)
# mse = tensor(1., grad_fn=<MeanBackward0>)
# grad = tensor([2.])
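One caveat with `backward()`: `.grad` accumulates across calls, so it is usually cleared between iterations. A minimal sketch:

```python
import torch
import torch.nn.functional as F

x = torch.ones(1)
w = torch.full([1], 2.0, requires_grad=True)

# Calling backward() twice adds the gradients together in w.grad
for _ in range(2):
    mse = F.mse_loss(torch.ones(1), x * w)
    mse.backward()
accumulated = w.grad.clone()
print(accumulated)  # tensor([4.]) -- 2. from each pass

w.grad.zero_()  # reset before the next backward pass
```

In training loops this reset is normally done by `optimizer.zero_grad()`.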
III. The Softmax Activation Function
1. Soft version of max
- In deep learning, softmax is widely used in multi-class classification. It maps its inputs to real numbers in [0, 1] and normalizes them so that they sum to 1.
2. Definition of Softmax
- Function:
$S\left(y_i\right)=\frac{e^{y_i}}{\sum_j e^{y_j}}$
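A quick demonstration of the definition, with arbitrary example logits: the outputs lie in (0, 1), preserve the ordering of the inputs, and sum to 1.

```python
import torch
import torch.nn.functional as F

y = torch.tensor([2.0, 1.0, 0.1])  # arbitrary example logits
s = F.softmax(y, dim=0)
print(s)        # each entry is in (0, 1); the largest logit gets the largest probability
print(s.sum())  # tensor(1.)
```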
3. Derivative of Softmax
- Let
$p_i=\frac{e^{a_i}}{\sum_{k=1}^N e^{a_k}}$
- When `i == j`:
$\frac{\partial p_i}{\partial a_j}=\frac{\partial}{\partial a_j}\frac{e^{a_i}}{\sum_{k=1}^N e^{a_k}}=\frac{e^{a_i}\left(\sum_{k=1}^N e^{a_k}-e^{a_j}\right)}{\left(\sum_{k=1}^N e^{a_k}\right)^2}=\frac{e^{a_i}}{\sum_{k=1}^N e^{a_k}}\times\frac{\sum_{k=1}^N e^{a_k}-e^{a_j}}{\sum_{k=1}^N e^{a_k}}=p_i\left(1-p_j\right)$
- When `i != j`:
$\frac{\partial p_i}{\partial a_j}=\frac{\partial}{\partial a_j}\frac{e^{a_i}}{\sum_{k=1}^N e^{a_k}}=\frac{0-e^{a_j}e^{a_i}}{\left(\sum_{k=1}^N e^{a_k}\right)^2}=-\frac{e^{a_j}}{\sum_{k=1}^N e^{a_k}}\times\frac{e^{a_i}}{\sum_{k=1}^N e^{a_k}}=-p_j\cdot p_i$
- Combining the two cases:
$\frac{\partial p_i}{\partial a_j}=\begin{cases} p_i\left(1-p_j\right) & \text{if } i=j\\ -p_j\cdot p_i & \text{if } i\ne j \end{cases}$
- Using the Kronecker delta
$\delta_{ij}=\begin{cases} 1 & \text{if } i=j\\ 0 & \text{if } i\ne j \end{cases}$
$\frac{\partial p_i}{\partial a_j}=p_i\left(\delta_{ij}-p_j\right)$
4. Softmax and Its Derivative in PyTorch
a = torch.rand(3)
a.requires_grad_(True)
p = F.softmax(a, dim=0)  # the computation graph is built automatically
grad0 = torch.autograd.grad(p[0], [a], retain_graph=True)  # the differentiated output must be a scalar; retain_graph keeps the graph for the next call
grad1 = torch.autograd.grad(p[1], [a], retain_graph=True)
grad2 = torch.autograd.grad(p[2], [a], retain_graph=True)
print("a = ",a)
print("p = ",p)
print("grad0 = ",grad0)
print("grad1 = ",grad1)
print("grad2 = ",grad2)
#-------------------------------------------------------------------------------
# a = tensor([0.2260, 0.5295, 0.0540], requires_grad=True)
# p = tensor([0.3128, 0.4238, 0.2634], grad_fn=<SoftmaxBackward>)
# grad0 = (tensor([ 0.2150, -0.1326, -0.0824]),)
# grad1 = (tensor([-0.1326, 0.2442, -0.1116]),)
# grad2 = (tensor([-0.0824, -0.1116, 0.1940]),)
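The three gradients printed above can be cross-checked against the closed form $\frac{\partial p_i}{\partial a_j}=p_i(\delta_{ij}-p_j)$, which in matrix form is $\mathrm{diag}(p)-pp^\top$. A sketch of that check:

```python
import torch
import torch.nn.functional as F

a = torch.rand(3, requires_grad=True)
p = F.softmax(a, dim=0)

# Row i of the Jacobian is the gradient of p[i] with respect to a
jacobian = torch.stack([
    torch.autograd.grad(p[i], [a], retain_graph=True)[0]
    for i in range(3)
])

# Closed form: dp_i/da_j = p_i * (delta_ij - p_j)  <=>  diag(p) - p p^T
analytic = torch.diag(p) - torch.outer(p, p)
print(torch.allclose(jacobian, analytic.detach(), atol=1e-6))  # True
```

Note the diagonal entries are positive and each off-diagonal entry is negative, matching the two cases of the derivation.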