Loss and Its Gradients
. Mean Squared Error
. Cross Entropy Loss (see the sketch after this list)
    . binary
    . multi-class
    . + softmax
    . details are left to the Logistic Regression part
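Both losses are single calls in torch.nn.functional. A minimal sketch with made-up tensor values: F.mse_loss averages the squared differences, while F.cross_entropy takes raw logits and applies log-softmax internally.
import torch
from torch.nn import functional as F
# Made-up values for illustration
pred = torch.tensor([2.0, 0.5])
target = torch.tensor([1.0, 1.0])
# MSE is the mean of squared differences
print(((pred - target) ** 2).mean())  # tensor(0.6250)
print(F.mse_loss(pred, target))       # tensor(0.6250)
# F.cross_entropy expects raw logits; it applies log-softmax internally
logits = torch.tensor([[2.0, 0.5, 0.1]])  # batch of 1, three classes
label = torch.tensor([0])                 # true class index
print(F.cross_entropy(logits, label))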
autograd.grad:
import torch
from torch.nn import functional as F
# pred = w * x  (no bias term yet)
x = torch.ones(1)
w = torch.full([1], 2.)  # float fill value, so w can carry gradients
mse = F.mse_loss(torch.ones(1), x * w)
# Enable gradient tracking on w, in place
w.requires_grad_()
# The graph above was built before w required gradients, so rebuild it
mse = F.mse_loss(torch.ones(1), x * w)
c = torch.autograd.grad(mse, [w])
print("c:\t", c)
c: (tensor([2.]),)
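The same call handles several parameters at once. A minimal sketch extending the example to the full pred = w*x + b from the comment above; the zero-initialized bias b is an assumption added for illustration:
import torch
from torch.nn import functional as F
x = torch.ones(1)
w = torch.full([1], 2., requires_grad=True)
b = torch.zeros(1, requires_grad=True)  # assumed zero-initialized bias
pred = x * w + b
mse = F.mse_loss(pred, torch.ones(1))
dw, db = torch.autograd.grad(mse, [w, b])
print(dw, db)  # d(mse)/dw = 2(wx+b-1)x = 2,  d(mse)/db = 2(wx+b-1) = 2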
Loss.backward:
import torch
from torch.nn import functional as F
# pred = w * x  (no bias term yet)
x = torch.ones(1)
w = torch.full([1], 2.)  # float fill value, so w can carry gradients
mse = F.mse_loss(torch.ones(1), x * w)
# Enable gradient tracking on w, in place
w.requires_grad_()
# Rebuild the dynamic graph so it records w as a leaf that needs gradients
mse = F.mse_loss(torch.ones(1), x * w)
# backward() computes the gradient of every node via backpropagation
mse.backward()
print("w.grad:\t", w.grad)  # only one trainable variable here, w
w.grad: tensor([2.])
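One caveat: backward() accumulates into .grad instead of overwriting it, so gradients should be zeroed between updates. A minimal sketch of that behavior:
import torch
from torch.nn import functional as F
x = torch.ones(1)
w = torch.full([1], 2., requires_grad=True)
for _ in range(2):
    mse = F.mse_loss(x * w, torch.ones(1))
    mse.backward()
print(w.grad)   # tensor([4.]) -- 2. from each pass, summed
w.grad.zero_()  # clear the accumulated gradient before the next step
print(w.grad)   # tensor([0.])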
Gradient API
. torch.autograd.grad(loss, [w1, w2, ...])
    returns (w1 grad, w2 grad, ...) as a tuple
. loss.backward()
    writes the gradients into w1.grad, w2.grad, ...
(both styles are compared in the sketch below)
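A small sketch contrasting the two styles on a toy loss (the parameter values are arbitrary); retain_graph=True lets the same graph serve both calls:
import torch
w1 = torch.tensor([1.], requires_grad=True)
w2 = torch.tensor([3.], requires_grad=True)
loss = (w1 * w2).sum()
# Style 1: gradients come back as a tuple; .grad stays untouched
g1, g2 = torch.autograd.grad(loss, [w1, w2], retain_graph=True)
print(g1, g2)            # tensor([3.]) tensor([1.])
# Style 2: gradients are written into each tensor's .grad attribute
loss.backward()
print(w1.grad, w2.grad)  # tensor([3.]) tensor([1.])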
F.softmax:
import torch
from torch.nn import functional as F
a = torch.rand(3)
print("a:\t", a)
# Set the tensor's requires_grad attribute in place
b = a.requires_grad_()
print("b:\t", b)
p = F.softmax(b, dim=0)
# retain_graph=True keeps the graph alive for the second grad call below
c = torch.autograd.grad(p[1], [a], retain_graph=True)
print("c:\t", c)
d = torch.autograd.grad(p[2], [a])
print("d:\t", d)
a: tensor([0.3711, 0.5151, 0.8666])
b: tensor([0.3711, 0.5151, 0.8666], requires_grad=True)
c: (tensor([-0.0801, 0.2117, -0.1315]),)
d: (tensor([-0.1139, -0.1315, 0.2454]),)
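The signs in c and d match the softmax Jacobian, dp_i/da_j = p_i * (δ_ij - p_j): positive on the diagonal (i = j), negative off it. A minimal sketch checking the autograd result against this formula, with i = 1 chosen arbitrarily:
import torch
from torch.nn import functional as F
a = torch.rand(3, requires_grad=True)
p = F.softmax(a, dim=0)
# Autograd gradient of p[1] with respect to a
(g,) = torch.autograd.grad(p[1], [a])
# Analytic Jacobian row: dp_i/da_j = p_i * (delta_ij - p_j)
i = 1
delta = torch.zeros(3)
delta[i] = 1.
analytic = (p[i] * (delta - p)).detach()
print(torch.allclose(g, analytic))  # True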