Detailed explanation of PyTorch gradient backpropagation:
import torch
w = torch.tensor([1.], requires_grad=True)
x = torch.tensor([2.], requires_grad=True)
a = torch.add(w, x)    # a = w + x = 3
b = torch.add(w, 1)    # b = w + 1 = 2
y = torch.mul(a, b)    # y = a * b = 6
print(w)
print(x)
print(a)
print(b)
print(y)
y.backward(retain_graph=True)  # keep the graph alive so backward can run again
print(w.grad)
y.backward()  # second pass: the new gradient is added to w.grad, not overwritten
print(w.grad)
The result: a, b, and y evaluate to tensor([3.]), tensor([2.]), and tensor([6.]) respectively. By the product rule, ∂y/∂w = b·∂a/∂w + a·∂b/∂w = 2 + 3 = 5, so the first print(w.grad) shows tensor([5.]). PyTorch accumulates gradients into .grad rather than replacing them, so the second backward adds another 5 and the final print shows tensor([10.]). Without retain_graph=True on the first call, the second backward would fail, because the graph is freed after the first pass.
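To make the accumulation and graph-freeing behavior concrete, here is a minimal sketch (same tensors as above, written with operator syntax): zeroing w.grad in place between calls keeps each pass independent, and calling backward once the graph has been freed raises a RuntimeError.

import torch
w = torch.tensor([1.], requires_grad=True)
x = torch.tensor([2.], requires_grad=True)
y = (w + x) * (w + 1)
y.backward(retain_graph=True)
print(w.grad)      # tensor([5.])
w.grad.zero_()     # clear the accumulated gradient in place
y.backward()       # legal, because the graph was retained above
print(w.grad)      # tensor([5.]) again instead of tensor([10.])
try:
    y.backward()   # the previous call freed the graph
except RuntimeError as err:
    print("backward on a freed graph fails:", err)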
import torch
w = torch.tensor([1.], requires_grad=True)
x = torch.tensor([2.], requires_grad=True)
a = torch.add(w, x)    # a = 3
b = torch.add(w, 1)    # b = 2
y0 = torch.mul(a, b)   # y0 = a * b = 6,  ∂y0/∂w = 5
y1 = torch.add(a, b)   # y1 = a + b = 5,  ∂y1/∂w = 2
loss = torch.cat([y0, y1], dim=0)  # non-scalar output, so backward needs weights
grad_tensors = torch.tensor([1., 1.])
loss.backward(gradient=grad_tensors)  # gradient is forwarded to the grad_tensors argument of torch.autograd.backward()
print(w.grad)
Result: w.grad is the weighted sum of the per-output gradients, 1·∂y0/∂w + 1·∂y1/∂w = 5 + 2 = 7, so the print shows tensor([7.]).
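The entries of gradient act as per-output weights in the vector-Jacobian product. A small sketch with unequal weights (values chosen purely for illustration) makes that visible; torch.autograd.backward(loss, grad_tensors=...) is the equivalent explicit call.

import torch
w = torch.tensor([1.], requires_grad=True)
x = torch.tensor([2.], requires_grad=True)
a = w + x                                # a = 3
b = w + 1                                # b = 2
loss = torch.cat([a * b, a + b], dim=0)  # [y0, y1] = [6, 5]
loss.backward(gradient=torch.tensor([1., 2.]))  # weight y0 by 1, y1 by 2
print(w.grad)  # tensor([9.]) = 1*5 + 2*2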
Reposted from: https://blog.csdn.net/weixin_42154841/article/details/108503148