Link: https://www.zhihu.com/question/398425328/answer/1454276131
The asker already specified Torch, so there is no difference: backward() only accumulates gradients. Even with the same loss, calling backward() twice simply accumulates twice the gradient. Weights are only updated from the accumulated gradients once you call optimizer.step(), and the optimizer does not care how those gradients were accumulated. It is not really what the question asked, but kudos to the earlier answerers for enthusiastically digging into multi-task optimization... Run the program below and the printed gradients are identical:

import torch
import torch.optim as optim
import torch.nn as nn
import torch.nn.functional as F
input = torch.rand((1, 64)).float()
ground_truth1 = torch.tensor([[0]]).float()
ground_truth2 = torch.tensor([[0]]).float()
class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.layer1 = nn.Linear(64, 8, bias=False)
        self.layer2 = nn.Linear(8, 1, bias=False)

    def forward(self, x):
        out1 = self.layer1(x)
        out2 = self.layer2(out1)
        # Two "task" outputs that share the same computation graph
        return torch.sigmoid(out2), torch.sigmoid(out2)
model = MyModel()
model.train()
optimizer = optim.SGD(model.parameters(), 0.01)
output1, output2 = model(input)
loss1 = F.mse_loss(output1, ground_truth1)
loss2 = F.mse_loss(output2, ground_truth2)
# Method 1: backward each loss separately; gradients accumulate in .grad
optimizer.zero_grad()
loss1.backward(retain_graph=True)
loss2.backward(retain_graph=True)
print(model.layer1.weight.grad[0, 0])
# Method 2: sum the losses and backward once; the printed gradient is the same
optimizer.zero_grad()
loss = loss1 + loss2
loss.backward()
print(model.layer1.weight.grad[0, 0])
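
To make the first claim concrete (calling backward() twice on the same loss just doubles what ends up in .grad, and optimizer.step() only consumes whatever has been accumulated there), here is a minimal sketch; it is not part of the original answer, and the tiny layer and variable names are only for illustration:

import torch
import torch.nn as nn
import torch.nn.functional as F

layer = nn.Linear(4, 1, bias=False)
x = torch.rand((1, 4))
target = torch.zeros((1, 1))

loss = F.mse_loss(layer(x), target)
loss.backward(retain_graph=True)   # first backward: .grad now holds d(loss)/d(weight)
grad_once = layer.weight.grad.clone()

loss.backward()                    # second backward on the same loss: .grad is added to, not overwritten
grad_twice = layer.weight.grad.clone()

print(torch.allclose(grad_twice, 2 * grad_once))  # True: the accumulated gradient has simply doubled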