Method 1:
from itertools import chain
import torch.optim as optim

# backbone_net, linear_rot_net and linear_classify_net are the models
# defined in the question; chain their parameters so a single Adam
# optimizer updates all three networks.
optimizer = optim.Adam(params=chain(backbone_net.parameters(),
                                    linear_rot_net.parameters(),
                                    linear_classify_net.parameters()))
Author: Sail
Link: https://www.zhihu.com/question/322192259/answer/955313940
Source: Zhihu
Copyright belongs to the author. For commercial reproduction, please contact the author for authorization; for non-commercial reproduction, please credit the source.
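The chained-parameters idea above can be sketched as a minimal self-contained example. The module names `net_a` and `net_b`, the shapes, and the training loop are placeholder assumptions, not from the original answer:

```python
# Minimal sketch of method 1: one Adam optimizer driving two
# independently defined modules via itertools.chain.
# net_a / net_b and all shapes are illustrative placeholders.
from itertools import chain
import torch
import torch.nn as nn

net_a = nn.Linear(4, 4)
net_b = nn.Linear(4, 2)

# chain() yields the parameters of both modules in sequence, so the
# optimizer sees them all as one parameter list.
optimizer = torch.optim.Adam(
    chain(net_a.parameters(), net_b.parameters()), lr=1e-3)

x = torch.rand(8, 4)
target = torch.rand(8, 2)
loss_fn = nn.MSELoss()

for step in range(5):
    optimizer.zero_grad()                     # clears grads of both nets
    loss = loss_fn(net_b(net_a(x)), target)
    loss.backward()
    optimizer.step()                          # updates both nets at once
```

Because the optimizer owns every parameter, a single `optimizer.zero_grad()` / `optimizer.step()` pair is enough; no per-network bookkeeping is needed.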
Method 2:
Author: xaipxan
Link: https://www.zhihu.com/question/322192259/answer/749589285
Source: Zhihu
import torch
import torch.nn as nn

x = torch.rand(2, 3)
net1 = nn.Linear(3, 3)
net2 = nn.Linear(3, 3)

# Outputs before training, kept for comparison afterwards.
a = net1(x)
b = net2(a)

tgt = torch.rand(2, 3)
loss_fun = torch.nn.MSELoss()

# One separate optimizer per sub-network.
opt1 = torch.optim.Adam(net1.parameters(), 0.002)
opt2 = torch.optim.Adam(net2.parameters(), 0.002)

for i in range(100):
    tmp = net1(x)
    output = net2(tmp)
    loss = loss_fun(output, tgt)
    net1.zero_grad()
    net2.zero_grad()
    loss.backward()   # one backward pass fills gradients in both nets
    opt1.step()       # each optimizer updates its own network
    opt2.step()
    print('EPOCH:{},loss={}'.format(i, loss))

# Outputs after training: both networks have changed.
aa = net1(x)
bb = net2(aa)
print(a)
print(aa)
print(b)
print(bb)
I have tested both methods in experiments; both work.
Reference: https://www.zhihu.com/question/322192259/answer/669073895