Remember to zero the gradients after every iteration:
optimizer.zero_grad()
1. Backpropagation computation
z.backward(retain_graph=True)  # if the gradients are not cleared, b.grad accumulates across repeated backward calls
w.grad  # gradient of z with respect to w
b.grad  # gradient of z with respect to b
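A minimal, self-contained sketch of that accumulation behaviour (the expression used to build z and the value of x are assumptions made up for illustration):

import torch

w = torch.randn(1, requires_grad=True)
b = torch.randn(1, requires_grad=True)
x = torch.tensor([2.0])

z = (w * x + b).sum()         # tiny computation graph: dz/dw = x, dz/db = 1
z.backward(retain_graph=True)
print(b.grad)                 # tensor([1.])
z.backward(retain_graph=True)
print(b.grad)                 # tensor([2.]) -- the gradient has accumulated
b.grad.zero_()                # clear manually; an optimizer does this via zero_grad()
w.grad.zero_()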
2. Training the model
for epoch in range(epochs):
    epoch += 1
    # note: convert the numpy arrays to tensors
    inputs = torch.from_numpy(x_train)
    labels = torch.from_numpy(y_train)
    # gradients must be cleared at every iteration
    optimizer.zero_grad()
    # forward pass
    outputs = model(inputs)
    # compute the loss and backpropagate
    loss = criterion(outputs, labels)
    loss.backward()
    # update the weight parameters
    optimizer.step()
    if epoch % 50 == 0:
        print('epoch {}, loss {}'.format(epoch, loss.item()))
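For the loop above to run, the data, loss, optimizer and epoch count have to exist first. A minimal setup sketch (the toy data, learning rate and epoch count are assumptions chosen for illustration; model is the LinearRegressionModel built in the next section):

import numpy as np
import torch
import torch.nn as nn

# toy data for y = 2x + 1, shaped (N, 1) and cast to float32 to match nn.Linear
x_train = np.arange(11, dtype=np.float32).reshape(-1, 1)
y_train = (2 * x_train + 1).astype(np.float32)

epochs = 1000
criterion = nn.MSELoss()                                  # mean squared error loss
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # plain SGD on the model's parameters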
3. Linear regression model (an example of writing the forward pass yourself)
import torch.nn as nn

class LinearRegressionModel(nn.Module):
    def __init__(self, input_dim, output_dim):
        super(LinearRegressionModel, self).__init__()
        self.linear = nn.Linear(input_dim, output_dim)  # a single fully connected layer

    def forward(self, x):
        out = self.linear(x)
        return out

input_dim = 1
output_dim = 1
model = LinearRegressionModel(input_dim, output_dim)
No matter how complex a model you want to design, as long as the forward pass is written correctly, PyTorch provides the built-in machinery to handle backpropagation for you.
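As a quick check of that claim, reusing the model and imports from the snippet above (the input x and target y here are made-up values):

x = torch.tensor([[1.0], [2.0], [3.0]])
y = torch.tensor([[3.0], [5.0], [7.0]])

out = model(x)                    # the forward pass we wrote ourselves
loss = nn.MSELoss()(out, y)
loss.backward()                   # the backward pass comes from autograd
print(model.linear.weight.grad)   # gradients were filled in automatically
print(model.linear.bias.grad)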