Code template
import torch

# Step 1: Prepare dataset
# x and y are matrices of shape 3 x 1: three samples, each with one feature
x_data = torch.tensor([[1.0], [2.0], [3.0]])
y_data = torch.tensor([[2.0], [4.0], [6.0]])

# Step 2: Design model using class (inherit from nn.Module)
class LinearModel(torch.nn.Module):
    def __init__(self):
        super(LinearModel, self).__init__()
        # (1, 1) are the feature dimensions of the input x and the output y;
        # both are 1-dimensional in this dataset.
        # The layer's learnable parameters are w and b, accessible as
        # linear.weight and linear.bias.
        self.linear = torch.nn.Linear(1, 1)

    def forward(self, x):
        y_pred = self.linear(x)
        return y_pred

model = LinearModel()

# Step 3: Construct loss and optimizer (using the PyTorch API)
# criterion = torch.nn.MSELoss(size_average=False)  # size_average is deprecated; use reduction instead
criterion = torch.nn.MSELoss(reduction='sum')
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # model.parameters() hands all learnable parameters to the optimizer

# Step 4: Training cycle (forward, backward, update)
for epoch in range(100):
    y_pred = model(x_data)            # forward: predict
    loss = criterion(y_pred, y_data)  # forward: loss
    print(epoch, loss.item())
    optimizer.zero_grad()  # the grad computed by .backward() is accumulated, so zero it before each backward pass
    loss.backward()        # backward: autograd computes the gradients automatically
    optimizer.step()       # update: adjust the values of w and b

# Print the trained weight and bias, then test
print('w = ', model.linear.weight.item())
print('b = ', model.linear.bias.item())
x_test = torch.tensor([[4.0]])
y_test = model(x_test)
print('y_pred = ', y_test.data)
Result of the run: the template prints the loss for each epoch, then the learned w and b, and finally the prediction for x = 4.
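One refinement worth noting for the final test step: wrapping inference in torch.no_grad() skips building the computational graph. A minimal sketch, reusing the trained model from the template above:

x_test = torch.tensor([[4.0]])
with torch.no_grad():  # no graph is built, so no memory is spent tracking gradients
    y_test = model(x_test)
print('y_pred = ', y_test.item())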
The four main steps
Note that when preparing the data, the values of X and Y must be matrices (here 3 × 1: one row per sample).
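To make the shape requirement concrete, here is a minimal standalone sketch (the tensor names are illustrative) of why a 3 × 1 matrix works where a 1-D tensor does not:

import torch

layer = torch.nn.Linear(1, 1)

x_matrix = torch.tensor([[1.0], [2.0], [3.0]])  # shape (3, 1): 3 samples, 1 feature each
print(layer(x_matrix).shape)                    # torch.Size([3, 1])

x_vector = torch.tensor([1.0, 2.0, 3.0])        # shape (3,): no feature dimension
# layer(x_vector) raises a RuntimeError: a 1-D input of length 3 looks like
# a single sample with 3 features, but the layer expects 1 input feature.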
With PyTorch there is no need to compute derivatives by hand any more; the focus shifts to how to construct the computational graph.
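As a minimal sketch of what the computational graph buys us, here is the same model written with a raw parameter instead of nn.Linear (w is an illustrative standalone tensor, not part of the template above); autograd computes the derivative we would otherwise derive by hand:

import torch

w = torch.tensor([[1.0]], requires_grad=True)  # tracked: every op on w extends the graph

x = torch.tensor([[1.0], [2.0], [3.0]])
y = torch.tensor([[2.0], [4.0], [6.0]])

loss = ((x.matmul(w) - y) ** 2).sum()  # forward pass builds the graph
loss.backward()                        # autograd walks the graph backward
print(w.grad)                          # d(loss)/dw, no hand-derived formula needed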
Key points in the code
- In PyTorch the computational graph works in mini-batch fashion, so X and Y are 3 × 1 tensors.
- Our model class should inherit from nn.Module, the base class for all neural network modules.
- The member methods __init__() and forward() have to be implemented.
- Class nn.Linear contains two member tensors: weight and bias.
- Class nn.Linear implements the magic method __call__(), which enables an instance of the class to be called just like a function; normally forward() is then invoked. Pythonic!
- torch.nn.MSELoss also inherits from nn.Module.
- NOTICE: the grad computed by .backward() is accumulated, so remember to zero the grad before each backward pass (see the sketch after this list).
- Because of the magic method __call__, calling model(x_data) automatically invokes model.forward(x_data).
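A small standalone sketch of the last two points, showing that calling a module goes through __call__ into forward(), and that gradients accumulate across backward() calls until zeroed:

import torch

model = torch.nn.Linear(1, 1)
x = torch.tensor([[1.0]])

print(model(x))          # __call__ dispatches to model.forward(x)
print(model.forward(x))  # same result, but bypasses PyTorch's hooks; prefer model(x)

# Gradient accumulation: two backward passes without zeroing sum their grads
loss = (model(x) ** 2).sum()
loss.backward()
g1 = model.weight.grad.clone()
loss2 = (model(x) ** 2).sum()
loss2.backward()
print(model.weight.grad)  # equals 2 * g1: the second pass added onto the first
model.zero_grad()         # reset the accumulated gradients to start fresh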
Other optimizers
• torch.optim.Adagrad
• torch.optim.Adam
• torch.optim.Adamax
• torch.optim.ASGD
• torch.optim.LBFGS
• torch.optim.RMSprop
• torch.optim.Rprop
• torch.optim.SGD
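Any of these can replace SGD in step 3 without changing the rest of the template; only the construction line differs. For example, with Adam (the learning rate here is illustrative):

optimizer = torch.optim.Adam(model.parameters(), lr=0.01)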