Notes from Lecture 5 of the course 《PyTorch深度学习实践》 (PyTorch Deep Learning Practice).
The basic workflow for deep learning with PyTorch:
1. Prepare the dataset
2. Define the model (as a class inheriting from nn.Module)
3. Construct the loss and optimizer (using the PyTorch API)
4. Training cycle (forward, backward, update)
1. Prepare the Dataset
import torch
x_data = torch.Tensor([[1.0], [2.0], [3.0]])
y_data = torch.Tensor([[2.0], [4.0], [6.0]])
Both x_data and y_data are 3 × 1 tensors: each row is one sample and each column is one feature.
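A quick shape check (nothing new, just inspecting the tensors defined above); this (batch, features) layout is exactly what nn.Linear expects later:

print(x_data.shape)  # torch.Size([3, 1])
print(y_data.shape)  # torch.Size([3, 1])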
Gradient descent updates the weight against the gradient of the loss: w := w − α · ∂loss/∂w, where α is the learning rate.
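A minimal sketch of this update rule done by hand (my addition, not the lecture's code; it reuses x_data and y_data from step 1 and a hand-made scalar weight w):

w = torch.tensor([[1.0]], requires_grad=True)  # initial guess for the weight
y_pred = x_data @ w                            # forward pass: y = x * w
loss = ((y_pred - y_data) ** 2).sum()          # summed squared error
loss.backward()                                # autograd fills w.grad
with torch.no_grad():
    w -= 0.01 * w.grad                         # w := w - alpha * dloss/dw
    w.grad.zero_()                             # clear for the next iteration

Later in this section, optimizer.step() performs exactly this kind of update for all model parameters.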
2. Model Design
class LinearModel(torch.nn.Module):
    def __init__(self):
        super(LinearModel, self).__init__()
        self.linear = torch.nn.Linear(1, 1)

    def forward(self, x):
        y_pred = self.linear(x)
        return y_pred
model = LinearModel()
Our model must inherit from nn.Module, the base class for all neural network modules.
The methods __init__() and forward() must be overridden.
Regarding the second line of __init__(): the nn.Linear class holds two parameter tensors, weight and bias.
Regarding the first line of forward(): nn.Module implements the magic method __call__(), which lets an instance of the class be called like a function; that call in turn invokes forward().
The last line creates an instance of the LinearModel class.
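A small sketch to verify both points (assuming the model defined above): the weight and bias live inside the Linear submodule, and calling model(x) goes through nn.Module.__call__, which dispatches to forward().

print(model.linear.weight.shape)  # torch.Size([1, 1])
print(model.linear.bias.shape)    # torch.Size([1])
for name, p in model.named_parameters():
    print(name, p.shape)          # linear.weight, linear.bias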
3. Construct the Loss and Optimizer
criterion = torch.nn.MSELoss(reduction='sum')  # size_average=False is deprecated; reduction='sum' is the modern equivalent
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
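As a sanity check (my addition, not from the lecture), the summed MSE that criterion computes can be reproduced by hand:

y_pred = model(x_data)
manual_loss = ((y_pred - y_data) ** 2).sum()  # sum of squared errors
assert torch.isclose(criterion(y_pred, y_data), manual_loss)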
4. Training Cycle
for epoch in range(100):
    y_pred = model(x_data)            # Forward: predict
    loss = criterion(y_pred, y_data)  # Forward: compute loss
    print(epoch, loss.item())         # .item() extracts the scalar value

    optimizer.zero_grad()             # Notice !!!
    loss.backward()                   # Backward: autograd
    optimizer.step()                  # Update the weights
Notice: the gradients computed by .backward() are accumulated, so remember to zero them before each backward pass!
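A minimal demonstration of the accumulation (my addition; it reuses model, criterion, and optimizer from above). Two backward passes without zeroing in between simply add their gradients together:

optimizer.zero_grad()                   # start from clean gradients
y_pred = model(x_data)
criterion(y_pred, y_data).backward()
g1 = model.linear.weight.grad.clone()   # gradient after one backward pass
y_pred = model(x_data)                  # recompute; the graph is freed after backward()
criterion(y_pred, y_data).backward()    # no zero_grad in between this time
print(torch.allclose(model.linear.weight.grad, 2 * g1))  # True: gradients accumulated
optimizer.zero_grad()                   # clean up again before any real training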
5. Test Model
# Output weight and bias
print('w = ', model.linear.weight.item())
print('b = ', model.linear.bias.item())
# Test Model
x_test = torch.Tensor([[4.0]])
y_test = model(x_test)
print('y_pred = ', y_test.data)
The output: after 100 epochs, w should be close to 2.0 and b close to 0.0, so y_pred comes out close to 8.0 (the data follow y = 2x).
The overall code structure is as follows:
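Assembled from the snippets above into one runnable script (using the modern reduction='sum' argument in place of the deprecated size_average=False):

import torch

# 1. Prepare the dataset (3 samples, 1 feature each)
x_data = torch.Tensor([[1.0], [2.0], [3.0]])
y_data = torch.Tensor([[2.0], [4.0], [6.0]])

# 2. Design the model
class LinearModel(torch.nn.Module):
    def __init__(self):
        super(LinearModel, self).__init__()
        self.linear = torch.nn.Linear(1, 1)

    def forward(self, x):
        return self.linear(x)

model = LinearModel()

# 3. Construct the loss and optimizer
criterion = torch.nn.MSELoss(reduction='sum')
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# 4. Training cycle: forward, backward, update
for epoch in range(100):
    y_pred = model(x_data)
    loss = criterion(y_pred, y_data)
    print(epoch, loss.item())

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# 5. Test the trained model
print('w = ', model.linear.weight.item())
print('b = ', model.linear.bias.item())
x_test = torch.Tensor([[4.0]])
print('y_pred = ', model(x_test).item())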