Deep Learning Day 05: Implementing Linear Regression with PyTorch

Code Template

import torch

# Step 1: Prepare dataset
# x and y are matrices with 3 rows and 1 column: 3 samples, each with a single feature
x_data = torch.tensor([[1.0], [2.0], [3.0]])
y_data = torch.tensor([[2.0], [4.0], [6.0]])

# Step 2: Design model using class (inherit from nn.Module)
class LinearModel(torch.nn.Module):
    def __init__(self):
        super(LinearModel, self).__init__()
        # (1, 1) are the feature dimensions of the input x and the output y; both are 1-dimensional here
        # The learnable parameters of this linear layer are w and b, accessible as linear.weight / linear.bias
        self.linear = torch.nn.Linear(1, 1)  # w and b are initialized randomly, not set to fixed values

    def forward(self, x):
        y_pred = self.linear(x)
        return y_pred
model = LinearModel()

# Step 3: Construct loss and optimizer (using the PyTorch API)
# criterion = torch.nn.MSELoss(size_average=False)  # older, deprecated spelling of reduction='sum'
criterion = torch.nn.MSELoss(reduction='sum')
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # model.parameters() collects all learnable parameters (w and b) for the optimizer

# Step 4: Training cycle (forward, backward, update)
for epoch in range(100):
    y_pred = model(x_data)  # forward:predict
    loss = criterion(y_pred, y_data)  # forward: loss

    print(epoch, loss.item())
    optimizer.zero_grad()  # gradients computed by .backward() are accumulated, so reset them to zero before each backward pass

    loss.backward()  # backward: autograd computes the gradients automatically
    optimizer.step()  # update: apply the gradient step to w and b

# Print the final trained weight and bias, then test on a new input
print('w = ', model.linear.weight.item())
print('b = ', model.linear.bias.item())
x_test = torch.tensor([[4.0]])
y_test = model(x_test)
print('y_pred = ', y_test.data)

Run result:
[Figure: console output of the training loop (loss per epoch, final w, b, and y_pred)]
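A side note on step 3: size_average=False is the older, deprecated spelling of reduction='sum'. Below is a minimal sketch of the difference between the two reduction modes; the tensor values are purely illustrative.

import torch

pred = torch.tensor([[2.5], [4.5], [6.5]])
target = torch.tensor([[2.0], [4.0], [6.0]])
print(torch.nn.MSELoss(reduction='sum')(pred, target))   # 0.75: sum of the three squared errors
print(torch.nn.MSELoss(reduction='mean')(pred, target))  # 0.25: averaged over the three samples

With reduction='sum' the loss and its gradients grow with the number of samples, so the learning rate may need to be scaled down on larger datasets.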

The Four Main Steps

[Figure: overview of the four steps: prepare dataset, design model, construct loss and optimizer, training cycle]
Note that the prepared data X and Y must be matrices (2-D tensors); a minimal shape check is sketched just below.
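A minimal sketch of the shape requirement, assuming the same nn.Linear(1, 1) layer as in the template; the 1-D tensor is included only to show the failure mode.

import torch

linear = torch.nn.Linear(1, 1)
ok = torch.tensor([[1.0], [2.0], [3.0]])  # shape (3, 1): 3 samples, 1 feature each
print(linear(ok).shape)                   # torch.Size([3, 1])
bad = torch.tensor([1.0, 2.0, 3.0])       # shape (3,): the last dimension is 3, not 1
# linear(bad) would raise a RuntimeError because in_features=1 does not match the last dimension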
With PyTorch there is no need to compute derivatives by hand; the focus shifts to how the computational graph is constructed.
[Figure: computational graph of the linear model]

Key Points in the Code

  1. In PyTorch, the computational graph is built in mini-batch fashion, so X and Y are 3 × 1 tensors.
  2. Our model class should inherit from nn.Module, which is the base class for all neural network modules.
  3. The member methods __init__() and forward() have to be implemented.
  4. The class nn.Linear contains two member tensors: weight and bias.
  5. The class nn.Linear implements the magic method __call__(), which lets an instance of the class be called like a function; calling it normally invokes forward(). Pythonic!
  6. torch.nn.MSELoss also inherits from nn.Module.
  7. NOTICE: the gradients computed by .backward() are accumulated, so remember to set them to ZERO before each backward pass (see the sketch after this list).
  8. Because the magic method __call__ is implemented, calling model(x_data) automatically invokes model.forward(x_data).
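A minimal sketch of point 7, showing that gradients accumulate across backward() calls until they are cleared; the layer and tensor values are purely illustrative.

import torch

lin = torch.nn.Linear(1, 1)
x = torch.tensor([[1.0]])
y = torch.tensor([[2.0]])
criterion = torch.nn.MSELoss(reduction='sum')

criterion(lin(x), y).backward()
print(lin.weight.grad)  # gradient from the first backward pass
criterion(lin(x), y).backward()
print(lin.weight.grad)  # doubled: the second gradient was added to the first
lin.zero_grad()         # optimizer.zero_grad() has the same effect for the registered parameters
print(lin.weight.grad)  # cleared (zeros or None, depending on the PyTorch version)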

Other Optimizers

• torch.optim.Adagrad
• torch.optim.Adam
• torch.optim.Adamax
• torch.optim.ASGD
• torch.optim.LBFGS
• torch.optim.RMSprop
• torch.optim.Rprop
• torch.optim.SGD
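Any of these can be dropped in for SGD in step 3 without touching the rest of the training loop. A minimal sketch using Adam; the learning rate here is only an example and may need tuning per optimizer.

optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
# The training cycle (forward, loss, zero_grad, backward, step) stays exactly the same.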
