PyTorch Hands-On: Linear Regression

1. Computation Workflow

 1) Design the model (input size, output size, forward pass with the required layers)
 2) Construct the loss function and optimizer
 3) Training loop (a minimal sketch of how these steps map onto PyTorch calls follows this list):
      - Forward pass: compute the prediction and the loss
      - Backward pass: compute the gradients
      - Update the weights
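
A minimal sketch of how the three steps above map onto PyTorch calls, using placeholder toy data and hyperparameters (the sections below walk through a complete example):

import torch
import torch.nn as nn

# 1) Design the model: a single linear layer (placeholder sizes)
model = nn.Linear(in_features=1, out_features=1)

# 2) Construct the loss function and optimizer
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# 3) Training loop
x = torch.tensor([[0.0], [1.0], [2.0]])   # toy inputs
y = torch.tensor([[1.0], [3.0], [5.0]])   # toy targets (y = 2x + 1)
for epoch in range(100):
    y_pred = model(x)             # forward: compute prediction
    l = criterion(y_pred, y)      # forward: compute loss
    l.backward()                  # backward: compute gradients
    optimizer.step()              # update the weights
    optimizer.zero_grad()         # clear gradients before the next iteration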

2. Building a Linear Regression Model in PyTorch

2.1. Import the Required Modules

import torch
import torch.nn as nn

2.2. Build the Training Data

X = torch.tensor([[1], [2], [3], [4]], dtype=torch.float32)
Y = torch.tensor([[2], [4], [6], [8]], dtype=torch.float32)
n_samples, n_features = X.shape  # 4, 1 (four samples, each with a single feature)
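
The samples are written as nested lists so that X and Y are 2-D tensors of shape (n_samples, n_features), which is the layout nn.Linear expects. A quick sanity check (the prints are only illustrative):

print(X.shape, Y.shape)        # torch.Size([4, 1]) torch.Size([4, 1])
print(n_samples, n_features)   # 4 1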

2.3. Test Data and the Number of Input/Output Neurons

X_test = torch.tensor([5], dtype=torch.float32)
input_size = n_features
output_size = n_features
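
Note that X_test is a 1-D tensor of shape (1,). nn.Linear accepts any input whose last dimension equals in_features, so this works; if you prefer to keep the same 2-D layout as the training data, you can write it as:

X_test = torch.tensor([[5]], dtype=torch.float32)  # shape (1, 1): one sample, one feature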

2.4. Define and Instantiate the Model

class LinearRegression(nn.Module):
    def __init__(self, input_dim, output_dim):
        super(LinearRegression, self).__init__()
        self.lin = nn.Linear(input_dim, output_dim)  # single linear layer: y = w*x + b
    def forward(self, x):
        return self.lin(x)

model = LinearRegression(input_size, output_size)
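
Since this class only wraps a single nn.Linear layer, an equivalent shortcut is to use the layer itself as the model; defining a custom nn.Module subclass pays off once the forward pass contains more than one layer:

# Equivalent for this one-layer case:
model = nn.Linear(input_size, output_size)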

2.5. Training

print(f'Prediction before training: f(5) = {model(X_test).item():.3f}')
learning_rate = 0.1
n_iters = 200

loss = nn.MSELoss()  # loss function
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)  # optimizer

for epoch in range(n_iters):
    y_predicted = model(X)        # forward pass: compute prediction
    l = loss(y_predicted, Y)      # compute loss
    l.backward()                  # backward pass: compute gradients
    optimizer.step()              # update the weights
    optimizer.zero_grad()         # reset gradients before the next iteration
    if epoch % 10 == 0:
        [w, b] = model.parameters()
        print(f'epoch {epoch+1}: w = {w.item():.3f}, loss = {l.item():.8f}')
print(f'Prediction after training: f(5) = {model(X_test).item():.3f}')
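
Since the training data follows f(x) = 2x exactly, the learned weight should approach 2 and the bias should approach 0, and the prediction for x = 5 should end up close to 10. For evaluation it is good practice to disable gradient tracking; a small follow-up sketch:

with torch.no_grad():             # no gradients needed for evaluation
    print(model(X_test).item())   # should be close to 10.0

[w, b] = model.parameters()
print(w.item(), b.item())         # w should approach 2.0, b should approach 0.0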
