PyTorch Deep Learning Practice — Lecture 5

IV. Implementing Linear Regression with PyTorch

Course link: PyTorch Deep Learning Practice — Implementing Linear Regression with PyTorch

1. PyTorch Fashion (the standard four-step workflow)

① Prepare the dataset

import torch

x_data = torch.Tensor([[1.0], [2.0], [3.0]])
y_data = torch.Tensor([[2.0], [4.0], [6.0]])
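Both tensors are 3×1 matrices: each row is one sample and each column one feature, which is the shape nn.Linear expects. A quick shape check (a minimal sketch, not part of the original lecture code):

print(x_data.shape)  # torch.Size([3, 1]) — 3 samples, 1 feature each
print(y_data.shape)  # torch.Size([3, 1])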
② Design the model as a class, which builds the computational graph. It inherits from nn.Module, the base class for all neural network modules.
class LinearModel(torch.nn.Module):
    def __init__(self):
        super(LinearModel, self).__init__()  # call the parent class constructor
        # Construct a Linear object. It holds two parameters, w and b.
        # Arguments: input size, output size, bias=True/False (default: True)
        self.linear = torch.nn.Linear(1, 1)

    def forward(self, x):
        # nn.Linear implements __call__(), so an instance can be invoked
        # like a function — it is callable
        y_pred = self.linear(x)
        return y_pred

model = LinearModel()   # create an instance of LinearModel; it is also callable
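Because nn.Module implements __call__(), calling the model instance dispatches to forward(). A minimal sketch to verify this (not part of the original code):

x = torch.Tensor([[5.0]])
print(model(x))          # goes through nn.Module.__call__, which invokes forward(x)
print(model.forward(x))  # same output, but skips PyTorch's hooks — prefer model(x)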
③ Construct the loss function and optimizer using the PyTorch API
# MSELoss once took size_average=True/False (average or not) and reduce=True/False
# (reduce to a scalar or not); both are deprecated in favor of reduction='sum'/'mean'/'none'
criterion = torch.nn.MSELoss(reduction='sum')
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
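SGD is only one choice; every optimizer in torch.optim takes model.parameters() plus its own hyperparameters, so swapping one in is a one-line change. A hedged sketch with Adam (not the optimizer used for the results below):

# optimizer = torch.optim.Adam(model.parameters(), lr=0.01)  # alternative to SGD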
④ Write the training cycle: forward, backward, update
# training cycle
epoch_list, loss_list = [], []  # record the loss curve for plotting
for epoch in range(100):
    y_pred = model(x_data)              # forward: predict
    loss = criterion(y_pred, y_data)    # forward: compute the loss

    epoch_list.append(epoch)
    loss_list.append(loss.item())

    optimizer.zero_grad()   # gradients accumulate, so clear them before backward
    loss.backward()         # backward: autograd computes the gradients
    optimizer.step()        # update the parameters
    if epoch % 10 == 0:     # print the weight and bias
        print('w = ', model.linear.weight.item())
        print('b = ', model.linear.bias.item())
        print('Epoch = %d\tLoss = %.4f' % (epoch, loss.item()))
        print('------------------------')
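The loss recorded in epoch_list and loss_list can be plotted to reproduce the loss curve shown below — a minimal sketch, assuming matplotlib is available:

import matplotlib.pyplot as plt

plt.plot(epoch_list, loss_list)
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.show()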
⑤ Test the model
# test the model
x_test = torch.Tensor([[4.0]])
y_test = model(x_test)
print("predict after training: ", x_test.item(), y_test.item())

2. Results

w = 1.1723095178604126
b = 0.4983179271221161
Epoch = 0 Loss = 11.8231


w = 1.6954121589660645
b = 0.668215811252594
Epoch = 10 Loss = 0.2008


w = 1.7248144149780273
b = 0.6251384615898132
Epoch = 20 Loss = 0.1707


w = 1.7441707849502563
b = 0.5815526247024536
Epoch = 30 Loss = 0.1477


w = 1.7620357275009155
b = 0.5409485697746277
Epoch = 40 Loss = 0.1278


w = 1.7786508798599243
b = 0.503178596496582
Epoch = 50 Loss = 0.1106


w = 1.794105887413025
b = 0.46804574131965637
Epoch = 60 Loss = 0.0957


w = 1.8084818124771118
b = 0.43536585569381714
Epoch = 70 Loss = 0.0828


w = 1.8218539953231812
b = 0.4049677550792694
Epoch = 80 Loss = 0.0716


w = 1.8342925310134888
b = 0.37669217586517334
Epoch = 90 Loss = 0.0620


predict after training: 4.0 7.731907367706299
[Figure: loss curve over 100 training epochs]

Result analysis: after 100 iterations, the final prediction is still not very accurate. Increasing the number of iterations to 1000 gives the following results:

w = 0.4194898009300232
b = 0.4275916814804077
Epoch = 0 Loss = 61.1046


w = 1.7993148565292358
b = 0.456204354763031
Epoch = 100 Loss = 0.0909


w = 1.9026858806610107
b = 0.22121794521808624
Epoch = 200 Loss = 0.0214


w = 1.9528114795684814
b = 0.10727071017026901
Epoch = 300 Loss = 0.0050


w = 1.9771177768707275
b = 0.052016764879226685
Epoch = 400 Loss = 0.0012


w = 1.9889039993286133
b = 0.02522360160946846
Epoch = 500 Loss = 0.0003


w = 1.994619369506836
b = 0.012231146916747093
Epoch = 600 Loss = 0.0001


w = 1.997390866279602
b = 0.0059309788048267365
Epoch = 700 Loss = 0.0000


w = 1.9987348318099976
b = 0.0028759825509041548
Epoch = 800 Loss = 0.0000


w = 1.9993865489959717
b = 0.0013944411184638739
Epoch = 900 Loss = 0.0000


predict after training: 4.0 7.99948263168335
[Figure: loss curve over 1000 training epochs]

Now the value of $\hat{y}$ is very close to $y$.
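As a quick sanity check with the last printed parameters (epoch 900): $\hat{y}(4) = w \cdot 4 + b \approx 1.99939 \times 4 + 0.00139 \approx 7.9989$, consistent with the final prediction 7.9995 (the parameters continue to update through epoch 999).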