Learning PyTorch 1.0: Implementing Gradient Descent and Linear Regression by Hand

1. Implementing a simple gradient descent in plain Python/numpy and in PyTorch 1.0

Environment: PyTorch 1.0

##############################
# A simple gradient descent without PyTorch
##############################

x = 0  # starting point

# learning rate
learning_rate = 0.1

# number of iterations
epochs = 10

# define a simple function with a lambda and pretend it is the loss :)
y = lambda x: x**2 + 2*x + 1

for epoch in range(epochs):
    dx = 2*x + 2  # gradient of y with respect to x
    x = x - learning_rate*dx  # gradient descent update
    print('x:',x,'y:',y(x))
x: -0.2 y: 0.64
x: -0.36000000000000004 y: 0.40959999999999996
x: -0.488 y: 0.26214400000000004
x: -0.5904 y: 0.16777215999999995
x: -0.67232 y: 0.10737418239999996
x: -0.7378560000000001 y: 0.06871947673599998
x: -0.7902848 y: 0.043980465111040035
x: -0.83222784 y: 0.028147497671065613
x: -0.865782272 y: 0.018014398509481944
x: -0.8926258176 y: 0.011529215046068408
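The same loop can be written with numpy, which matters once x is a vector of parameters rather than a scalar. A minimal sketch of that variant (the two-parameter function below is my own illustration, not from the original):

import numpy as np

x = np.zeros(2)  # two parameters instead of one
learning_rate = 0.1

# y = (x0 + 1)^2 + (x1 - 2)^2, minimized at (-1, 2)
y = lambda x: (x[0] + 1)**2 + (x[1] - 2)**2

for epoch in range(10):
    dx = np.array([2*x[0] + 2, 2*x[1] - 4])  # hand-computed gradient
    x = x - learning_rate*dx                 # vectorized update
    print('x:', x, 'y:', y(x))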
###############################
# A simple gradient descent with PyTorch
###############################
import torch

# set the initial value
x = torch.Tensor([0])
x.requires_grad_(True)

# define a function ourselves and pretend it is the loss; it is rebuilt inside the loop below
learning_rate = torch.Tensor([0.1])
epochs = 15

for epoch in range(epochs):
    y = x**2 + 2*x + 1
    # backpropagate to compute the gradient dy/dx
    y.backward()
    print('x=', x.data, 'y=', y.data)

    x.data = x.data - learning_rate*x.grad.data
    # in PyTorch gradients accumulate if not cleared: every backward() adds into
    # x.grad, so it must be zeroed manually before the next iteration
    x.grad.data.zero_()
    

x= tensor([0.]) y= tensor([1.])
x= tensor([-0.2000]) y= tensor([0.6400])
x= tensor([-0.3600]) y= tensor([0.4096])
x= tensor([-0.4880]) y= tensor([0.2621])
x= tensor([-0.5904]) y= tensor([0.1678])
x= tensor([-0.6723]) y= tensor([0.1074])
x= tensor([-0.7379]) y= tensor([0.0687])
x= tensor([-0.7903]) y= tensor([0.0440])
x= tensor([-0.8322]) y= tensor([0.0281])
x= tensor([-0.8658]) y= tensor([0.0180])
x= tensor([-0.8926]) y= tensor([0.0115])
x= tensor([-0.9141]) y= tensor([0.0074])
x= tensor([-0.9313]) y= tensor([0.0047])
x= tensor([-0.9450]) y= tensor([0.0030])
x= tensor([-0.9560]) y= tensor([0.0019])
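The manual x.data update above works, but the same loop is usually written with an optimizer, which performs both the update step and the gradient zeroing. A minimal sketch of that variant (my own rewrite of the loop above, not from the original):

import torch

x = torch.zeros(1, requires_grad=True)
optimizer = torch.optim.SGD([x], lr=0.1)

for epoch in range(15):
    y = x**2 + 2*x + 1
    optimizer.zero_grad()  # clear any accumulated gradient
    y.backward()           # compute dy/dx
    optimizer.step()       # x <- x - lr * x.grad
    print('x=', x.data, 'y=', y.data)

This is exactly the pattern used in the linear regression example below.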

2. Implementing a simple linear regression with PyTorch

import torch 
#print(torch.__version__)

# training data: y = 2x, reshaped into column vectors of shape (3, 1)
x_data = torch.arange(1.0, 4.0, 1.0).view(-1, 1)  # [[1.], [2.], [3.]]
y_data = torch.arange(2.0, 7.0, 2.0).view(-1, 1)  # [[2.], [4.], [6.]]

# hyperparameters
learning_rate = 0.1
num_epochs = 40

# linear regression model
class LinearRegression(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(1, 1)  # 1 input feature, 1 output feature
        
    def forward(self,x):
        y_pred = self.linear(x)
        return y_pred

model = LinearRegression()
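# Subclassing nn.Module is the general pattern; for a single layer this is
# equivalent to the one-liner below (an alternative sketch, not from the original):
# model = torch.nn.Sequential(torch.nn.Linear(1, 1))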

# define the loss function and the optimizer
# since PyTorch 0.4, the reduction parameter controls how the loss is aggregated
criterion = torch.nn.MSELoss(reduction='mean')
# nn.Parameter: a kind of Tensor that is automatically registered as a parameter when
# assigned as an attribute of a Module; model.parameters() hands them to the optimizer
optimizer = torch.optim.SGD(model.parameters(),lr=learning_rate)

# train the model
for epoch in range(num_epochs):
    # forward pass
    y_pred = model(x_data)

    # compute the loss
    loss = criterion(y_pred, y_data)

    print(epoch, "epoch's loss:", loss.item())
    
    # backward: zero gradients + backward + step
    optimizer.zero_grad()
    loss.backward()  
    optimizer.step()  # perform one gradient descent step
    
# testing
x_test = torch.Tensor([4.0])
print("the result of y when x is 4:",model(x_test))
print('model.parameter:',list(model.parameters()))
#print(type(list(model.parameters())))
#print(list(model.parameters())[1])
0 epoch's loss: 42.43438720703125
1 epoch's loss: 0.534014880657196
2 epoch's loss: 0.03221800923347473
3 epoch's loss: 0.024996554479002953
4 epoch's loss: 0.023741230368614197
5 epoch's loss: 0.022612696513533592
6 epoch's loss: 0.021538566797971725
7 epoch's loss: 0.02051546983420849
8 epoch's loss: 0.019540980458259583
9 epoch's loss: 0.01861274614930153
10 epoch's loss: 0.01772867701947689
11 epoch's loss: 0.016886496916413307
12 epoch's loss: 0.016084380447864532
13 epoch's loss: 0.015320354141294956
14 epoch's loss: 0.014592642895877361
15 epoch's loss: 0.013899452984333038
16 epoch's loss: 0.013239205814898014
17 epoch's loss: 0.012610377743840218
18 epoch's loss: 0.012011360377073288
19 epoch's loss: 0.011440831236541271
20 epoch's loss: 0.010897384956479073
21 epoch's loss: 0.010379730723798275
22 epoch's loss: 0.009886668063700199
23 epoch's loss: 0.009417060762643814
24 epoch's loss: 0.00896975677460432
25 epoch's loss: 0.008543687872588634
26 epoch's loss: 0.008137861266732216
27 epoch's loss: 0.007751287426799536
28 epoch's loss: 0.007383116986602545
29 epoch's loss: 0.007032420951873064
30 epoch's loss: 0.006698352284729481
31 epoch's loss: 0.006380194798111916
32 epoch's loss: 0.006077127531170845
33 epoch's loss: 0.00578846363350749
34 epoch's loss: 0.005513496696949005
35 epoch's loss: 0.00525160226970911
36 epoch's loss: 0.005002149846404791
37 epoch's loss: 0.004764544311910868
38 epoch's loss: 0.004538213834166527
39 epoch's loss: 0.0043226489797234535
the result of y when x is 4: tensor([7.8713], grad_fn=<AddBackward0>)
model.parameter: [Parameter containing:
tensor([[1.9255]], requires_grad=True), Parameter containing:
tensor([0.1694], requires_grad=True)]
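Since the training data satisfies y = 2x exactly, the learned parameters should approach weight 2 and bias 0 (after 40 epochs they are 1.9255 and 0.1694, still converging), and the prediction at x = 4 should approach 8. A quick sanity check against the closed-form least-squares solution (a sketch of mine, not part of the original post):

import torch

x_data = torch.arange(1.0, 4.0, 1.0).view(-1, 1)
y_data = torch.arange(2.0, 7.0, 2.0).view(-1, 1)

# augment x with a column of ones so the solution includes the bias
X = torch.cat([x_data, torch.ones_like(x_data)], dim=1)
# solve the normal equations: theta = (X^T X)^{-1} X^T y
theta = torch.inverse(X.t() @ X) @ X.t() @ y_data
print(theta)  # -> weight 2.0, bias 0.0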