PyTorch Deep Learning Practice Notes 5 (Bilibili: 刘二大人)

🎬 About me: a full-stack engineer's road to leveling up!
📋 My column: PyTorch deep learning
🎀 CSDN homepage: 发狂的小花
🌄 Life motto: the essence of learning is relentless repetition!

The videos are from 刘二大人 on Bilibili.

Contents

1 Linear Regression

2 DataLoader: the data loading mechanism

3 Code


1 Linear Regression


Implementing it with PyTorch follows these steps:
PyTorch fashion (the standard style); a minimal skeleton of the four steps is sketched right after this list:

  1. prepare dataset
  2. design model using Class, i.e. the forward pass that computes y_pred
  3. construct loss and optimizer: compute the loss, let the optimizer update w
  4. training cycle (forward, backward, update)
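
As a minimal sketch of these four steps (using the same toy y = 2x data as the later sections; the full version with mini-batch loading appears in section 3):

import torch

# 1. prepare dataset (toy data: y = 2x)
x_data = torch.tensor([[1.0], [2.0], [3.0]])
y_data = torch.tensor([[2.0], [4.0], [6.0]])

# 2. design model using Class
class LinearModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(1, 1)

    def forward(self, x):
        return self.linear(x)

model = LinearModel()

# 3. construct loss and optimizer
criterion = torch.nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# 4. training cycle: forward, backward, update
for epoch in range(100):
    y_pred = model(x_data)            # forward
    loss = criterion(y_pred, y_data)
    optimizer.zero_grad()
    loss.backward()                   # backward
    optimizer.step()                  # update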




2 DataLoader: the data loading mechanism

 

  • PyTorch's data loading mechanism (a custom Dataset sketch follows the mini-batch example below)

一文搞懂Pytorch数据读取机制!_pytorch的batch读取数据-CSDN博客

  • Mini-batch data loading
import torch  
import torch.utils.data as Data  
  
BATCH_SIZE = 3

# toy dataset: y = 2x
x_data = torch.tensor([[1.0],[2.0],[3.0],[4.0],[5.0],[6.0],[7.0],[8.0],[9.0]])
y_data = torch.tensor([[2.0],[4.0],[6.0],[8.0],[10.0],[12.0],[14.0],[16.0],[18.0]])

# wrap the tensors into a Dataset so the DataLoader can index and batch them
dataset = Data.TensorDataset(x_data, y_data)

loader = Data.DataLoader(
    dataset=dataset,
    batch_size=BATCH_SIZE,   # 3 samples per mini-batch
    shuffle=True,            # reshuffle the samples every epoch
    num_workers=0            # load data in the main process
)
  
for epoch in range(3):  
    for step, (batch_x, batch_y) in enumerate(loader):  
        print('epoch', epoch,  
              '| step:', step,  
              '| batch_x', batch_x,  
              '| batch_y:', batch_y)  
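
The linked article covers the machinery behind this: DataLoader can batch and shuffle any object that implements the Dataset interface. As a minimal illustrative sketch (the ToyDataset class below is my own example, not from the lecture), a custom Dataset only needs __init__, __getitem__ and __len__:

import torch
from torch.utils.data import Dataset, DataLoader

class ToyDataset(Dataset):           # hypothetical example class
    def __init__(self, x, y):
        self.x = x                   # features, shape (N, 1)
        self.y = y                   # labels, shape (N, 1)

    def __getitem__(self, index):    # DataLoader calls this to fetch one sample
        return self.x[index], self.y[index]

    def __len__(self):               # DataLoader calls this to learn the dataset size
        return self.x.shape[0]

toy = ToyDataset(torch.arange(1.0, 10.0).view(-1, 1),
                 torch.arange(2.0, 20.0, 2.0).view(-1, 1))
toy_loader = DataLoader(toy, batch_size=3, shuffle=True)

The TensorDataset used above is simply a ready-made implementation of this same interface.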




3 Code

import torch
import torch.utils.data as Data 
import matplotlib.pyplot as plt 
# prepare dataset

BATCH_SIZE = 3

epoch_list = []
loss_list = []

x_data = torch.tensor([[1.0],[2.0],[3.0],[4.0],[5.0],[6.0],[7.0],[8.0],[9.0]])
y_data = torch.tensor([[2.0],[4.0],[6.0],[8.0],[10.0],[12.0],[14.0],[16.0],[18.0]])

dataset = Data.TensorDataset(x_data,y_data)

loader = Data.DataLoader(  
    dataset=dataset,  
    batch_size=BATCH_SIZE,  
    shuffle=True,  
    num_workers=0  
)
 
# design model using Class
"""
Our model class should inherit from nn.Module, which is the base class for all neural network modules.
The member methods __init__() and forward() have to be implemented.
nn.Linear contains two member Tensors: weight and bias.
nn.Linear also implements the magic method __call__(), which lets an instance of the class be
called just like a function; calling it normally ends up invoking forward().
"""
class LinearModel(torch.nn.Module):
    def __init__(self):
        super(LinearModel, self).__init__()
        # (1, 1) are the feature dimensions of the input x and the output y; in this dataset both are 1-dimensional
        # the learnable parameters of this linear layer are w and b, accessed via linear.weight / linear.bias
        self.linear = torch.nn.Linear(1, 1)
 
    def forward(self, x):
        y_pred = self.linear(x)
        return y_pred
 
model = LinearModel()
 
# construct loss and optimizer
# criterion = torch.nn.MSELoss(size_average = False)  # size_average is deprecated; use reduction instead
criterion = torch.nn.MSELoss(reduction = 'sum')  # sum the squared errors over the batch
optimizer = torch.optim.SGD(model.parameters(), lr = 0.01) 
 
# training cycle forward, backward, update
for epoch in range(1000):  
    for iteration, (batch_x, batch_y) in enumerate(loader):  
        y_pred = model(batch_x)  # forward
        loss = criterion(y_pred, batch_y)  # compute the loss
        # print("epoch: ",epoch, " iteration: ",iteration," loss: ",loss.item())

        optimizer.zero_grad()  # gradients computed by .backward() accumulate, so zero them before each backward pass
        loss.backward()  # backward: autograd computes the gradients
        optimizer.step()  # update: the optimizer updates the parameters w and b
    print("epoch: ",epoch, " loss: ",loss.item())
    epoch_list.append(epoch)
    loss_list.append(loss.data.item())
    if (loss.data.item() < 1e-7):
        print("Epoch: ",epoch+1,"loss is: ",loss.data.item(),"(w,b): ","(",model.linear.weight.item(),",",model.linear.bias.item(),")")
        break

print('w = ', model.linear.weight.item())
print('b = ', model.linear.bias.item())
 
x_test = torch.tensor([[10.0]])
y_test = model(x_test)
print('y_pred = ', y_test.data)

plt.plot(epoch_list,loss_list)
plt.title("SGD")
plt.xlabel("epoch")
plt.ylabel("loss")
plt.savefig("./data/pytorch4.png")

  • Results with several different optimizers (an optimizer-swapping sketch follows the links below):

Pytorch优化器全总结(三)牛顿法、BFGS、L-BFGS 含代码

pytorch LBFGS_lbfgs优化器-CSDN博客

scg.step() missing 1 required positiona-CSDN博客
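
The comparison boils down to swapping one line in the section 3 training script. Which optimizers and learning rates were actually tried is not spelled out here, so the choices below are illustrative assumptions:

# swap in a different optimizer; the rest of the training loop stays the same
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
# optimizer = torch.optim.Adagrad(model.parameters(), lr=0.01)
# optimizer = torch.optim.RMSprop(model.parameters(), lr=0.01)
# optimizer = torch.optim.ASGD(model.parameters(), lr=0.01)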


  • LBFGS code

import torch
import torch.utils.data as Data 
import matplotlib.pyplot as plt 
# prepare dataset

BATCH_SIZE = 3

epoch_list = []
loss_list = []

x_data = torch.tensor([[1.0],[2.0],[3.0],[4.0],[5.0],[6.0],[7.0],[8.0],[9.0]])
y_data = torch.tensor([[2.0],[4.0],[6.0],[8.0],[10.0],[12.0],[14.0],[16.0],[18.0]])

dataset = Data.TensorDataset(x_data,y_data)

loader = Data.DataLoader(  
    dataset=dataset,  
    batch_size=BATCH_SIZE,  
    shuffle=True,  
    num_workers=0  
)
 
# design model using Class
"""
Our model class should inherit from nn.Module, which is the base class for all neural network modules.
The member methods __init__() and forward() have to be implemented.
nn.Linear contains two member Tensors: weight and bias.
nn.Linear also implements the magic method __call__(), which lets an instance of the class be
called just like a function; calling it normally ends up invoking forward().
"""
class LinearModel(torch.nn.Module):
    def __init__(self):
        super(LinearModel, self).__init__()
        # (1, 1) are the feature dimensions of the input x and the output y; in this dataset both are 1-dimensional
        # the learnable parameters of this linear layer are w and b, accessed via linear.weight / linear.bias
        self.linear = torch.nn.Linear(1, 1)
 
    def forward(self, x):
        y_pred = self.linear(x)
        return y_pred
 
model = LinearModel()
 
# construct loss and optimizer
# criterion = torch.nn.MSELoss(size_average = False)  # size_average is deprecated; use reduction instead
criterion = torch.nn.MSELoss(reduction = 'sum')  # sum the squared errors over the batch
optimizer = torch.optim.LBFGS(model.parameters(), lr = 0.1)  # model.parameters() hands the learnable parameters (w and b) to the optimizer; initialization already happened inside nn.Linear
 

loss = torch.tensor([1000.])  # sentinel so loss is defined before the first optimizer step
# training cycle forward, backward, update
for epoch in range(1000):  
    for iteration, (batch_x, batch_y) in enumerate(loader):
        def closure():
            # LBFGS may re-evaluate the model several times per step, so forward/backward live in a closure
            y_pred = model(batch_x)  # forward
            loss = criterion(y_pred, batch_y)  # compute the loss
            # print("epoch: ",epoch, " iteration: ",iteration," loss: ",loss.item())

            optimizer.zero_grad()  # gradients computed by .backward() accumulate, so zero them before each backward pass
            loss.backward()  # backward: autograd computes the gradients
            return loss
        loss = optimizer.step(closure)  # update: step() calls the closure itself and returns its loss
        
    print("epoch: ",epoch, " loss: ",loss.item())
    epoch_list.append(epoch)
    loss_list.append(loss.data.item())
    if (loss.data.item() < 1e-7):
        print("Epoch: ",epoch+1,"loss is: ",loss.data.item(),"(w,b): ","(",model.linear.weight.item(),",",model.linear.bias.item(),")")
        break

print('w = ', model.linear.weight.item())
print('b = ', model.linear.bias.item())
 
x_test = torch.tensor([[10.0]])
y_test = model(x_test)
print('y_pred = ', y_test.data)

plt.plot(epoch_list,loss_list)
plt.title("LBFGS(lr = 0.1)")
plt.xlabel("epoch")
plt.ylabel("loss")
plt.savefig("./data/pytorch4.png")

  • Rprop:

Rprop (resilient backpropagation) adapts the step size per parameter and is designed for full-batch training. It does not suit mini-batches, so it is rarely seen now that mini-batch training dominates. A full-batch sketch follows below.
Pros: it adjusts the effective step sizes automatically, with no manual tuning needed.
Cons: it still depends on a manually chosen global learning rate, and as the number of iterations grows the learning rate gets smaller and smaller, eventually approaching 0.
Result: neither changing the learning rate nor the number of epochs made it perform well; it could not converge under the 1e-7 loss threshold.
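
A hedged sketch of how Rprop could be tried on this problem: since it is meant for full-batch training, the whole dataset is fed each step instead of the DataLoader mini-batches (the learning rate and epoch count below are my assumptions, not values from the post):

# full-batch training with Rprop: use all of x_data / y_data in every step
optimizer = torch.optim.Rprop(model.parameters(), lr=0.01)

for epoch in range(1000):
    y_pred = model(x_data)                # forward pass on the full dataset
    loss = criterion(y_pred, y_data)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                      # Rprop adapts each parameter's step from the sign of its gradient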



 

🌈 That's all for my sharing this time 🌈
If it helped you in any way, that would be wonderful!
If anything is lacking, please point it out so we can learn and discuss together!
📢 Future tycoons: like 👍 → favorite ⭐ → follow 🔍, and a comment would be a delightful surprise!
Thanks for reading and for your support! Finally, ☺ I wish you all make money every day!!! Welcome to follow!
