Machine Learning Training Algorithms 13 (Model Training Algorithms: PyTorch Experiments)

PyTorch is widely used to train models in machine learning. To make the underlying principles easier to understand, this article uses a simple example together with short Python programs to search for the model parameters.

1. Test Data

To study the least-squares problem conveniently, the following observations of elapsed time and distance travelled by a vehicle are used for discussion and analysis.

Time t:      0     1      2     3      4       5       6        7       8       9       10
Distance ŷ:  0     11.5   26    43.5   64.12   87.57   114.12   143.5   176.3   211.5   250.12

2. Mathematical Model

Let the vehicle's initial velocity be $v_0$, its acceleration $a$, the time $t$, and the distance $\hat{y}$, and let the $i$-th observation be $(t_i, \hat{y}_i)$. The time $t$ and the distance $\hat{y}$ satisfy the following model:

$$\hat{y}=f(v_0,a,t)=\frac{1}{2}at^2+v_0 t \qquad (\text{Eq. 59})$$

$$\min F(a,v_0)=\frac{1}{2}\sum_{i=1}^{10}\left(f(v_0,a,t_i)-\hat{y}_i\right)^2 \qquad (\text{Eq. 60})$$

Observing Eq. 60, $\hat{y}_i$ and $t_i$ are constants while $a$ and $v_0$ are variables, so this least-squares problem is the problem of minimizing the function $F$ with respect to $a$ and $v_0$.
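Because Eq. 59 is linear in the parameters $a$ and $v_0$, Eq. 60 can also be minimized in closed form as an ordinary linear least-squares problem. The following sketch is not part of the original experiment; it assumes the observation data from Section 1 and a PyTorch version that provides torch.linalg.lstsq, and it computes a reference solution that the gradient-descent results below can be checked against.

import torch

# Observed times t_i and distances y_i from Section 1
t = torch.tensor([0., 1., 2., 3., 4., 5., 6., 7., 8., 9., 10.])
y = torch.tensor([0., 11.5, 26., 43.5, 64.12, 87.57, 114.12, 143.5, 176.3, 211.5, 250.12])

# Design matrix: each row is [0.5*t_i^2, t_i], so that X @ [a, v0] approximates y
X = torch.stack([0.5 * t**2, t], dim=1)

# Solve the linear least-squares problem min ||X @ theta - y||^2
theta = torch.linalg.lstsq(X, y.unsqueeze(1)).solution.squeeze()
print(f'closed-form a  = {theta[0].item()}')
print(f'closed-form v0 = {theta[1].item()}')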

3. PyTorch Experiments

3.1 Computing Derivatives (Example)

3.1.1 Code

The following code is an example of differentiating a quadratic function with PyTorch's autograd.

import torch

# Define the quadratic function y = 0.5*3*x*x + 10*x
def my_function(x):
    a = 3
    b = 10
    return 0.5 * a * x * x + b * x

# Define the test data and mark it as requiring gradient computation
x_test = torch.tensor([[0.0], [1.0], [2.0], [3.0], [4.0], [5.0], [6.0], [7.0], [8.0], [9.0], [10.0]], requires_grad=True)

# Compute the function output
y_test = my_function(x_test)

# Compute the gradient of y with respect to x; since each y[i] depends only on x[i],
# passing torch.ones_like(y_test) as the upstream gradient yields the element-wise dy/dx
y_test.backward(torch.ones_like(y_test))

# Extract the gradient
dy_dx = x_test.grad
print(f'dy/dx: {dy_dx}')

# Set the learning rate and compute the gradient-descent step size
learning_rate = 0.001
print(f'With learning rate {learning_rate}, the gradient-descent step is: {learning_rate * dy_dx}')

# PyTorch does not require explicitly releasing resources here; memory is managed automatically

3.1.2 Log

dy/dx: tensor([[10.],
        [13.],
        [16.],
        [19.],
        [22.],
        [25.],
        [28.],
        [31.],
        [34.],
        [37.],
        [40.]])
With learning rate 0.001, the gradient-descent step is: tensor([[0.0100],
        [0.0130],
        [0.0160],
        [0.0190],
        [0.0220],
        [0.0250],
        [0.0280],
        [0.0310],
        [0.0340],
        [0.0370],
        [0.0400]])
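For reference, the analytic derivative of y = 0.5·3·x² + 10·x is dy/dx = 3x + 10, which agrees with the autograd output above (10 at x = 0, 13 at x = 1, …, 40 at x = 10). A minimal check, assuming the tensors from the code in 3.1.1 are still in scope:

# Analytic derivative of y = 0.5*3*x^2 + 10*x is dy/dx = 3*x + 10
analytic_grad = 3 * x_test.detach() + 10
# The autograd result should match the analytic formula element-wise
print(torch.allclose(dy_dx, analytic_grad))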

3.2 Solving the Mathematical Model (Method 1)

3.2.1 Code

The following code fits the parameters a and v0 of Eq. 59 by gradient descent on a mean-squared-error loss, updating them with torch.optim.SGD.

import torch

# Define the model function y = 0.5*a*x*x + v0*x
def model_function(a, v0, x):
    # Compute the quadratic function output
    return 0.5 * a * x**2 + v0 * x

# Define the loss function (mean squared error)
def custom_loss(y_true, y_pred):
    return torch.mean((y_true - y_pred)**2)

# Prepare the test data
x_test = torch.tensor([[0.0], [1.0], [2.0], [3.0], [4.0], [5.0], [6.0], [7.0], [8.0], [9.0], [10.0]])
y_test = torch.tensor([[0.0], [11.5], [26], [43.5], [64.12], [87.57], [114.12], [143.5], [176.3], [211.5], [250.12]])
learning_rate_test = 0.001

# Initialize the trainable model parameters
arg_a = torch.tensor(1.0, dtype=torch.float32, requires_grad=True)
arg_v0 = torch.tensor(1.0, dtype=torch.float32, requires_grad=True)

# Use a gradient-descent (SGD) optimizer
optimizer = torch.optim.SGD([arg_a, arg_v0], lr=learning_rate_test)

# Iteratively optimize the parameters
for i in range(10000):
    y_test1 = model_function(arg_a, arg_v0, x_test)
    loss_value = custom_loss(y_test, y_test1)

    # Compute the gradients
    loss_value.backward()

    # Manual parameter update (equivalent alternative to the optimizer below)
    # with torch.no_grad():
    #     arg_a -= learning_rate_test * arg_a.grad
    #     arg_v0 -= learning_rate_test * arg_v0.grad
    # arg_a.grad.zero_()
    # arg_v0.grad.zero_()

    # Update the parameters with the optimizer and reset the gradients
    optimizer.step()
    optimizer.zero_grad()

    print(f'{i}->loss: [ {loss_value.item()} ]')
    if loss_value.item() < 0.01:
        break

# Print the optimized parameter values
print("Optimized parameters:")
print(f'a = {arg_a.item()}')
print(f'v0 = {arg_v0.item()}')

3.2.2 Log

1270->loss: [ 0.01044243574142456 ]
1271->loss: [ 0.010407672263681889 ]
1272->loss: [ 0.010373339988291264 ]
1273->loss: [ 0.010339236818253994 ]
1274->loss: [ 0.010305803269147873 ]
1275->loss: [ 0.010271724313497543 ]
1276->loss: [ 0.01023818925023079 ]
1277->loss: [ 0.010205076076090336 ]
1278->loss: [ 0.010173010639846325 ]
1279->loss: [ 0.010140367783606052 ]
1280->loss: [ 0.010108280926942825 ]
1281->loss: [ 0.010076860897243023 ]
1282->loss: [ 0.01004437729716301 ]
1283->loss: [ 0.010013369843363762 ]
1284->loss: [ 0.009981896728277206 ]
Optimized parameters:
a = 3.0083558559417725
v0 = 9.97811508178711
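The fitted values are close to the generating parameters a = 3 and v0 = 10 noted in the test-data comment of Section 3.3. As an optional check (a sketch, assuming arg_a, arg_v0, x_test and y_test from 3.2.1 are still in scope), the fitted model can be evaluated at the observed times and compared with the measurements:

# Evaluate the fitted model at the observed times and compare with the observations
with torch.no_grad():
    y_fit = model_function(arg_a, arg_v0, x_test)
    # Columns: time t, observed distance, fitted distance
    print(torch.cat([x_test, y_test, y_fit], dim=1))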

3.3 Solving the Mathematical Model (Method 2)

3.3.1 Code

In this variant the model is wrapped in an nn.Module subclass, trained with SGD, and stopped early by a hand-written EarlyStopping helper.

import torch
import torch.nn as nn
import torch.optim as optim

# Custom loss function (mean squared error)
def custom_loss(y_true, y_pred):
    return torch.mean(torch.square(y_true - y_pred))

# Define the model class
class ModelFunction(nn.Module):
    def __init__(self):
        super(ModelFunction, self).__init__()
        # Initialize the parameters; a[0] and a[1] both start at 1.0
        self.a = nn.Parameter(torch.ones(2))

    def forward(self, inputs):
        # Compute the quadratic function output
        return 0.5 * self.a[0] * inputs**2 + self.a[1] * inputs

# Build the model
model = ModelFunction()

# Define the optimizer
custom_optimizer = optim.SGD(model.parameters(), lr=0.001)

# Prepare the test data, generated from f(x) = 0.5*a*x*x + b*x (a=3, b=10)
x_test = torch.tensor([[0.0], [1.0], [2.0], [3.0], [4.0], [5.0], [6.0], [7.0], [8.0], [9.0], [10.0]])
y_test = torch.tensor([[0.0], [11.5], [26], [43.5], [64.12], [87.57], [114.12], [143.5], [176.3], [211.5], [250.12]])

# EarlyStopping callback (has to be implemented manually in plain PyTorch)
class EarlyStopping:
    def __init__(self, patience=10, min_delta=0.001):
        self.patience = patience
        self.min_delta = min_delta
        self.counter = 0
        self.best_loss = float('inf')

    def __call__(self, current_loss):
        # If the loss did not improve by at least min_delta, increase the patience counter
        if self.best_loss - current_loss < self.min_delta:
            self.counter += 1
            print(f"patience counter = {self.counter} , current_delta( {self.best_loss - current_loss} ) < min_delta( {self.min_delta} ) ")
        else:
            self.best_loss = current_loss
            self.counter = 0

        # Stop once the counter reaches the patience limit
        return self.counter >= self.patience

# Train the model
early_stopping_callback = EarlyStopping(patience=10, min_delta=0.001)
for epoch in range(2000):
    model.train()
    outputs = model(x_test)
    loss = custom_loss(outputs, y_test)
    custom_optimizer.zero_grad()
    loss.backward()
    custom_optimizer.step()
    print(f"Epoch {epoch+1}/2000 - loss: {loss.item()}")
    if early_stopping_callback(loss.item()):
        print(f"Early stopping on epoch {epoch+1}")
        break

# Evaluate the model
model.eval()
with torch.no_grad():
    loss = custom_loss(model(x_test), y_test)
    print(f"The loss on the test data is: {loss.item()}")

# Print the learned parameters
print([param.detach().numpy() for param in model.parameters()])

3.3.2 Log

Epoch 1140/2000 - loss: 0.018619749695062637
patience counter = 7 , current_delta( 0.0007340926676988602 ) < min_delta( 0.001 ) 
Epoch 1141/2000 - loss: 0.018518483266234398
patience counter = 8 , current_delta( 0.0008353590965270996 ) < min_delta( 0.001 ) 
Epoch 1142/2000 - loss: 0.018417643383145332
patience counter = 9 , current_delta( 0.0009361989796161652 ) < min_delta( 0.001 ) 
Epoch 1143/2000 - loss: 0.018318505957722664
Epoch 1144/2000 - loss: 0.0182188767939806
patience counter = 1 , current_delta( 9.962916374206543e-05 ) < min_delta( 0.001 ) 
Epoch 1145/2000 - loss: 0.018120266497135162
patience counter = 2 , current_delta( 0.00019823946058750153 ) < min_delta( 0.001 ) 
Epoch 1146/2000 - loss: 0.018023796379566193
patience counter = 3 , current_delta( 0.00029470957815647125 ) < min_delta( 0.001 ) 
Epoch 1147/2000 - loss: 0.01792696863412857
patience counter = 4 , current_delta( 0.0003915373235940933 ) < min_delta( 0.001 ) 
Epoch 1148/2000 - loss: 0.01783156581223011
patience counter = 5 , current_delta( 0.0004869401454925537 ) < min_delta( 0.001 ) 
Epoch 1149/2000 - loss: 0.017736775800585747
patience counter = 6 , current_delta( 0.0005817301571369171 ) < min_delta( 0.001 ) 
Epoch 1150/2000 - loss: 0.01764201745390892
patience counter = 7 , current_delta( 0.0006764885038137436 ) < min_delta( 0.001 ) 
Epoch 1151/2000 - loss: 0.01754889264702797
patience counter = 8 , current_delta( 0.0007696133106946945 ) < min_delta( 0.001 ) 
Epoch 1152/2000 - loss: 0.017455540597438812
patience counter = 9 , current_delta( 0.0008629653602838516 ) < min_delta( 0.001 ) 
Epoch 1153/2000 - loss: 0.01736423559486866
patience counter = 10 , current_delta( 0.0009542703628540039 ) < min_delta( 0.001 ) 
Early stopping on epoch 1153
The loss on the test data is: 0.017272843047976494
[array([3.0155222, 9.948215 ], dtype=float32)]
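Once trained, the model can also be used to predict distances at times that were not observed. A minimal sketch, assuming the model object from 3.3.1 is still in scope (the time values below are made up for illustration):

# Predict distances at new, unobserved times with the trained model
model.eval()
t_new = torch.tensor([[11.0], [12.0], [15.0]])  # hypothetical time values
with torch.no_grad():
    print(model(t_new))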