Writing mini-batch gradient descent linear regression using only the Python standard library (no numpy)

I recently came across an interesting exercise: implement linear regression trained with mini-batch gradient descent, using only the Python standard library.

Having grown used to computation libraries like numpy, being restricted to the standard library takes a little adjustment, so let's work through the implementation step by step.
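For reference, the model and update rule that the code below implements can be written as (notation mine, not from the original exercise):

$$\hat{y}_i = w \cdot x_i + b, \qquad L = \frac{1}{2m}\sum_{i=1}^{m}\left(y_i - \hat{y}_i\right)^2, \qquad w \leftarrow w - \eta\,\nabla_w L, \quad b \leftarrow b - \eta\,\nabla_b L$$

where m is the mini-batch size and η (lr in the code) is the learning rate; each mini-batch contributes one parameter update.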

import random
from typing import List, Tuple


# Generate test data: features drawn uniformly from [-10, 10],
# targets from a linear model plus uniform noise
def generate_data(num_samples: int, weights: List[float], bias: float,
                  noise: float = 0.1) -> Tuple[List[List[float]], List[float]]:
    X = [[random.uniform(-10, 10) for _ in range(len(weights))] for _ in range(num_samples)]
    y = [sum(w * x for w, x in zip(weights, x_i)) + bias + random.uniform(-noise, noise) for x_i in X]
    return X, y
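# e.g. (quick sanity check, not in the original post):
#   X, y = generate_data(3, [1.0, 2.0], 0.5, noise=0.0)
#   with zero noise, every y[i] equals 1.0 * X[i][0] + 2.0 * X[i][1] + 0.5 exactly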

# Compute the loss: half mean squared error, consistent with the gradients below
def mse(y_true: List[float], y_pred: List[float]) -> float:
    return 0.5 * sum((yt - yp) ** 2 for yt, yp in zip(y_true, y_pred)) / len(y_true)
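# e.g. mse([1.0, 2.0], [1.0, 1.0]) == 0.5 * (0.0 + 1.0) / 2 == 0.25  (sanity check, not in the original)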

# Transpose a matrix
def transpose(mat: List[List[float]]):
    row, col = len(mat), len(mat[0])
    # fix a column, walk over the rows
    result = [
        [mat[r][c] for r in range(row)] for c in range(col)
    ]
    return result
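# e.g. transpose([[1, 2], [3, 4], [5, 6]]) == [[1, 3, 5], [2, 4, 6]]  (sanity check, not in the original)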

# Matrix-vector product: dot each row of mat with vec
def matmul(mat: List[List[float]], vec: List[float]):
    return [sum(r * c for r, c in zip(row, vec)) for row in mat]
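# e.g. matmul([[1, 2], [3, 4]], [1, 1]) == [3, 7]  (sanity check, not in the original)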

# Compute the gradients of the half-MSE loss w.r.t. the weights and bias
def compute_grad(y_true_batch: List[float], y_pred_batch: List[float], x_batch: List[List[float]]):
    batch_size = len(y_true_batch)
    residual = [yt - yp for yt, yp in zip(y_true_batch, y_pred_batch)]
    # 根据 y = x @ w + b
    # grad_w = -x.T @ residual
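    # Derivation (added for clarity): with L = (1/(2m)) * sum_i (y_i - yhat_i)^2,
    #   dL/dw_j = -(1/m) * sum_i (y_i - yhat_i) * x_ij   =>  grad_w = -x.T @ residual / m
    #   dL/db   = -(1/m) * sum_i (y_i - yhat_i)          =>  grad_b = -sum(residual) / m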
    grad_w = matmul(transpose(x_batch), residual)
    grad_w = [-gw / batch_size for gw in grad_w]
    grad_b = -sum(residual) / batch_size
    
    # grad_w: List[float]
    # grad_b: float
    return grad_w, grad_b

# Run training
def train():
    lr = 0.01
    epochs = 50
    batch_size = 16
    dim_feat = 3
    num_samples = 500
    
    weights = [random.random() * 0.1 for _ in range(dim_feat)]
    bias = random.random() * 0.1
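    # Note: these weights / bias serve both as the ground truth passed to
    # generate_data below and as the initial parameters for training, so
    # optimization starts essentially at the optimum.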
    
    print('original params')
    print('w:', weights)
    print('b:', bias)
    
    X, y = generate_data(num_samples, weights, bias, noise=0.1)
    
    for epoch in range(epochs):
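        # Batches are taken in a fixed order every epoch; a more typical
        # mini-batch setup would shuffle the sample indices here first.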
        for i in range(0, num_samples, batch_size):
            x_batch = X[i:i+batch_size]
            y_batch = y[i:i+batch_size]
            
            y_pred = [item + bias for item in matmul(x_batch, weights)]
            
            loss = mse(y_batch, y_pred)
            
            grad_w, grad_b = compute_grad(y_batch, y_pred, x_batch)
            
            weights = [w - lr * gw for w, gw in zip(weights, grad_w)]
            bias -= lr * grad_b
            
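        # 'loss' here is from the last mini-batch of the epoch (only 4 samples,
        # since 500 is not divisible by 16)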
        print(f'Epoch: {epoch + 1}, Loss = {loss:.3f}')
        

    print('trained params')
    print('w:', weights)
    print('b:', bias)
    
train()

The output is as follows:

original params
w: [0.04845598598148951, 0.007741816562531545, 0.02436678108587098]
b: 0.01644073086522535
Epoch: 1, Loss = 0.000
Epoch: 2, Loss = 0.000
Epoch: 3, Loss = 0.000
Epoch: 4, Loss = 0.000
Epoch: 5, Loss = 0.000
Epoch: 6, Loss = 0.000
Epoch: 7, Loss = 0.000
Epoch: 8, Loss = 0.000
Epoch: 9, Loss = 0.000
Epoch: 10, Loss = 0.000
Epoch: 11, Loss = 0.000
Epoch: 12, Loss = 0.000
Epoch: 13, Loss = 0.000
Epoch: 14, Loss = 0.000
Epoch: 15, Loss = 0.000
Epoch: 16, Loss = 0.000
Epoch: 17, Loss = 0.000
Epoch: 18, Loss = 0.000
Epoch: 19, Loss = 0.000
Epoch: 20, Loss = 0.000
Epoch: 21, Loss = 0.000
Epoch: 22, Loss = 0.000
Epoch: 23, Loss = 0.000
Epoch: 24, Loss = 0.000
Epoch: 25, Loss = 0.000
Epoch: 26, Loss = 0.000
Epoch: 27, Loss = 0.000
Epoch: 28, Loss = 0.000
Epoch: 29, Loss = 0.000
Epoch: 30, Loss = 0.000
Epoch: 31, Loss = 0.000
Epoch: 32, Loss = 0.000
Epoch: 33, Loss = 0.000
Epoch: 34, Loss = 0.000
Epoch: 35, Loss = 0.000
Epoch: 36, Loss = 0.000
Epoch: 37, Loss = 0.000
Epoch: 38, Loss = 0.000
Epoch: 39, Loss = 0.000
Epoch: 40, Loss = 0.000
Epoch: 41, Loss = 0.000
Epoch: 42, Loss = 0.000
Epoch: 43, Loss = 0.000
Epoch: 44, Loss = 0.000
Epoch: 45, Loss = 0.000
Epoch: 46, Loss = 0.000
Epoch: 47, Loss = 0.000
Epoch: 48, Loss = 0.000
Epoch: 49, Loss = 0.000
Epoch: 50, Loss = 0.000
trained params
w: [0.05073234817652038, 0.007306286342947243, 0.023218625946243507]
b: 0.016648404245261664

As you can see, the result looks decent: the trained weights and bias stay close to the original parameters, and the loss remains small throughout.
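One caveat: in the script above the trainable weights and bias are initialized to the exact values that were used to generate the data, so training starts essentially at the optimum, which is why the loss is tiny from the very first epoch. To make the demo actually recover unknown parameters, a small variation (a sketch of mine reusing the functions above; true_w and true_b are made-up ground-truth values, not from the original post) is to keep the true parameters separate from the initial ones:

# Sketch (assumptions mine): generate data from made-up ground-truth parameters,
# then train starting from zeros instead of the truth.
true_w = [2.0, -3.0, 0.5]      # hypothetical ground-truth weights
true_b = 1.0                   # hypothetical ground-truth bias
X, y = generate_data(500, true_w, true_b, noise=0.1)

weights = [0.0] * len(true_w)  # start away from the truth
bias = 0.0
lr, epochs, batch_size = 0.01, 50, 16

for epoch in range(epochs):
    for i in range(0, len(X), batch_size):
        x_batch, y_batch = X[i:i+batch_size], y[i:i+batch_size]
        y_pred = [item + bias for item in matmul(x_batch, weights)]
        grad_w, grad_b = compute_grad(y_batch, y_pred, x_batch)
        weights = [w - lr * gw for w, gw in zip(weights, grad_w)]
        bias -= lr * grad_b

print('learned w:', weights)   # should end up close to true_w
print('learned b:', bias)      # should end up close to true_b

With features in [-10, 10] and lr = 0.01 the updates stay stable, and over 50 epochs the learned parameters should drift toward true_w and true_b.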
