PyTorch in Practice, Part 2: Regression

This post walks through fitting a linear regression model to a dataset with gradient descent. By computing the error and iteratively adjusting the weight and intercept, we arrive at the parameters that minimize the mean squared error over the given points. The code reads the data from a CSV file, runs the gradient-descent iterations, and prints the model parameters and error after 100 iterations.

import numpy as np


# y = wx + b


# Compute the mean squared error over all points
def compute_error_for_line_given_points(b, w, points):
    totalError = 0
    for i in range(0, len(points)):  # range() is half-open: [0, len(points))
        x = points[i, 0]
        y = points[i, 1]
        totalError += (y - (w * x + b)) ** 2
    return totalError / float(len(points))
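The loop above can also be written as a vectorized NumPy expression, which is both shorter and faster on large arrays. This is a sketch; `compute_error_vectorized` is a name introduced here, not part of the original post:

```python
import numpy as np


def compute_error_vectorized(b, w, points):
    # points is an (N, 2) array: column 0 holds x, column 1 holds y
    x = points[:, 0]
    y = points[:, 1]
    # Mean of the squared residuals, same quantity as the loop version
    return np.mean((y - (w * x + b)) ** 2)
```

Both versions return the same value for the same `(b, w, points)`.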


# One gradient-descent step: move b and w against the MSE gradients
def step_gradient(b_current, w_current, points, learningRate):
    b_gradient = 0
    w_gradient = 0
    N = float(len(points))
    for i in range(0, len(points)):
        x = points[i, 0]
        y = points[i, 1]
        # d(MSE)/db = (2/N) * sum(wx + b - y)
        b_gradient += 2 / N * ((w_current * x) + b_current - y)
        # d(MSE)/dw = (2/N) * sum((wx + b - y) * x)
        w_gradient += 2 / N * ((w_current * x) + b_current - y) * x
    new_b = b_current - learningRate * b_gradient
    new_w = w_current - learningRate * w_gradient
    return [new_b, new_w]
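A quick way to sanity-check the analytic gradients used in `step_gradient` is a central-difference approximation: perturb each parameter by a small `eps` and measure how the loss changes. The sketch below assumes the same `(N, 2)` points layout; `numerical_gradients` is a hypothetical helper, not part of the original post:

```python
import numpy as np


def numerical_gradients(b, w, points, eps=1e-6):
    # Central-difference estimates of d(MSE)/db and d(MSE)/dw
    def loss(b_, w_):
        x, y = points[:, 0], points[:, 1]
        return np.mean((y - (w_ * x + b_)) ** 2)

    db = (loss(b + eps, w) - loss(b - eps, w)) / (2 * eps)
    dw = (loss(b, w + eps) - loss(b, w - eps)) / (2 * eps)
    return db, dw
```

If the hand-derived gradients are correct, these estimates should agree with them to several decimal places.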


# Run gradient descent for a fixed number of iterations
def gradient_descent_runner(points, starting_b, starting_w, learning_rate, num_iterations):
    b = starting_b
    w = starting_w
    for i in range(num_iterations):  # iterate num_iterations times, not len(points)
        b, w = step_gradient(b, w, np.array(points), learning_rate)
    return [b, w]


def run():
    points = np.genfromtxt("data.csv", delimiter=",")
    learning_rate = 0.0001
    initial_b = 0
    initial_w = 0
    num_iterations = 100
    print("Starting gradient descent at b = {0} , w = {1}, error = {2}"
          .format(initial_b, initial_w,
                  compute_error_for_line_given_points(initial_b, initial_w, points)))
    print("Running...")
    [b, w] = gradient_descent_runner(points, initial_b, initial_w, learning_rate, num_iterations)
    print("After {0} iterations b = {1}, w = {2}, error = {3}"
          .format(num_iterations, b, w,
                  compute_error_for_line_given_points(b, w, points)))


run()

Run output: (the screenshot did not survive extraction)

data.csv: 100 rows of comma-separated (x, y) pairs. The column delimiters were lost when this post was extracted, fusing each pair into a single number, so the raw listing is omitted here.