(Course Notes) Introduction to Deep Learning - 4 - BackPropagation

I. Supplementary Notes on Backpropagation

        [Note 1] What does the magnitude of LossFcn's partial derivative with respect to a network parameter mean?

        The magnitude of LossFcn's partial derivative with respect to a network parameter measures how sensitive the loss value is to that parameter. If a parameter has a large partial derivative, the loss responds strongly to small changes in that parameter. In other words, that parameter plays a more important role in correcting the error and improving network performance.
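
        A minimal sketch of this idea (the toy loss below is made up for illustration): two parameters feed one loss, and their gradient magnitudes reveal which one the loss is more sensitive to.

        import torch

        # Two parameters; the loss depends much more strongly on `a` than on `b`
        a = torch.tensor(1.0, requires_grad=True)
        b = torch.tensor(1.0, requires_grad=True)

        loss = 10 * a ** 2 + 0.1 * b ** 2
        loss.backward()

        # dLoss/da = 20*a = 20, dLoss/db = 0.2*b = 0.2: a small step in `a`
        # moves the loss roughly 100 times more than the same step in `b`
        print(a.grad.item(), b.grad.item())  # roughly 20.0 and 0.2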

        [Note 2] With respect to which network parameters does LossFcn need partial derivatives?

        In short, there are three kinds: the weights, the biases, and the inputs.

        [GPT Explanation]

        Backpropagation is a method for computing gradient information for the parameters in every layer of a neural network. The algorithm computes the partial derivatives of the loss function with respect to the network parameters so that the parameters can be updated to minimize the loss.

        For a typical neural network model, backpropagation needs the partial derivatives of the loss function with respect to the following parameters:

        (1) Weights: every connection weight in the network needs its partial derivative with respect to the loss function, including the weights from the input layer to the hidden layers and from the hidden layers to the output layer.

        (2) Biases: every neuron has a bias term whose partial derivative with respect to the loss function must also be computed. The bias acts as the neuron's threshold, shifting its activation.

        (3) Input features: if the loss function depends on the input features, their partial derivatives can also be computed. In some cases this is useful for feature selection and importance estimation.

        Using the chain rule, backpropagation computes these partial derivatives layer by layer, propagating gradients from the output layer back toward the input layer. At each layer, the parameter gradients are obtained from that layer's forward results together with the incoming gradient.
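
        As a small hedged sketch of the chain rule at work (the two-stage function is made up for illustration), autograd's backward pass reproduces the hand-derived derivative:

        import torch

        x = torch.tensor(2.0, requires_grad=True)

        # Two-stage computation: u = x**2, then loss = 3*u
        u = x ** 2
        loss = 3 * u
        loss.backward()

        # Chain rule by hand: dloss/dx = (dloss/du) * (du/dx) = 3 * 2x = 12
        print(x.grad.item())  # 12.0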

        Note that backpropagation can only compute partial derivatives of a differentiable loss with respect to the parameters. Activation functions that are not differentiable everywhere (e.g., ReLU, whose derivative is undefined at zero) therefore have to be handled with an approximation such as a subgradient.
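
        For instance, PyTorch resolves ReLU's non-differentiable point by defining the derivative at x = 0 to be 0 (a subgradient choice), as a quick check shows:

        import torch

        x = torch.tensor(0.0, requires_grad=True)
        y = torch.relu(x)
        y.backward()

        # At the kink x = 0, autograd uses the subgradient 0 instead of failing
        print(x.grad.item())  # 0.0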

II. Code Implementation and Recorded Results

import torch

# This section trains the model with backpropagation, then predicts y for x = 4

x_data_set = [1.0, 2.0, 3.0]
y_data_set = [2.0, 4.0, 6.0]

# First create the initial tensor for w and mark it as requiring gradients
w = torch.tensor([1.0])
w.requires_grad = True


# Model definition
def forward(x):
    system_model = w * x
    return system_model


# Loss function definition; y here is the label y_label
def loss(x, y):
    y_predict = forward(x)
    loss_value = (y_predict - y) ** 2
    return loss_value


print(f"还未开始训练之时,x = 4,y的预测值为 {forward(4).item()}")

alpha = 0.03
loss_value = 0

for epoch in range(30):

    for x_data, y_label in zip(x_data_set, y_data_set):
        # Compute the current loss value
        loss_value = loss(x_data, y_label)

        # Compute the gradient of the loss w.r.t. w via autograd
        # After backward() runs, the gradient tensor of Loss w.r.t. w can be read from w.grad
        loss_value.backward()

        # Update w without building a computation graph; writing w = w - alpha * w.grad
        # would record the update in the graph and turn w into a non-leaf tensor
        # (original form) w.data = w.data - alpha * w.grad.data
        w.data = w.data.detach() - alpha * w.grad.data

        # Remember to zero w's gradient after each step, otherwise gradients accumulate
        w.grad.data.zero_()

    print(f"训练次数Epoch: {epoch + 1}, 损失值Loss: {loss_value.item():.3f}, 此时权重w: {w.item():.3f}")

print(f"训练完毕,权重w更新为: {w.item():.3f},此时y的预测值为 {forward(4).item()}")


[Recorded Results]

Before training, the prediction for x = 4 is 4.0
Epoch: 1, Loss: 4.593, w: 1.671
Epoch: 2, Loss: 0.496, w: 1.892
Epoch: 3, Loss: 0.054, w: 1.965
Epoch: 4, Loss: 0.006, w: 1.988
Epoch: 5, Loss: 0.001, w: 1.996
Epoch: 6, Loss: 0.000, w: 1.999
Epoch: 7, Loss: 0.000, w: 2.000
Epoch: 8, Loss: 0.000, w: 2.000
Epoch: 9, Loss: 0.000, w: 2.000
Epoch: 10, Loss: 0.000, w: 2.000
Epoch: 11, Loss: 0.000, w: 2.000
Epoch: 12, Loss: 0.000, w: 2.000
Epoch: 13, Loss: 0.000, w: 2.000
Epoch: 14, Loss: 0.000, w: 2.000
Epoch: 15, Loss: 0.000, w: 2.000
Epoch: 16, Loss: 0.000, w: 2.000
Epoch: 17, Loss: 0.000, w: 2.000
Epoch: 18, Loss: 0.000, w: 2.000
Epoch: 19, Loss: 0.000, w: 2.000
Epoch: 20, Loss: 0.000, w: 2.000
Epoch: 21, Loss: 0.000, w: 2.000
Epoch: 22, Loss: 0.000, w: 2.000
Epoch: 23, Loss: 0.000, w: 2.000
Epoch: 24, Loss: 0.000, w: 2.000
Epoch: 25, Loss: 0.000, w: 2.000
Epoch: 26, Loss: 0.000, w: 2.000
Epoch: 27, Loss: 0.000, w: 2.000
Epoch: 28, Loss: 0.000, w: 2.000
Epoch: 29, Loss: 0.000, w: 2.000
Epoch: 30, Loss: 0.000, w: 2.000
Training finished; w updated to 2.000, prediction for x = 4: 8.0
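
As a quick sanity check on the very first update (a hand computation using the first sample x = 1.0, y = 2.0 and the initial w = 1.0): dLoss/dw = 2x(wx - y) = 2 * 1 * (1 - 2) = -2, so the first step gives w = 1 - 0.03 * (-2) = 1.06. A minimal verification:

import torch

w = torch.tensor([1.0], requires_grad=True)
loss = (w * 1.0 - 2.0) ** 2  # first training sample: x = 1.0, y = 2.0
loss.backward()

print(w.grad.item())               # -2.0, matching dLoss/dw = 2x(wx - y)
print((w - 0.03 * w.grad).item())  # 1.06, the weight after the first update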

III. Backpropagation Implementation with (w1, w2, b)

import torch

x_data_set = [1.0, 2.0, 3.0]
y_data_set = [2.0, 4.0, 6.0]

# First create the initial tensors for w1, w2, b and mark them as requiring gradients
w1 = torch.tensor([1.0])
w2 = torch.tensor([1.0])
b = torch.tensor([1.0])
w1.requires_grad = True
w2.requires_grad = True
b.requires_grad = True


# Model definition
def forward(x):
    sys_model = w1 * (x ** 2) + w2 * x + b
    return sys_model


# Loss function definition
def loss(x, y):
    y_predict = forward(x)
    loss_value = (y_predict - y) ** 2
    return loss_value


print(f"还未训练之时,预测值y: {forward(4).item()}")

alpha = 0.03
loss_value = 0

for epoch in range(500):
    for x_data, y_label in zip(x_data_set, y_data_set):
        # Compute the current loss value
        loss_value = loss(x_data, y_label)

        # Compute the gradients of the loss w.r.t. w1, w2, b via autograd; afterwards accessible through .grad
        loss_value.backward()

        # Update the weight/bias tensors without building a computation graph; writing X = X - alpha * X.grad would record the update in the graph and affect X
        w1.data = w1.data.detach() - alpha * w1.grad.data
        w2.data = w2.data.detach() - alpha * w2.grad.data
        b.data = b.data.detach() - alpha * b.grad.data

        # Remember to zero each gradient after every step, otherwise gradients accumulate
        w1.grad.data.zero_()
        w2.grad.data.zero_()
        b.grad.data.zero_()

    print(f"训练次数Epoch:{epoch+1}, 损失值Loss:{loss_value.item():.3f}, 各参数数值w1:{w1.item():.3f}, w2:{w2.item():.3f}, b:{b.item():.3f}")

print(f"训练完毕,权重w1更新为: {w1.item():.3f},权重w2更新为: {w2.item():.3f},偏置b更新为: {b.item():.3f},此时y的预测值为 {forward(4).item()}")

[Recorded Results]

Before training, predicted y: 21.0
Epoch: 1, Loss: 0.190, w1: 0.556, w2: 0.709, b: 0.811
Epoch: 2, Loss: 0.633, w1: 0.021, w2: 0.510, b: 0.734
Epoch: 3, Loss: 3.268, w1: -0.466, w2: 0.452, b: 0.781
Epoch: 4, Loss: 5.533, w1: -0.783, w2: 0.542, b: 0.933
Epoch: 5, Loss: 5.345, w1: -0.857, w2: 0.752, b: 1.147
Epoch: 6, Loss: 2.971, w1: -0.680, w2: 1.024, b: 1.364
Epoch: 7, Loss: 0.581, w1: -0.310, w2: 1.287, b: 1.527
Epoch: 8, Loss: 0.096, w1: 0.152, w2: 1.475, b: 1.596
Epoch: 9, Loss: 1.479, w1: 0.582, w2: 1.544, b: 1.557
Epoch: 10, Loss: 3.000, w1: 0.872, w2: 1.483, b: 1.424
Epoch: 11, Loss: 3.043, w1: 0.956, w2: 1.314, b: 1.234
Epoch: 12, Loss: 1.622, w1: 0.820, w2: 1.087, b: 1.037
Epoch: 13, Loss: 0.214, w1: 0.510, w2: 0.862, b: 0.883
Epoch: 14, Loss: 0.219, w1: 0.113, w2: 0.697, b: 0.807
Epoch: 15, Loss: 1.624, w1: -0.268, w2: 0.633, b: 0.825
Epoch: 16, Loss: 3.085, w1: -0.534, w2: 0.682, b: 0.926
Epoch: 17, Loss: 3.265, w1: -0.625, w2: 0.828, b: 1.081
Epoch: 18, Loss: 2.051, w1: -0.524, w2: 1.028, b: 1.245
Epoch: 19, Loss: 0.558, w1: -0.267, w2: 1.231, b: 1.377
Epoch: 20, Loss: 0.004, w1: 0.074, w2: 1.386, b: 1.442
Epoch: 21, Loss: 0.603, w1: 0.408, w2: 1.456, b: 1.427
Epoch: 22, Loss: 1.504, w1: 0.650, w2: 1.428, b: 1.337
Epoch: 23, Loss: 1.706, w1: 0.742, w2: 1.315, b: 1.199
Epoch: 24, Loss: 1.025, w1: 0.669, w2: 1.149, b: 1.048
Epoch: 25, Loss: 0.192, w1: 0.456, w2: 0.977, b: 0.923
Epoch: 26, Loss: 0.067, w1: 0.163, w2: 0.843, b: 0.853
Epoch: 27, Loss: 0.797, w1: -0.131, w2: 0.781, b: 0.853
Epoch: 28, Loss: 1.710, w1: -0.351, w2: 0.803, b: 0.918
Epoch: 29, Loss: 1.977, w1: -0.445, w2: 0.902, b: 1.029
Epoch: 30, Loss: 1.388, w1: -0.395, w2: 1.048, b: 1.153
Epoch: 31, Loss: 0.485, w1: -0.220, w2: 1.204, b: 1.257
Epoch: 32, Loss: 0.009, w1: 0.030, w2: 1.330, b: 1.316
Epoch: 33, Loss: 0.218, w1: 0.287, w2: 1.396, b: 1.315
Epoch: 34, Loss: 0.721, w1: 0.485, w2: 1.389, b: 1.256
Epoch: 35, Loss: 0.925, w1: 0.577, w2: 1.315, b: 1.156
Epoch: 36, Loss: 0.623, w1: 0.543, w2: 1.196, b: 1.041
Epoch: 37, Loss: 0.151, w1: 0.399, w2: 1.065, b: 0.941
Epoch: 38, Loss: 0.016, w1: 0.186, w2: 0.958, b: 0.879
Epoch: 39, Loss: 0.389, w1: -0.039, w2: 0.901, b: 0.868
Epoch: 40, Loss: 0.944, w1: -0.218, w2: 0.907, b: 0.909
Epoch: 41, Loss: 1.191, w1: -0.308, w2: 0.972, b: 0.986
Epoch: 42, Loss: 0.925, w1: -0.289, w2: 1.079, b: 1.079
Epoch: 43, Loss: 0.394, w1: -0.173, w2: 1.197, b: 1.161
Epoch: 44, Loss: 0.034, w1: 0.008, w2: 1.299, b: 1.211
Epoch: 45, Loss: 0.064, w1: 0.204, w2: 1.358, b: 1.218
Epoch: 46, Loss: 0.326, w1: 0.364, w2: 1.364, b: 1.180
Epoch: 47, Loss: 0.482, w1: 0.449, w2: 1.316, b: 1.108
Epoch: 48, Loss: 0.362, w1: 0.441, w2: 1.232, b: 1.022
Epoch: 49, Loss: 0.107, w1: 0.346, w2: 1.134, b: 0.942
Epoch: 50, Loss: 0.003, w1: 0.192, w2: 1.049, b: 0.889
Epoch: 51, Loss: 0.189, w1: 0.021, w2: 0.999, b: 0.873
Epoch: 52, Loss: 0.522, w1: -0.123, w2: 0.995, b: 0.896
Epoch: 53, Loss: 0.715, w1: -0.204, w2: 1.038, b: 0.950
Epoch: 54, Loss: 0.610, w1: -0.205, w2: 1.114, b: 1.018
Epoch: 55, Loss: 0.306, w1: -0.129, w2: 1.204, b: 1.081
Epoch: 56, Loss: 0.052, w1: 0.000, w2: 1.285, b: 1.124
Epoch: 57, Loss: 0.011, w1: 0.148, w2: 1.337, b: 1.134
Epoch: 58, Loss: 0.135, w1: 0.276, w2: 1.349, b: 1.111
Epoch: 59, Loss: 0.239, w1: 0.351, w2: 1.321, b: 1.060
Epoch: 60, Loss: 0.201, w1: 0.358, w2: 1.262, b: 0.995
Epoch: 61, Loss: 0.069, w1: 0.297, w2: 1.189, b: 0.933
Epoch: 62, Loss: 0.000, w1: 0.187, w2: 1.123, b: 0.887
Epoch: 63, Loss: 0.093, w1: 0.059, w2: 1.080, b: 0.869
Epoch: 64, Loss: 0.290, w1: -0.055, w2: 1.071, b: 0.881
Epoch: 65, Loss: 0.431, w1: -0.126, w2: 1.098, b: 0.917
Epoch: 66, Loss: 0.400, w1: -0.138, w2: 1.152, b: 0.966
Epoch: 67, Loss: 0.231, w1: -0.091, w2: 1.220, b: 1.015
Epoch: 68, Loss: 0.060, w1: 0.001, w2: 1.283, b: 1.050
Epoch: 69, Loss: 0.000, w1: 0.111, w2: 1.328, b: 1.062
Epoch: 70, Loss: 0.049, w1: 0.212, w2: 1.343, b: 1.048
Epoch: 71, Loss: 0.111, w1: 0.277, w2: 1.328, b: 1.012
Epoch: 72, Loss: 0.106, w1: 0.292, w2: 1.287, b: 0.964
Epoch: 73, Loss: 0.041, w1: 0.254, w2: 1.234, b: 0.915
Epoch: 74, Loss: 0.000, w1: 0.177, w2: 1.183, b: 0.877
Epoch: 75, Loss: 0.047, w1: 0.081, w2: 1.147, b: 0.859
Epoch: 76, Loss: 0.164, w1: -0.009, w2: 1.137, b: 0.863
Epoch: 77, Loss: 0.261, w1: -0.069, w2: 1.153, b: 0.886
Epoch: 78, Loss: 0.262, w1: -0.087, w2: 1.191, b: 0.922
Epoch: 79, Loss: 0.171, w1: -0.059, w2: 1.241, b: 0.958
Epoch: 80, Loss: 0.060, w1: 0.005, w2: 1.291, b: 0.987
Epoch: 81, Loss: 0.003, w1: 0.088, w2: 1.328, b: 0.998
Epoch: 82, Loss: 0.013, w1: 0.166, w2: 1.345, b: 0.990
Epoch: 83, Loss: 0.047, w1: 0.221, w2: 1.337, b: 0.965
Epoch: 84, Loss: 0.051, w1: 0.239, w2: 1.310, b: 0.929
Epoch: 85, Loss: 0.022, w1: 0.217, w2: 1.272, b: 0.891
Epoch: 86, Loss: 0.000, w1: 0.164, w2: 1.233, b: 0.860
Epoch: 87, Loss: 0.025, w1: 0.093, w2: 1.204, b: 0.843
Epoch: 88, Loss: 0.095, w1: 0.023, w2: 1.193, b: 0.842
Epoch: 89, Loss: 0.160, w1: -0.027, w2: 1.202, b: 0.857
Epoch: 90, Loss: 0.173, w1: -0.047, w2: 1.229, b: 0.882
Epoch: 91, Loss: 0.125, w1: -0.032, w2: 1.266, b: 0.909
Epoch: 92, Loss: 0.055, w1: 0.012, w2: 1.305, b: 0.931
Epoch: 93, Loss: 0.008, w1: 0.073, w2: 1.336, b: 0.942
Epoch: 94, Loss: 0.002, w1: 0.133, w2: 1.352, b: 0.938
Epoch: 95, Loss: 0.017, w1: 0.178, w2: 1.350, b: 0.921
Epoch: 96, Loss: 0.022, w1: 0.197, w2: 1.332, b: 0.894
Epoch: 97, Loss: 0.010, w1: 0.186, w2: 1.305, b: 0.865
Epoch: 98, Loss: 0.000, w1: 0.150, w2: 1.276, b: 0.839
Epoch: 99, Loss: 0.015, w1: 0.097, w2: 1.253, b: 0.823
Epoch: 100, Loss: 0.056, w1: 0.044, w2: 1.242, b: 0.820
......

......
Epoch: 400, Loss: 0.002, w1: 0.030, w2: 1.747, b: 0.313
Epoch: 401, Loss: 0.002, w1: 0.030, w2: 1.748, b: 0.312
Epoch: 402, Loss: 0.002, w1: 0.030, w2: 1.749, b: 0.311
Epoch: 403, Loss: 0.002, w1: 0.029, w2: 1.750, b: 0.310
Epoch: 404, Loss: 0.002, w1: 0.029, w2: 1.751, b: 0.309
Epoch: 405, Loss: 0.002, w1: 0.029, w2: 1.751, b: 0.308
Epoch: 406, Loss: 0.002, w1: 0.029, w2: 1.752, b: 0.306
Epoch: 407, Loss: 0.002, w1: 0.029, w2: 1.753, b: 0.305
Epoch: 408, Loss: 0.002, w1: 0.029, w2: 1.754, b: 0.304
Epoch: 409, Loss: 0.002, w1: 0.029, w2: 1.755, b: 0.303
Epoch: 410, Loss: 0.001, w1: 0.029, w2: 1.756, b: 0.302
Epoch: 411, Loss: 0.001, w1: 0.029, w2: 1.756, b: 0.301
Epoch: 412, Loss: 0.001, w1: 0.029, w2: 1.757, b: 0.300
Epoch: 413, Loss: 0.001, w1: 0.028, w2: 1.758, b: 0.299
Epoch: 414, Loss: 0.001, w1: 0.028, w2: 1.759, b: 0.298
Epoch: 415, Loss: 0.001, w1: 0.028, w2: 1.760, b: 0.297
Epoch: 416, Loss: 0.001, w1: 0.028, w2: 1.761, b: 0.296
Epoch: 417, Loss: 0.001, w1: 0.028, w2: 1.761, b: 0.295
Epoch: 418, Loss: 0.001, w1: 0.028, w2: 1.762, b: 0.294
Epoch: 419, Loss: 0.001, w1: 0.028, w2: 1.763, b: 0.293
Epoch: 420, Loss: 0.001, w1: 0.028, w2: 1.764, b: 0.292
Epoch: 421, Loss: 0.001, w1: 0.028, w2: 1.765, b: 0.291
Epoch: 422, Loss: 0.001, w1: 0.028, w2: 1.765, b: 0.290
Epoch: 423, Loss: 0.001, w1: 0.027, w2: 1.766, b: 0.289
Epoch: 424, Loss: 0.001, w1: 0.027, w2: 1.767, b: 0.288
Epoch: 425, Loss: 0.001, w1: 0.027, w2: 1.768, b: 0.287
Epoch: 426, Loss: 0.001, w1: 0.027, w2: 1.769, b: 0.286
Epoch: 427, Loss: 0.001, w1: 0.027, w2: 1.769, b: 0.285
Epoch: 428, Loss: 0.001, w1: 0.027, w2: 1.770, b: 0.284
Epoch: 429, Loss: 0.001, w1: 0.027, w2: 1.771, b: 0.283
Epoch: 430, Loss: 0.001, w1: 0.027, w2: 1.772, b: 0.282
Epoch: 431, Loss: 0.001, w1: 0.027, w2: 1.772, b: 0.282
Epoch: 432, Loss: 0.001, w1: 0.027, w2: 1.773, b: 0.281
Epoch: 433, Loss: 0.001, w1: 0.026, w2: 1.774, b: 0.280
Epoch: 434, Loss: 0.001, w1: 0.026, w2: 1.775, b: 0.279
Epoch: 435, Loss: 0.001, w1: 0.026, w2: 1.776, b: 0.278
Epoch: 436, Loss: 0.001, w1: 0.026, w2: 1.776, b: 0.277
Epoch: 437, Loss: 0.001, w1: 0.026, w2: 1.777, b: 0.276
Epoch: 438, Loss: 0.001, w1: 0.026, w2: 1.778, b: 0.275
Epoch: 439, Loss: 0.001, w1: 0.026, w2: 1.779, b: 0.274
Epoch: 440, Loss: 0.001, w1: 0.026, w2: 1.779, b: 0.273
Epoch: 441, Loss: 0.001, w1: 0.026, w2: 1.780, b: 0.272
Epoch: 442, Loss: 0.001, w1: 0.026, w2: 1.781, b: 0.271
Epoch: 443, Loss: 0.001, w1: 0.026, w2: 1.782, b: 0.270
Epoch: 444, Loss: 0.001, w1: 0.026, w2: 1.782, b: 0.269
Epoch: 445, Loss: 0.001, w1: 0.025, w2: 1.783, b: 0.268
Epoch: 446, Loss: 0.001, w1: 0.025, w2: 1.784, b: 0.268
Epoch: 447, Loss: 0.001, w1: 0.025, w2: 1.784, b: 0.267
Epoch: 448, Loss: 0.001, w1: 0.025, w2: 1.785, b: 0.266
Epoch: 449, Loss: 0.001, w1: 0.025, w2: 1.786, b: 0.265
Epoch: 450, Loss: 0.001, w1: 0.025, w2: 1.787, b: 0.264
Epoch: 451, Loss: 0.001, w1: 0.025, w2: 1.787, b: 0.263
Epoch: 452, Loss: 0.001, w1: 0.025, w2: 1.788, b: 0.262
Epoch: 453, Loss: 0.001, w1: 0.025, w2: 1.789, b: 0.261
Epoch: 454, Loss: 0.001, w1: 0.025, w2: 1.790, b: 0.260
Epoch: 455, Loss: 0.001, w1: 0.025, w2: 1.790, b: 0.259
Epoch: 456, Loss: 0.001, w1: 0.024, w2: 1.791, b: 0.259
Epoch: 457, Loss: 0.001, w1: 0.024, w2: 1.792, b: 0.258
Epoch: 458, Loss: 0.001, w1: 0.024, w2: 1.792, b: 0.257
Epoch: 459, Loss: 0.001, w1: 0.024, w2: 1.793, b: 0.256
Epoch: 460, Loss: 0.001, w1: 0.024, w2: 1.794, b: 0.255
Epoch: 461, Loss: 0.001, w1: 0.024, w2: 1.794, b: 0.254
Epoch: 462, Loss: 0.001, w1: 0.024, w2: 1.795, b: 0.253
Epoch: 463, Loss: 0.001, w1: 0.024, w2: 1.796, b: 0.253
Epoch: 464, Loss: 0.001, w1: 0.024, w2: 1.797, b: 0.252
Epoch: 465, Loss: 0.001, w1: 0.024, w2: 1.797, b: 0.251
Epoch: 466, Loss: 0.001, w1: 0.024, w2: 1.798, b: 0.250
Epoch: 467, Loss: 0.001, w1: 0.024, w2: 1.799, b: 0.249
Epoch: 468, Loss: 0.001, w1: 0.024, w2: 1.799, b: 0.248
Epoch: 469, Loss: 0.001, w1: 0.023, w2: 1.800, b: 0.247
Epoch: 470, Loss: 0.001, w1: 0.023, w2: 1.801, b: 0.247
Epoch: 471, Loss: 0.001, w1: 0.023, w2: 1.801, b: 0.246
Epoch: 472, Loss: 0.001, w1: 0.023, w2: 1.802, b: 0.245
Epoch: 473, Loss: 0.001, w1: 0.023, w2: 1.803, b: 0.244
Epoch: 474, Loss: 0.001, w1: 0.023, w2: 1.803, b: 0.243
Epoch: 475, Loss: 0.001, w1: 0.023, w2: 1.804, b: 0.242
Epoch: 476, Loss: 0.001, w1: 0.023, w2: 1.805, b: 0.242
Epoch: 477, Loss: 0.001, w1: 0.023, w2: 1.805, b: 0.241
Epoch: 478, Loss: 0.001, w1: 0.023, w2: 1.806, b: 0.240
Epoch: 479, Loss: 0.001, w1: 0.023, w2: 1.807, b: 0.239
Epoch: 480, Loss: 0.001, w1: 0.023, w2: 1.807, b: 0.238
Epoch: 481, Loss: 0.001, w1: 0.023, w2: 1.808, b: 0.238
Epoch: 482, Loss: 0.001, w1: 0.022, w2: 1.809, b: 0.237
Epoch: 483, Loss: 0.001, w1: 0.022, w2: 1.809, b: 0.236
Epoch: 484, Loss: 0.001, w1: 0.022, w2: 1.810, b: 0.235
Epoch: 485, Loss: 0.001, w1: 0.022, w2: 1.811, b: 0.234
Epoch: 486, Loss: 0.001, w1: 0.022, w2: 1.811, b: 0.234
Epoch: 487, Loss: 0.001, w1: 0.022, w2: 1.812, b: 0.233
Epoch: 488, Loss: 0.001, w1: 0.022, w2: 1.813, b: 0.232
Epoch: 489, Loss: 0.001, w1: 0.022, w2: 1.813, b: 0.231
Epoch: 490, Loss: 0.001, w1: 0.022, w2: 1.814, b: 0.230
Epoch: 491, Loss: 0.001, w1: 0.022, w2: 1.814, b: 0.230
Epoch: 492, Loss: 0.001, w1: 0.022, w2: 1.815, b: 0.229
Epoch: 493, Loss: 0.001, w1: 0.022, w2: 1.816, b: 0.228
Epoch: 494, Loss: 0.001, w1: 0.022, w2: 1.816, b: 0.227
Epoch: 495, Loss: 0.001, w1: 0.021, w2: 1.817, b: 0.227
Epoch: 496, Loss: 0.001, w1: 0.021, w2: 1.818, b: 0.226
Epoch: 497, Loss: 0.001, w1: 0.021, w2: 1.818, b: 0.225
Epoch: 498, Loss: 0.001, w1: 0.021, w2: 1.819, b: 0.224
Epoch: 499, Loss: 0.001, w1: 0.021, w2: 1.819, b: 0.223
Epoch: 500, Loss: 0.001, w1: 0.021, w2: 1.820, b: 0.223


Training finished; w1 updated to 0.021, w2 updated to 1.820, b updated to 0.223, prediction for x = 4: 7.840545654296875

As the epoch count grows, the loss keeps shrinking, and the run can be considered essentially converged. Training even longer would likely tighten the fit a little further, but the time cost keeps growing for ever-smaller gains.

Note that by the end of epoch 500 the loss on the training samples is already close to 0, yet testing with x = 4.0 yields a prediction of 7.8405, still slightly off the true value of 8.0. The underlying relation is linear (y = 2x), but our quadratic model has an extra parameter, so many (w1, w2, b) combinations fit the three training points almost perfectly; SGD has settled on one with a small residual quadratic term (w1 ≈ 0.021) that degrades extrapolation to x = 4. This is a mild overfitting effect of the over-parameterized model rather than a consequence of not using an optimizer; a library optimizer such as torch.optim.SGD (see the sketch below) mainly makes the update step more convenient, not more accurate.
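
For comparison, here is a minimal sketch of the same training loop written with PyTorch's built-in torch.optim.SGD (not the code used for the results above; just the standard pattern that replaces the manual .data updates and gradient zeroing):

import torch

w1 = torch.tensor([1.0], requires_grad=True)
w2 = torch.tensor([1.0], requires_grad=True)
b = torch.tensor([1.0], requires_grad=True)

optimizer = torch.optim.SGD([w1, w2, b], lr=0.03)

for epoch in range(500):
    for x_data, y_label in zip([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]):
        loss_value = (w1 * x_data ** 2 + w2 * x_data + b - y_label) ** 2
        optimizer.zero_grad()  # replaces the manual .grad.data.zero_() calls
        loss_value.backward()  # autograd fills in .grad for w1, w2, b
        optimizer.step()       # applies the SGD update to all three tensors

print(w1.item(), w2.item(), b.item())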

IV. Caveats

1. Usage requirements for tensor .item()

In PyTorch, a tensor represents a multi-dimensional array and can hold integers, floating-point numbers, and other data types. If a tensor contains exactly one element, the .item() method extracts that element as a Python scalar.

        For example, consider the following code:
        import torch

        x = torch.tensor([5.0])  # create a tensor containing a single element
        value = x.item()  # extract the numeric value as a Python scalar
        print(value)  # output: 5.0
In the code above we create a tensor x containing the single element 5.0. Calling x.item() extracts the numeric value and stores it in the variable value; printing value outputs 5.0. Note that .item() only works on tensors containing a single element; calling it on a tensor with multiple elements raises an exception.

2. Usage requirements for tensor .data (an attribute, not a method)

Tensor .data refers to the .data attribute of a PyTorch tensor object. In PyTorch, the .data attribute gives a reference to the tensor's underlying data.

Specifically, .data returns a new tensor object that shares the same underlying storage as the original. This means that for two tensors x and y, if you write y = x.data, then y is a view of x: operations on y are reflected in x, and vice versa.

The main purpose of .data is to allow direct access to the underlying data without autograd tracking. This is useful when you want to read or modify a tensor's values without those operations generating gradients.

Be aware that this shared reference can cause problems. Because .data shares memory with the original tensor, in-place changes made through it are invisible to autograd and can silently corrupt the computation graph, especially during backpropagation. In most cases it is therefore recommended to use .detach() (or a torch.no_grad() block) to cut the gradient connection instead of using .data directly.
In short, .data is a tensor attribute that provides direct, gradient-free access to the underlying data and shares memory with the original tensor.
Its use requires care around gradient computation and the computation graph; .detach() is the recommended alternative.

If you want to bring .detach() into the update w.data = w.data - alpha * w.grad.data, it can be rewritten as:
    w.data = w.data.detach() - alpha * w.grad.data
(Strictly speaking, .data already returns a tensor detached from the graph, so the extra .detach() here is redundant, though harmless.)
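
The cleaner, recommended pattern today is to perform the update in place inside a torch.no_grad() block, so no graph is built for the update at all. A minimal sketch (the one-step loss below is made up for illustration):

    import torch

    w = torch.tensor([1.0], requires_grad=True)
    alpha = 0.03

    loss = (w * 2.0 - 4.0) ** 2  # illustrative loss with its minimum at w = 2
    loss.backward()

    with torch.no_grad():    # the update itself is not recorded in any graph
        w -= alpha * w.grad  # in-place SGD step on the leaf tensor
    w.grad.zero_()           # reset the accumulated gradient

    print(w.item())  # roughly 1.24 after one step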
