Implementing Linear Regression with TensorFlow

This post walks through training a linear regression model with TensorFlow. First, the Boston housing dataset is loaded and min-max normalized. The parameters are then updated by gradient descent, steadily reducing the loss, with the loss value printed every 1000 iterations. Finally, the loss curve over training is plotted and the model's predictions on the test data are visualized.
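Concretely, with mean-squared-error loss L = mean((y − (Xw + b))²), each step applies the plain gradient-descent update

w ← w − lr · ∂L/∂w
b ← b − lr · ∂L/∂b

where lr is the learning rate used in the code below.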

Import the required packages and load the data

import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np

# Fetch the dataset
boston_house = tf.keras.datasets.boston_housing
# load_data returns (train, test) tuples; test_split sets the test fraction
(train_x, train_y), (test_x, test_y) = boston_house.load_data(test_split=0.1)
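For orientation: the Keras copy of the Boston housing data holds 506 samples with 13 features each, so test_split=0.1 leaves roughly 455 training and 51 test samples. A quick sanity check (assuming the load above succeeded):

print(train_x.shape, train_y.shape)  # roughly (455, 13) (455,)
print(test_x.shape, test_y.shape)    # roughly (51, 13) (51,)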

Normalize the input features

train_x = (train_x - train_x.min(axis=0)) / (train_x.max(axis=0) - train_x.min(axis=0))
test_x = (test_x - test_x.min(axis=0)) / (test_x.max(axis=0) - test_x.min(axis=0))

train_x = tf.cast(train_x, tf.float32)
test_x = tf.cast(test_x, tf.float32)
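One caveat worth flagging: the snippet above scales the test set with its own min/max. The more common convention is to reuse the training-set statistics so both splits share one scale; a minimal alternative sketch, operating on the raw NumPy arrays before the tf.cast calls:

x_min = train_x.min(axis=0)
x_max = train_x.max(axis=0)
train_x = (train_x - x_min) / (x_max - x_min)
test_x = (test_x - x_min) / (x_max - x_min)  # same scale as the training data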

Reshape the targets into column vectors to match the shape of the predictions

train_y = train_y.reshape(-1, 1)
test_y = test_y.reshape(-1, 1)
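This reshape matters: train_y comes back with shape (N,), while the prediction tf.matmul(train_x, w) + b below has shape (N, 1); subtracting the two would silently broadcast to (N, N) and corrupt the loss. A small NumPy illustration of the hazard:

y = np.zeros(4)                       # shape (4,)
p = np.zeros((4, 1))                  # shape (4, 1)
print((y - p).shape)                  # (4, 4) -- unintended broadcast
print((y.reshape(-1, 1) - p).shape)   # (4, 1) -- what we want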

Initialize the parameters and train with gradient descent

w = tf.Variable(tf.random.truncated_normal([13, 1], stddev=0.01, seed=1))  # weights, one per feature
b = tf.Variable(tf.random.truncated_normal([1, 1], stddev=0.01, seed=1))   # bias
loss_list = []
lr = 0.01  # learning rate

for i in range(50000):
    with tf.GradientTape() as tape:
        pred = tf.matmul(train_x, w) + b                  # linear model: y = Xw + b
        loss = tf.reduce_mean(tf.square(train_y - pred))  # mean squared error
    loss_list.append(loss.numpy())
    grad = tape.gradient(loss, [w, b])

    w.assign_sub(lr * grad[0])  # gradient-descent update for w
    b.assign_sub(lr * grad[1])  # gradient-descent update for b

    if i % 1000 == 0:
        print('i: %d, loss:%f' % (i, loss))

pre_test_y = tf.matmul(test_x, w) + b
pre_test_y = pre_test_y.numpy()  # convert to a NumPy array for plotting
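To put a number on generalization, the same mean-squared-error can be computed on the test split; a short sketch reusing the variables above:

test_mse = np.mean((test_y - pre_test_y) ** 2)
print('test MSE: %f' % test_mse)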

Visualization

# Plot the training loss curve
plt.figure("train_loss")
plt.title("train_loss")
plt.plot(loss_list, label='train_loss')
plt.legend()
plt.show()

# Plot predictions against ground truth
plt.figure('predict')
plt.title("predict")
plt.xlabel("ground truth")
plt.ylabel("infer result")
x = np.arange(1, 50)
y = x
plt.plot(x, y)  # reference line y = x; a perfect prediction falls on it
plt.scatter(test_y, pre_test_y, color="green", label="Test")
plt.grid()
plt.legend()
plt.show()
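For comparison, the same linear model can be expressed with Keras' high-level API, which handles the update loop internally. This is a minimal sketch under the same preprocessing, not part of the original post, and the hyperparameters are illustrative:

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(13,))])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01), loss='mse')
model.fit(train_x, train_y, epochs=500, verbose=0)
print(model.evaluate(test_x, test_y, verbose=0))  # test-set MSE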

The logged loss values during training are as follows:

i: 0, loss:584.958679
i: 1000, loss:33.152821
i: 2000, loss:26.689535
i: 3000, loss:24.490910
i: 4000, loss:23.541544
i: 5000, loss:23.036236
i: 6000, loss:22.716373
i: 7000, loss:22.488619
i: 8000, loss:22.315008
i: 9000, loss:22.177662
i: 10000, loss:22.066746
i: 11000, loss:21.976049
i: 12000, loss:21.901287
i: 13000, loss:21.839272
i: 14000, loss:21.787601
i: 15000, loss:21.744373
i: 16000, loss:21.708096
i: 17000, loss:21.677565
i: 18000, loss:21.651814
i: 19000, loss:21.630049
i: 20000, loss:21.611616
i: 21000, loss:21.595987
i: 22000, loss:21.582718
i: 23000, loss:21.571444
i: 24000, loss:21.561855
i: 25000, loss:21.553692
i: 26000, loss:21.546738
i: 27000, loss:21.540808
i: 28000, loss:21.535749
i: 29000, loss:21.531433
i: 30000, loss:21.527750
i: 31000, loss:21.524603
i: 32000, loss:21.521917
i: 33000, loss:21.519627
i: 34000, loss:21.517666
i: 35000, loss:21.515989
i: 36000, loss:21.514557
i: 37000, loss:21.513329
i: 38000, loss:21.512283
i: 39000, loss:21.511389
i: 40000, loss:21.510618
i: 41000, loss:21.509964
i: 42000, loss:21.509403
i: 43000, loss:21.508924
i: 44000, loss:21.508514
i: 45000, loss:21.508160
i: 46000, loss:21.507860
i: 47000, loss:21.507603
i: 48000, loss:21.507381
i: 49000, loss:21.507193

The resulting plots:

[Figure: train_loss — loss curve over 50,000 iterations]

[Figure: predict — predicted vs. ground-truth prices on the test set, with the y = x reference line]
