TensorFlow linear regression: y = 2x + 1

Generating an artificial dataset

Import the required libraries

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

%matplotlib inline
print("TensorFlow version:", tf.__version__)  # check the TensorFlow version

Generate the dataset

First, generate the input data.

We construct x and y values that satisfy this function, and at the same time add some noise so the points no longer lie exactly on the line.

# generate float32 data so it matches the float32 model variables defined later
x_data = np.linspace(-1, 1, 100, dtype=np.float32)

np.random.seed(5)  # set the random seed for reproducibility
# y = 2x + 1 + noise, where the noise has the same shape as x_data
y_data = (2 * x_data + 1.0 + np.random.randn(*x_data.shape) * 0.4).astype(np.float32)

A few quick checks of the pieces involved:

np.random.randn(10)  # ten samples from a standard normal distribution

x_data.shape  # (100,)

np.random.randn(*x_data.shape)  # noise with the same shape as x_data

Plot the generated data with matplotlib

plt.scatter(x_data, y_data)
plt.xlabel("x")
plt.ylabel("y")
plt.title("Figure: Training Data")

# Scatter plot of the randomly generated data
plt.scatter(x_data, y_data)

# Plot the target linear function we want the model to learn: y = 2x + 1
plt.plot(x_data, 1.0 + 2 * x_data, 'r', linewidth=3)

Build the regression model

# Calling the model performs the forward pass (prediction)

def model(x, w, b):
    return tf.multiply(x, w) + b
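
As a quick sanity check (a minimal sketch; the input value here is illustrative), plugging the true parameters w = 2, b = 1 into the forward pass should reproduce the target line exactly:

# With the true parameters, model(0.5, 2, 1) should equal 2*0.5 + 1 = 2.0
print(model(0.5, 2.0, 1.0))  # tf.Tensor(2.0, shape=(), dtype=float32)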

Create the trainable variables

In TensorFlow, variables are declared with tf.Variable.

tf.Variable stores parameter values and lets them be updated during training.

A variable's initial value can be a random number, a constant, or a value computed from other variables.
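
For illustration, a minimal sketch of those three initialization styles (the variable names here are hypothetical):

v_rand = tf.Variable(tf.random.normal([]))  # initialized from a random number
v_const = tf.Variable(1.0)                  # initialized from a constant
v_derived = tf.Variable(v_const * 2.0)      # computed from another variable's value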

# Variable w: the slope of the linear function.
# Note: dtype must be passed by keyword; the second positional
# argument of tf.Variable is trainable, not dtype.
w = tf.Variable(np.random.randn(), dtype=tf.float32)

# Variable b: the intercept of the linear function
b = tf.Variable(0.0, dtype=tf.float32)

Define the loss function

The loss function measures the error between predictions and true values, which guides the direction in which the model converges.

Common loss functions include mean squared error (MSE) and cross-entropy.

def loss(x, y, w, b):
    err = model(x, w, b) - y            # difference between prediction and label
    squared_err = tf.square(err)        # square the differences
    return tf.reduce_mean(squared_err)  # average them: the mean squared error
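
As a cross-check (a minimal sketch, assuming the tf.keras.losses module of TF 2.x), the hand-written loss should agree with the built-in MSE; at the true parameters both should print roughly 0.16, i.e. the noise variance 0.4**2:

mse = tf.keras.losses.MeanSquaredError()
print(loss(x_data, y_data, 2.0, 1.0).numpy())        # hand-written MSE
print(mse(y_data, model(x_data, 2.0, 1.0)).numpy())  # built-in MSE, same value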

Set the training hyperparameters

training_epochs = 10  # number of training epochs
learning_rate = 0.01  # learning rate

Define the gradient computation

def grad(x, y, w, b):
    with tf.GradientTape() as tape:
        loss_ = loss(x, y, w, b)
    return tape.gradient(loss_, [w, b])  # gradients of the loss w.r.t. w and b
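
To see that the tape agrees with the calculus (a minimal sketch; x0 and y0 are illustrative values), compare it against the analytic gradient of the squared error for a single sample, dL/dw = 2(wx + b - y)x and dL/db = 2(wx + b - y):

x0, y0 = 0.5, 2.0  # one illustrative sample
dw, db = grad(x0, y0, w, b)
residual = w.numpy() * x0 + b.numpy() - y0
print(dw.numpy(), 2 * residual * x0)  # these two numbers should match
print(db.numpy(), 2 * residual)       # and so should these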

Run the training loop

step = 0            # training step counter
loss_list = []      # list used to record loss values
display_step = 10   # how often to print progress (a display setting, not a hyperparameter)

for epoch in range(training_epochs):
    for xs, ys in zip(x_data, y_data):

        loss_ = loss(xs, ys, w, b)  # compute the loss
        loss_list.append(loss_)     # record this step's loss

        delta_w, delta_b = grad(xs, ys, w, b)  # gradient at the current (w, b)
        change_w = delta_w * learning_rate     # how much to adjust w
        change_b = delta_b * learning_rate     # how much to adjust b
        w.assign_sub(change_w)  # w takes the value w - change_w
        b.assign_sub(change_b)  # b takes the value b - change_b

        step = step + 1               # advance the step counter
        if step % display_step == 0:  # print progress information
            print("Training Epoch:", '%02d' % (epoch + 1), "Step: %03d" % step, "loss=%.6f" % loss_)
    plt.plot(x_data, w.numpy() * x_data + b.numpy())  # after each epoch, draw the current regression line
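
The assign_sub updates above are hand-rolled stochastic gradient descent. Equivalently (a minimal sketch, not the original code of this walkthrough), one epoch of the update step can be delegated to a built-in optimizer:

optimizer = tf.optimizers.SGD(learning_rate=learning_rate)
for xs, ys in zip(x_data, y_data):
    grads = grad(xs, ys, w, b)
    optimizer.apply_gradients(zip(grads, [w, b]))  # applies v <- v - lr * grad to w and b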

Training output

During training we iterate for the configured number of epochs; in each epoch the samples are fed to the model one at a time and a gradient-descent update is applied per sample.

Training Epoch: 01 Step: 010 loss=1.336950
Training Epoch: 01 Step: 020 loss=0.000148
Training Epoch: 01 Step: 030 loss=0.067854
Training Epoch: 01 Step: 040 loss=1.815440
Training Epoch: 01 Step: 050 loss=1.042153
Training Epoch: 01 Step: 060 loss=1.945168
Training Epoch: 01 Step: 070 loss=3.608630
Training Epoch: 01 Step: 080 loss=2.568610
Training Epoch: 01 Step: 090 loss=1.824860
Training Epoch: 01 Step: 100 loss=2.897502
Training Epoch: 02 Step: 110 loss=2.572464
Training Epoch: 02 Step: 120 loss=0.282773
Training Epoch: 02 Step: 130 loss=0.153508
Training Epoch: 02 Step: 140 loss=0.337079
Training Epoch: 02 Step: 150 loss=0.020678
Training Epoch: 02 Step: 160 loss=0.182118
Training Epoch: 02 Step: 170 loss=0.771482
Training Epoch: 02 Step: 180 loss=0.339285
Training Epoch: 02 Step: 190 loss=0.156584
Training Epoch: 02 Step: 200 loss=0.760717
Training Epoch: 03 Step: 210 loss=1.060150
Training Epoch: 03 Step: 220 loss=0.056784
Training Epoch: 03 Step: 230 loss=0.083184
Training Epoch: 03 Step: 240 loss=0.291806
Training Epoch: 03 Step: 250 loss=0.000171
Training Epoch: 03 Step: 260 loss=0.030629
Training Epoch: 03 Step: 270 loss=0.308218
Training Epoch: 03 Step: 280 loss=0.046899
Training Epoch: 03 Step: 290 loss=0.000413
Training Epoch: 03 Step: 300 loss=0.271245
Training Epoch: 04 Step: 310 loss=0.401460
Training Epoch: 04 Step: 320 loss=0.000228
Training Epoch: 04 Step: 330 loss=0.032187
Training Epoch: 04 Step: 340 loss=0.319214
Training Epoch: 04 Step: 350 loss=0.003063
Training Epoch: 04 Step: 360 loss=0.006115
Training Epoch: 04 Step: 370 loss=0.173298
Training Epoch: 04 Step: 380 loss=0.002532
Training Epoch: 04 Step: 390 loss=0.024414
Training Epoch: 04 Step: 400 loss=0.123415
Training Epoch: 05 Step: 410 loss=0.175341
Training Epoch: 05 Step: 420 loss=0.011656
Training Epoch: 05 Step: 430 loss=0.013609
Training Epoch: 05 Step: 440 loss=0.340452
Training Epoch: 05 Step: 450 loss=0.005179
Training Epoch: 05 Step: 460 loss=0.001090
Training Epoch: 05 Step: 470 loss=0.121781
Training Epoch: 05 Step: 480 loss=0.001001
Training Epoch: 05 Step: 490 loss=0.059624
Training Epoch: 05 Step: 500 loss=0.070916
Training Epoch: 06 Step: 510 loss=0.094695
Training Epoch: 06 Step: 520 loss=0.029513
Training Epoch: 06 Step: 530 loss=0.007022
Training Epoch: 06 Step: 540 loss=0.352419
Training Epoch: 06 Step: 550 loss=0.006367
Training Epoch: 06 Step: 560 loss=0.000112
Training Epoch: 06 Step: 570 loss=0.099330
Training Epoch: 06 Step: 580 loss=0.005327
Training Epoch: 06 Step: 590 loss=0.083320
Training Epoch: 06 Step: 600 loss=0.049832
Training Epoch: 07 Step: 610 loss=0.063026
Training Epoch: 07 Step: 620 loss=0.041784
Training Epoch: 07 Step: 630 loss=0.004484
Training Epoch: 07 Step: 640 loss=0.358702
Training Epoch: 07 Step: 650 loss=0.007004
Training Epoch: 07 Step: 660 loss=0.000001
Training Epoch: 07 Step: 670 loss=0.088823
Training Epoch: 07 Step: 680 loss=0.008829
Training Epoch: 07 Step: 690 loss=0.096867
Training Epoch: 07 Step: 700 loss=0.040542
Training Epoch: 08 Step: 710 loss=0.049373
Training Epoch: 08 Step: 720 loss=0.048849
Training Epoch: 08 Step: 730 loss=0.003409
Training Epoch: 08 Step: 740 loss=0.361934
Training Epoch: 08 Step: 750 loss=0.007338
Training Epoch: 08 Step: 760 loss=0.000043
Training Epoch: 08 Step: 770 loss=0.083710
Training Epoch: 08 Step: 780 loss=0.010947
Training Epoch: 08 Step: 790 loss=0.104144
Training Epoch: 08 Step: 800 loss=0.036186
Training Epoch: 09 Step: 810 loss=0.043067
Training Epoch: 09 Step: 820 loss=0.052654
Training Epoch: 09 Step: 830 loss=0.002919
Training Epoch: 09 Step: 840 loss=0.363585
Training Epoch: 09 Step: 850 loss=0.007510
Training Epoch: 09 Step: 860 loss=0.000090
Training Epoch: 09 Step: 870 loss=0.081169
Training Epoch: 09 Step: 880 loss=0.012110
Training Epoch: 09 Step: 890 loss=0.107945
Training Epoch: 09 Step: 900 loss=0.034066
Training Epoch: 10 Step: 910 loss=0.040026
Training Epoch: 10 Step: 920 loss=0.054645
Training Epoch: 10 Step: 930 loss=0.002684
Training Epoch: 10 Step: 940 loss=0.364426
Training Epoch: 10 Step: 950 loss=0.007598
Training Epoch: 10 Step: 960 loss=0.000120
Training Epoch: 10 Step: 970 loss=0.079892
Training Epoch: 10 Step: 980 loss=0.012725
Training Epoch: 10 Step: 990 loss=0.109905
Training Epoch: 10 Step: 1000 loss=0.033013

[Figure: the regression lines drawn after each training epoch]

Display the trained parameters

print("w:",w.numpy())
print("b:",b.numpy())

Visualize the result

plt.scatter(x_data, y_data, label='Original data')
plt.plot(x_data, x_data * 2.0 + 1.0, label='Target line', color='g', linewidth=3)
plt.plot(x_data, x_data * w.numpy() + b.numpy(), label='Fitted line', color='r', linewidth=3)
plt.legend(loc=2)  # place the legend in the upper-left corner

Inspect how the loss evolves

plt.plot(loss_list)

plt.plot(loss_list, 'r+')  # the same curve, drawn with red '+' markers
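
The per-step curve is noisy because every point is the loss of a single sample. Averaging within each epoch (a minimal sketch; one epoch covers len(x_data) samples) gives a smoother view of convergence:

samples_per_epoch = len(x_data)
epoch_loss = [np.mean(loss_list[i:i + samples_per_epoch])
              for i in range(0, len(loss_list), samples_per_epoch)]
plt.plot(epoch_loss, 'o-')  # one averaged loss value per epoch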

 

Make a prediction

x_test = 3.21

predict = model(x_test, w.numpy(), b.numpy())
print("Predicted value: %f" % predict)

target = 2 * x_test + 1.0
print("Target value: %f" % target)

 
