stx_tensorflow Learning Notes (2): Building a Linear Regression Model

Linear Regression Model

To build a linear regression model, we first need to import the packages we will use:

import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

Next, we randomly generate 1000 points scattered around the line y = 0.1x + 0.3 (you can of course pick any other line; it works just the same).

We use NumPy's numpy.random.normal(loc=0.0, scale=1.0, size=None), whose parameters are:

loc: float
    The mean of the distribution (the centre of the distribution)
scale: float
    The standard deviation of the distribution (its width: a larger scale gives a shorter, wider curve; a smaller scale a taller, narrower one)
size: int or tuple of ints
    The shape of the output; defaults to None, in which case a single value is returned
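
To get a quick feel for these parameters, here is a small illustration of my own (not part of the original walkthrough) showing how loc, scale, and size behave:

# five samples centered at 0.0 with standard deviation 0.55
print(np.random.normal(loc=0.0, scale=0.55, size=5))
# with size left as None a single float is returned, which is the form used in the loop below
print(np.random.normal(0.0, 0.55))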

num_points = 1000
vectors_set = []
for i in range(num_points):
    # x sampled around 0 with a standard deviation of 0.55
    x1 = np.random.normal(0.0, 0.55)
    # add noise with a standard deviation of 0.03 around y = 0.1x + 0.3
    y1 = x1 * 0.1 + 0.3 + np.random.normal(0.0, 0.03)
    # append each [x, y] pair to the list
    vectors_set.append([x1, y1])
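
As a side note, the same 1000 points can also be generated without a Python loop by passing size=num_points; this is only an alternative sketch, and the loop above is what the rest of the post uses:

x_vec = np.random.normal(0.0, 0.55, size=num_points)
y_vec = x_vec * 0.1 + 0.3 + np.random.normal(0.0, 0.03, size=num_points)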

Then extract the x and y samples; since each element is [x, y], index 0 is x and index 1 is y:

x_data = [v[0] for v in vectors_set]
y_data = [v[1] for v in vectors_set]
# scatter-plot the samples; c='r' is short for color and draws them in red
plt.scatter(x_data, y_data, c='r')
# display the figure
plt.show()

Building the model

# create a 1-D W variable initialized with a uniform random value in [-1, 1]
W = tf.Variable(tf.random_uniform([1], -1.0, 1.0), name='W')
# create a 1-D b variable initialized to 0
b = tf.Variable(tf.zeros([1]), name='b')
# compute the predicted y from the current W and b
y = W * x_data + b

# use the mean squared error between the predicted y and the actual y_data as the loss
loss = tf.reduce_mean(tf.square(y - y_data), name='loss')
# optimize the parameters with gradient descent (learning rate 0.5)
optimizer = tf.train.GradientDescentOptimizer(0.5)
# training is simply minimizing this loss
train = optimizer.minimize(loss, name='train')
# open a Session
sess = tf.Session()
# initialize the variables
init = tf.global_variables_initializer()
sess.run(init)

# print the initial W and b before training
print("W =", sess.run(W), "b =", sess.run(b), "loss =", sess.run(loss))
# run 20 training steps
for step in range(20):
    sess.run(train)
    # print W, b, and the loss after each step
    print("W =", sess.run(W), "b =", sess.run(b), "loss =", sess.run(loss))
# tf.train.SummaryWriter was renamed in TF 1.0; the current call is tf.summary.FileWriter
writer = tf.summary.FileWriter("./tmp", sess.graph)
#W = [ 0.96539688] b = [ 0.] loss = 0.297884
#W = [ 0.71998411] b = [ 0.28193575] loss = 0.112606
#W = [ 0.54009342] b = [ 0.28695393] loss = 0.0572231
#W = [ 0.41235447] b = [ 0.29063231] loss = 0.0292957
#W = [ 0.32164571] b = [ 0.2932443] loss = 0.0152131
#W = [ 0.25723246] b = [ 0.29509908] loss = 0.00811188
#W = [ 0.21149193] b = [ 0.29641619] loss = 0.00453103
#W = [ 0.17901111] b = [ 0.29735151] loss = 0.00272536
#W = [ 0.15594614] b = [ 0.29801565] loss = 0.00181483
#W = [ 0.13956745] b = [ 0.29848731] loss = 0.0013557
#W = [ 0.12793678] b = [ 0.29882219] loss = 0.00112418
#W = [ 0.11967772] b = [ 0.29906002] loss = 0.00100743
#W = [ 0.11381286] b = [ 0.29922891] loss = 0.000948558
#W = [ 0.10964818] b = [ 0.29934883] loss = 0.000918872
#W = [ 0.10669079] b = [ 0.29943398] loss = 0.000903903
#W = [ 0.10459071] b = [ 0.29949448] loss = 0.000896354
#W = [ 0.10309943] b = [ 0.29953739] loss = 0.000892548
#W = [ 0.10204045] b = [ 0.29956791] loss = 0.000890629
#W = [ 0.10128847] b = [ 0.29958954] loss = 0.000889661
#W = [ 0.10075447] b = [ 0.29960492] loss = 0.000889173
#W = [ 0.10037527] b = [ 0.29961586] loss = 0.000888927
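
As a quick sanity check of my own (not part of the original code), NumPy's closed-form least-squares fit should land on roughly the same parameters that gradient descent converged to:

# np.polyfit with degree 1 returns [slope, intercept]
slope, intercept = np.polyfit(x_data, y_data, 1)
print("least-squares fit: W =", slope, "b =", intercept)  # expect values close to 0.1 and 0.3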

Plotting the result with plt

plt.scatter(x_data, y_data, c='r')
# plot the fitted line using the trained W and b
plt.plot(x_data, sess.run(W) * x_data + sess.run(b))
plt.show()
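
Finally, since the graph was written out with the FileWriter above, it can be inspected by running tensorboard --logdir ./tmp (assuming TensorBoard is installed) and opening the address it prints in a browser.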
