Building a Linear Regression Model with TensorFlow

A simple demo of a linear regression model implemented with TensorFlow.

import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

num_points = 1000
vectors_set = []
for i in range(num_points):
    x1 = np.random.normal(0.0, 0.55)
    y1 = x1 * 0.1 + 0.3 + np.random.normal(0.0, 0.03)
    vectors_set.append([x1, y1])
# generate some samples
x_data = [v[0] for v in vectors_set]
y_data = [v[1] for v in vectors_set]
plt.scatter(x_data, y_data, c='r')
plt.show()

[Figure: scatter plot of the generated samples]
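As a side note, the point-by-point loop above can also be vectorized with NumPy, which is faster for large sample counts. This is a sketch under the same distribution parameters; the `rng` generator name is my own choice, not part of the original code:

```python
import numpy as np

# draw all 1000 samples at once instead of appending in a loop
rng = np.random.default_rng(0)
x_data = rng.normal(0.0, 0.55, size=1000)
y_data = x_data * 0.1 + 0.3 + rng.normal(0.0, 0.03, size=1000)
```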

# create a 1-D weight W, initialized with a random value in [-1, 1)
W = tf.Variable(tf.random_uniform([1], -1.0, 1.0), name='W')
# b starts at 0
b = tf.Variable(tf.zeros([1]), name='b')
y = W * x_data + b
# optimization: use gradient descent to keep fitting W and b
# the loss is the mean squared error between the prediction y and the actual y_data
# tf.square computes the squared difference between the true and predicted values
# tf.reduce_mean computes the mean
loss = tf.reduce_mean(tf.square(y - y_data), name='loss')
# use gradient descent to optimize the parameters: tf.train.GradientDescentOptimizer;
# 0.5 is the learning rate, which you can choose freely
optimizer = tf.train.GradientDescentOptimizer(0.5)
# training means minimizing this error: let the optimizer above minimize the loss
train = optimizer.minimize(loss, name='train')
# after the definitions above, we need a session to actually run them
sess = tf.Session()
# initialize the global variables
init = tf.global_variables_initializer()
sess.run(init)
# print the initial W and b
print("W =", sess.run(W), "b =", sess.run(b), "loss =", sess.run(loss))
# run 20 training steps
for step in range(20):
    sess.run(train)
    # print the updated W and b
    print("W =", sess.run(W), "b =", sess.run(b), "loss =", sess.run(loss))

Output:

W = [0.24886155] b =  [0.] loss =  0.097556226
W = [0.20717728] b =  [0.30054685] loss =  0.0041697198
W = [0.17645806] b =  [0.30068] loss =  0.0025523046
W = [0.15452673] b =  [0.30077815] loss =  0.0017279172
W = [0.13886932] b =  [0.30084822] loss =  0.0013077314
W = [0.12769103] b =  [0.30089822] loss =  0.0010935644
W = [0.11971053] b =  [0.30093396] loss =  0.0009844047
W = [0.11401301] b =  [0.30095944] loss =  0.0009287667
W = [0.10994539] b =  [0.30097765] loss =  0.00090040814
W = [0.10704139] b =  [0.30099064] loss =  0.000885954
W = [0.10496815] b =  [0.3009999] loss =  0.0008785868
W = [0.103488] b =  [0.30100656] loss =  0.00087483175
W = [0.10243127] b =  [0.30101126] loss =  0.0008729178
W = [0.10167685] b =  [0.30101466] loss =  0.0008719423
W = [0.10113824] b =  [0.30101708] loss =  0.0008714451
W = [0.10075372] b =  [0.30101877] loss =  0.00087119173
W = [0.10047919] b =  [0.30102] loss =  0.0008710625
W = [0.1002832] b =  [0.3010209] loss =  0.00087099674
W = [0.10014328] b =  [0.30102152] loss =  0.00087096303
W = [0.10004338] b =  [0.30102196] loss =  0.000870946
W = [0.09997206] b =  [0.3010223] loss =  0.00087093737
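Under the hood, each `sess.run(train)` applies one gradient-descent update to W and b. The update can be sketched in plain NumPy; this is my own minimal reimplementation of the mean-squared-error gradients for illustration, not TensorFlow's internal code:

```python
import numpy as np

def gradient_step(W, b, x, y, lr=0.5):
    # residuals of the current linear prediction
    err = W * x + b - y
    # gradients of mean((W*x + b - y)^2) with respect to W and b
    grad_W = 2.0 * np.mean(err * x)
    grad_b = 2.0 * np.mean(err)
    return W - lr * grad_W, b - lr * grad_b

# same synthetic data as above
rng = np.random.default_rng(0)
x = rng.normal(0.0, 0.55, 1000)
y = x * 0.1 + 0.3 + rng.normal(0.0, 0.03, 1000)

W, b = 0.0, 0.0
for _ in range(20):
    W, b = gradient_step(W, b, x, y)
# after 20 steps W is close to 0.1 and b is close to 0.3
```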

Plot:

# W converges toward 0.1, b toward 0.3, and the loss toward 0
# plot the fitted line over the samples
plt.scatter(x_data, y_data, c='r')
plt.plot(x_data, sess.run(W) * x_data + sess.run(b))
plt.show()
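As a cross-check, the same line can be recovered in closed form with ordinary least squares; `np.polyfit` with degree 1 does exactly this. A sanity-check sketch on freshly generated data, independent of the TensorFlow session:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(0.0, 0.55, 1000)
y = x * 0.1 + 0.3 + rng.normal(0.0, 0.03, 1000)

# degree-1 polynomial fit = closed-form least-squares line
W_ls, b_ls = np.polyfit(x, y, 1)
# both estimates should land near the true slope 0.1 and intercept 0.3
```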

[Figure: fitted line over the scatter plot]
