TensorFlow Learning (Part 1): Linear Regression

The linear regression model:

f(x_i) = w x_i + b

Define the loss function as the mean squared error (with an extra factor of 1/2 to match the cost used in the code below):

loss = \frac{1}{2n} \sum_{i=1}^{n} (\hat{y}_i - y_i)^2

Gradient descent is then used to repeatedly update the weights w and b so as to minimize this loss.
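For reference, the 1/2 factor cancels when differentiating, so with learning rate \eta each gradient descent step works out to:

\frac{\partial loss}{\partial w} = \frac{1}{n} \sum_{i=1}^{n} (\hat{y}_i - y_i)\, x_i,
\qquad
\frac{\partial loss}{\partial b} = \frac{1}{n} \sum_{i=1}^{n} (\hat{y}_i - y_i)

w \leftarrow w - \eta \frac{\partial loss}{\partial w},
\qquad
b \leftarrow b - \eta \frac{\partial loss}{\partial b}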

Import the required packages and initialize the learning rate, the number of training epochs, and the display interval

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
rng = np.random

learning_rate = 0.01
training_epochs = 5000
display_step = 50

Load the training data and build the linear regression model

# Training data
train_X = np.asarray([3.3, 4.4, 5.5, 6.71, 6.93, 4.168, 9.779, 6.182, 7.59, 2.167,
                      7.042, 10.791, 5.313, 7.997, 5.654, 9.27, 3.1])
train_Y = np.asarray([1.7, 2.76, 2.09, 3.19, 1.694, 1.573, 3.366, 2.596, 2.53, 1.221,
                      2.827, 3.465, 1.65, 2.904, 2.42, 2.94, 1.3])
n_samples = train_X.shape[0]
print(n_samples)

# tf Graph input (placeholders)
X = tf.placeholder(tf.float32)
Y = tf.placeholder(tf.float32)

# Model parameters, randomly initialized
W = tf.Variable(rng.randn(), name="weight")
b = tf.Variable(rng.randn(), name="bias")

# Build the linear regression model: pred = W * X + b
pred = tf.add(tf.multiply(X, W), b)

# Use the mean squared error as the loss function (the 1/2 factor simplifies the gradient)
cost = tf.reduce_sum(tf.pow(pred-Y, 2))/(2*n_samples)

# Use gradient descent as the optimizer to fit the parameters
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

# Initialize the variables (i.e. assign their default value)
init = tf.global_variables_initializer()

Train the model, optimize the parameters, and print the results

# Start training
with tf.Session() as sess:

    # Run the variable initializer
    sess.run(init)

    # Fit the training data, feeding one sample at a time (stochastic updates)
    for epoch in range(training_epochs):
        for (x, y) in zip(train_X, train_Y):
            sess.run(optimizer, feed_dict={X: x, Y: y})

        # Every display_step epochs, print the current training loss and parameters
        if (epoch + 1) % display_step == 0:
            c = sess.run(cost, feed_dict={X: train_X, Y: train_Y})
            print("Epoch:", '%04d' % (epoch + 1), "cost=", "{:.9f}".format(c),
                  "W=", sess.run(W), "b=", sess.run(b))

    print("Optimization Finished!")
    training_cost = sess.run(cost, feed_dict={X: train_X, Y: train_Y})
    print("Training cost=", training_cost, "W=", sess.run(W), "b=", sess.run(b), '\n')

Test the fit

    # Plot the fitted line against the training data
    plt.plot(train_X, train_Y, 'ro', label='Original data')
    plt.plot(train_X, sess.run(W) * train_X + sess.run(b), label='Fitted line')
    plt.legend()
    plt.show()

    # Create test data
    test_X = np.asarray([6.83, 4.668, 8.9, 7.91, 5.7, 8.7, 3.1, 2.1])
    test_Y = np.asarray([1.84, 2.273, 3.2, 2.831, 2.92, 3.24, 1.35, 1.03])

    print("Testing... (Mean square loss Comparison)")
    testing_cost = sess.run(
        tf.reduce_sum(tf.pow(pred - Y, 2)) / (2 * test_X.shape[0]),
        feed_dict={X: test_X, Y: test_Y})  # same function as cost above
    print("Testing cost=", testing_cost)
    print("Absolute mean square loss difference:", abs(
        training_cost - testing_cost))

    plt.plot(test_X, test_Y, 'bo', label='Testing data')
    plt.plot(train_X, sess.run(W) * train_X + sess.run(b), label='Fitted line')
    plt.legend()
    plt.show()
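Because simple linear regression has a closed-form least-squares solution, the parameters learned by gradient descent can be sanity-checked against NumPy. A minimal sketch, pure NumPy and independent of the TensorFlow session:

# np.polyfit with degree 1 returns [slope, intercept]; both values
# should land close to the W and b that gradient descent converges to.
w_ls, b_ls = np.polyfit(train_X, train_Y, 1)
print("Closed-form W=", w_ls, "b=", b_ls)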

Results:

Epoch: 0050 cost= 0.078804322 W= 0.2260428 b= 0.97088903
Epoch: 0100 cost= 0.078596957 W= 0.22744364 b= 0.9608115
Epoch: 0150 cost= 0.078413419 W= 0.22876108 b= 0.9513346
Epoch: 0200 cost= 0.078250982 W= 0.22999991 b= 0.94242185
Epoch: 0250 cost= 0.078107201 W= 0.2311651 b= 0.93403995
Epoch: 0300 cost= 0.077979900 W= 0.23226096 b= 0.9261564
......
Epoch: 4100 cost= 0.076991096 W= 0.24943087 b= 0.80263716
Epoch: 4150 cost= 0.076991081 W= 0.24944021 b= 0.8025703
Epoch: 4200 cost= 0.076991044 W= 0.24944943 b= 0.80250335
Epoch: 4250 cost= 0.076991022 W= 0.24945791 b= 0.8024428
Epoch: 4300 cost= 0.076991007 W= 0.24946594 b= 0.8023854
Epoch: 4350 cost= 0.076990999 W= 0.24947393 b= 0.8023277
Epoch: 4400 cost= 0.076990969 W= 0.24948138 b= 0.8022737
Epoch: 4450 cost= 0.076990969 W= 0.24948774 b= 0.80222714
Epoch: 4500 cost= 0.076990947 W= 0.24949394 b= 0.80218345
Epoch: 4550 cost= 0.076990917 W= 0.24950035 b= 0.8021374
Epoch: 4600 cost= 0.076990925 W= 0.24950641 b= 0.80209374
Epoch: 4650 cost= 0.076990902 W= 0.2495118 b= 0.80205524
Epoch: 4700 cost= 0.076990910 W= 0.24951638 b= 0.8020225
Epoch: 4750 cost= 0.076990895 W= 0.24952067 b= 0.80199033
Epoch: 4800 cost= 0.076990880 W= 0.249525 b= 0.80196005
Epoch: 4850 cost= 0.076990880 W= 0.24952912 b= 0.80193025
Epoch: 4900 cost= 0.076990873 W= 0.2495326 b= 0.8019052
Epoch: 4950 cost= 0.076990873 W= 0.2495356 b= 0.8018836
Epoch: 5000 cost= 0.076990858 W= 0.24953851 b= 0.8018627
Optimization Finished!
Training cost= 0.07699086 W= 0.24953851 b= 0.8018627 

[Figure: training data (red points) with the fitted line]

Testing... (Mean square loss Comparison)
Testing cost= 0.07912248
Absolute mean square loss difference: 0.0021316186

[Figure: test data (blue points) with the fitted line]
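A note on versions: tf.placeholder, tf.Session, and tf.train.GradientDescentOptimizer belong to the TensorFlow 1.x graph API, so the code above requires TensorFlow 1.x (or tf.compat.v1 on 2.x). As a rough sketch of the same model in TF2-native style, assuming TensorFlow 2.x, the session and placeholders give way to eager execution and tf.GradientTape (here updating on the full batch rather than one sample at a time):

import tensorflow as tf
import numpy as np

# Same linear model, eager-style: plain variables plus a gradient tape.
W = tf.Variable(np.random.randn(), dtype=tf.float32)
b = tf.Variable(np.random.randn(), dtype=tf.float32)
opt = tf.optimizers.SGD(learning_rate=0.01)

def train_step(x, y):
    with tf.GradientTape() as tape:
        pred = W * x + b
        cost = tf.reduce_sum(tf.square(pred - y)) / (2 * x.shape[0])
    grads = tape.gradient(cost, [W, b])
    opt.apply_gradients(zip(grads, [W, b]))
    return cost

# Reuse train_X / train_Y from above as full-batch tensors.
x = tf.constant(train_X, dtype=tf.float32)
y = tf.constant(train_Y, dtype=tf.float32)
for epoch in range(5000):
    c = train_step(x, y)
print("W=", W.numpy(), "b=", b.numpy())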
