Environment: Python 3.7
TensorFlow 1.12
numpy 1.15.4
Example:
import tensorflow as tf
import numpy as np

# Generate 300 random points
x_data = np.random.rand(300)
y_data = x_data * 1.23 + 22

# Build a linear model
b = tf.Variable(0.)
k = tf.Variable(0.)
y = k * x_data + b

# Loss function (mean squared error)
loss = tf.reduce_mean(tf.square(y_data - y))
# Define a gradient descent optimizer for training
optimizer = tf.train.GradientDescentOptimizer(0.25)
# Minimize the loss
train = optimizer.minimize(loss)

# Initialize the variables
init = tf.global_variables_initializer()

# Start training
with tf.Session() as sess:
    sess.run(init)
    for step in range(501):
        sess.run(train)
        if step % 50 == 0:
            print(step, sess.run([k, b]))
Output:
0 [5.5488153, 11.299202]
50 [2.644671, 21.262291]
100 [1.4871498, 21.865904]
150 [1.2767426, 21.975624]
200 [1.2384955, 21.99557]
250 [1.231544, 21.999195]
300 [1.2302806, 21.999853]
350 [1.2300503, 21.999973]
400 [1.2300122, 21.999992]
450 [1.2300119, 21.999992]
500 [1.2300119, 21.999992]
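What GradientDescentOptimizer does here can be replicated in plain NumPy: compute the gradients of the mean-squared-error loss with respect to k and b, then step both parameters against the gradient with the same learning rate of 0.25. A minimal sketch (the random seed and variable names are my own, added for reproducibility):

```python
import numpy as np

# Same synthetic data as above (seed is an assumption, for reproducibility)
np.random.seed(0)
x = np.random.rand(300)
y_true = x * 1.23 + 22

k, b = 0.0, 0.0   # same initial values as the tf.Variable(0.) pair
lr = 0.25         # same learning rate as GradientDescentOptimizer(0.25)

for step in range(501):
    pred = k * x + b
    err = pred - y_true
    # Gradients of mean((pred - y_true)**2) w.r.t. k and b
    grad_k = 2 * np.mean(err * x)
    grad_b = 2 * np.mean(err)
    k -= lr * grad_k
    b -= lr * grad_b

print(k, b)  # converges toward k ≈ 1.23, b ≈ 22
```

Because the data is exactly linear with no noise, gradient descent drives k and b toward the true values 1.23 and 22, mirroring the TensorFlow output above.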