Learning rate (learning_rate): the step size of each parameter update.
Parameters are updated according to the following formula:
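w_new = w_old - learning_rate * d(loss)/d(w)

This is the standard gradient descent update that tf.train.GradientDescentOptimizer applies: the derivative of loss with respect to w is evaluated at the current value of w, scaled by the learning rate, and subtracted from w.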
# Set the loss function loss = (w+1)^2 with initial w = 5.
# Back-propagation finds the optimal w, i.e. the w that minimizes loss.
import tensorflow as tf
# Define w with initial value 5
w = tf.Variable(tf.constant(5, dtype=tf.float32))
# Define the loss function loss = (w+1)^2
loss = tf.square(w + 1)
# Define the back-propagation method: plain gradient descent with learning rate 0.2
train_step = tf.train.GradientDescentOptimizer(0.2).minimize(loss)
# Create a session and train for 40 iterations
with tf.Session() as sess:
    init_op = tf.global_variables_initializer()  # initialize all variables
    sess.run(init_op)
    for i in range(40):
        sess.run(train_step)
        w_val = sess.run(w)
        loss_val = sess.run(loss)
        print("After step %d: w is %f, loss is %f" % (i, w_val, loss_val))
Running the script, w converges toward -1 (the minimizer of loss) within the 40 iterations.
When the learning rate is set to 1, loss does not decrease: w jumps back and forth between 5 and -7 and never converges.
When the learning rate is set to 0.001, loss decreases slowly and w moves slowly toward -1, but 40 iterations are not enough to reach the minimum loss.
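Because the update rule w ← w - lr * 2(w+1) is plain arithmetic, the effect of all three learning rates can be reproduced without TensorFlow. A minimal pure-Python sketch (the helper name run_descent is ours, not from the original script):

# Simulate gradient descent on loss = (w+1)^2 for a given learning rate
def run_descent(lr, w=5.0, steps=40):
    for _ in range(steps):
        w -= lr * 2 * (w + 1)  # gradient of (w+1)^2 is 2(w+1)
    return w

print(run_descent(0.2))    # ~ -1.0: converges to the minimizer
print(run_descent(1.0))    # 5.0: w oscillates between 5 and -7, no convergence
print(run_descent(0.001))  # ~ 4.54: still far from -1 after 40 steps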
# Set the loss function loss = (w+1)^2 with initial w = 5.
# Back-propagation finds the optimal w, i.e. the w that minimizes loss.
# Use an exponentially decaying learning rate: large steps early in training
# give fast initial progress, so convergence is reached in fewer iterations,
# while the decayed rate keeps the later steps stable.
import tensorflow as tf
LEARNING_RATE_BASE = 0.1    # initial learning rate
LEARNING_RATE_DECAY = 0.99  # learning rate decay rate
LEARNING_RATE_STEP = 1      # batches fed before the rate is updated once;
                            # usually set to total_samples / BATCH_SIZE
# Counter of how many batches have been run; starts at 0, not trainable
global_step = tf.Variable(0, trainable=False)
# Define the exponentially decaying learning rate
learning_rate = tf.train.exponential_decay(LEARNING_RATE_BASE, global_step, LEARNING_RATE_STEP, LEARNING_RATE_DECAY, staircase=True)
# Define the parameter to optimize, with initial value 5
w = tf.Variable(tf.constant(5, dtype=tf.float32))
# Define the loss function
loss = tf.square(w + 1)
# Define the back-propagation method; passing global_step makes the
# optimizer increment it on every training step
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)
# Create a session and train for 40 iterations
with tf.Session() as sess:
    init_op = tf.global_variables_initializer()
    sess.run(init_op)
    for i in range(40):
        sess.run(train_step)
        learning_rate_val = sess.run(learning_rate)
        global_step_val = sess.run(global_step)
        w_val = sess.run(w)
        loss_val = sess.run(loss)
        print("After %d steps: global_step is %d, w is %f, learning rate is %f, loss is %f" % (i, global_step_val, w_val, learning_rate_val, loss_val))
Running the script, the printed learning rate shrinks by a factor of 0.99 after every step, so w takes large steps toward -1 early in training and progressively smaller ones later.
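The printed schedule follows TensorFlow's documented formula for tf.train.exponential_decay:

learning_rate = LEARNING_RATE_BASE * LEARNING_RATE_DECAY ^ (global_step / LEARNING_RATE_STEP)

With staircase=True the exponent is truncated to an integer, so the rate drops in discrete steps rather than continuously. Here LEARNING_RATE_STEP is 1, so after k training steps the rate is 0.1 * 0.99^k: 0.099 after one step, 0.09801 after two, about 0.09703 after three, and so on.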