tf.train.noisy_linear_cosine_decay

This function is scheduled to land in TensorFlow 1.5.

noisy_linear_cosine_decay(
    learning_rate,
    global_step,
    decay_steps,
    initial_variance=1.0,
    variance_decay=0.55,
    num_periods=0.5,
    alpha=0.0,
    beta=0.001,
    name=None
)
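
A minimal usage sketch (assuming TensorFlow 1.5+ in graph mode): only learning_rate, global_step, and decay_steps are required, and the returned Tensor re-evaluates, noise included, every time it is run.

import tensorflow as tf

global_step = tf.Variable(0, trainable=False)
learning_rate = tf.train.noisy_linear_cosine_decay(0.001, global_step,
                                                   decay_steps=10000)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(learning_rate))          # noisy rate at step 0
    sess.run(global_step.assign_add(5000))
    print(sess.run(learning_rate))          # noisy rate at the midpoint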

 

The decayed learning rate is computed as follows:

global_step = min(global_step, decay_steps)
linear_decay = (decay_steps - global_step) / decay_steps
cosine_decay = 0.5 * (
    1 + cos(pi * 2 * num_periods * global_step / decay_steps))
decayed = (alpha + linear_decay + eps_t) * cosine_decay + beta
decayed_learning_rate = learning_rate * decayed
where eps_t is zero-centered Gaussian noise with variance initial_variance / (1 + global_step) ** variance_decay.
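
For intuition, here is a minimal NumPy sketch of the formula above; nlcd_preview is a hypothetical helper that re-implements the pseudocode for sanity-checking, not the actual TF op.

import numpy as np

def nlcd_preview(lr, step, decay_steps, initial_variance=1.0,
                 variance_decay=0.55, num_periods=0.5,
                 alpha=0.0, beta=0.001):
    # Follows the pseudocode above step by step.
    step = min(step, decay_steps)
    linear_decay = (decay_steps - step) / decay_steps
    cosine_decay = 0.5 * (1 + np.cos(np.pi * 2 * num_periods * step / decay_steps))
    variance = initial_variance / (1 + step) ** variance_decay
    eps_t = np.random.normal(0.0, np.sqrt(variance))  # zero-centered Gaussian noise
    return lr * ((alpha + linear_decay + eps_t) * cosine_decay + beta)

print(nlcd_preview(0.001, 5000, 10000))  # one noisy sample near the midpoint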

 

The following script uses TensorBoard to quickly preview the learning-rate curve. It implements a cliff-style restart: every 10000 steps the global step is reset to 0 and the base rate is multiplied by decay_rate, so the rate drops off sharply at each restart.

import tensorflow as tf
import os

folder_summary = "learning_rate/11"
if not os.path.exists(folder_summary):
    os.makedirs(folder_summary)

global_step = tf.Variable(0, trainable=False)
initial = tf.Variable(0.001, trainable=False)
decay_rate = 0.7

# initial_learning_rate = tf.placeholder(tf.float32)  # initial learning rate
learning_rate = tf.train.noisy_linear_cosine_decay(initial,
                                                   global_step,
                                                   decay_steps=10000,
                                                   initial_variance=0.1,
                                                   variance_decay=0.35,
                                                   num_periods=0.5,
                                                   alpha=0.0,
                                                   beta=0.1)
# num_periods: how many cosine cycles appear within decay_steps
# alpha: shifts the whole curve upward
# beta: the floor the rate converges to

opt = tf.train.AdamOptimizer(learning_rate, epsilon=0.0001)

_ = tf.summary.scalar('learning_rate', tensor=learning_rate, collections=['train'])

add_global = global_step.assign_add(1)
reset_global = global_step.assign(0)
initial_decay = initial.assign(initial * decay_rate)

with tf.Session() as sess:
    tf.global_variables_initializer().run()

    summaries_op = tf.summary.merge_all('train')
    summary_writer = tf.summary.FileWriter(folder_summary, sess.graph)

    count = 0
    for i in range(50000):
        g_step, rate, summary = sess.run([add_global, learning_rate, summaries_op])
        if g_step == 10000:
            # Restart the schedule and decay the base rate for the next cycle.
            sess.run([reset_global, initial_decay])
            count += 1
        summary_writer.add_summary(summary, g_step + count * 10000)

    print('done')
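
After the script finishes, launch TensorBoard with tensorboard --logdir learning_rate to inspect the curve. If you would rather not start TensorBoard, the schedule can also be previewed directly with matplotlib (a sketch, assuming matplotlib is installed):

import matplotlib.pyplot as plt
import tensorflow as tf

global_step = tf.Variable(0, trainable=False)
learning_rate = tf.train.noisy_linear_cosine_decay(0.001, global_step,
                                                   decay_steps=10000,
                                                   initial_variance=0.1,
                                                   variance_decay=0.35,
                                                   num_periods=0.5,
                                                   alpha=0.0,
                                                   beta=0.1)
add_global = global_step.assign_add(1)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    rates = [sess.run([learning_rate, add_global])[0] for _ in range(10000)]

plt.plot(rates)
plt.xlabel('step')
plt.ylabel('learning rate')
plt.show()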

 

If you comment out the reset logic and log against the raw global step instead, as below, you get a single decay curve: it runs once over decay_steps and then stays flat at learning_rate * beta.

        # if g_step == 10000:
        #     sess.run([reset_global, initial_decay])
        #     count += 1
        # summary_writer.add_summary(summary, g_step + count * 10000)
        summary_writer.add_summary(summary, g_step)

 

Other schedule shapes can be implemented along the same lines.
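
For example, the noise-free sibling tf.train.linear_cosine_decay produces the same linear-cosine envelope without the Gaussian perturbation; as a sketch, it is a drop-in replacement for the call in the script above:

import tensorflow as tf

global_step = tf.Variable(0, trainable=False)
initial = tf.Variable(0.001, trainable=False)

# Same envelope as noisy_linear_cosine_decay, minus the eps_t noise term.
learning_rate = tf.train.linear_cosine_decay(initial,
                                             global_step,
                                             decay_steps=10000,
                                             num_periods=0.5,
                                             alpha=0.0,
                                             beta=0.1)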
