Using the Keras LearningRateScheduler

keras.callbacks.LearningRateScheduler(schedule, verbose=0)

Arguments

  • schedule: a function that takes the epoch index as input (an integer, counting from 0) and returns a new learning rate as output (a float).
  • verbose: integer. 0: quiet, 1: print update messages.
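For instance, a minimal schedule implementing step decay could look like the sketch below; the starting rate, decay factor, and 10-epoch interval are illustrative choices, not from the original post:

import keras

def schedule(epoch):
    # Illustrative step decay: start at 0.1 and halve the rate every 10 epochs.
    return 0.1 * (0.5 ** (epoch // 10))

lr_callback = keras.callbacks.LearningRateScheduler(schedule, verbose=1)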

Example:

import numpy as np
import keras
from keras.models import Sequential
from keras.layers import Dense


def scheduler(epoch):
    # Piecewise-constant decay: start from 0.1 and shrink the rate
    # as training progresses.
    lr = 0.1
    if epoch > 12:
        lr *= 0.5e-3
    elif epoch > 10:
        lr *= 1e-3
    elif epoch > 8:
        lr *= 1e-2
    elif epoch > 4:
        lr *= 1e-1
    print('Learning rate: ', lr)
    return lr

model = Sequential()
model.add(Dense(1, input_shape=(1, )))
model.compile(loss='mse', optimizer=keras.optimizers.SGD(lr=0.1))
reduce_lr = keras.callbacks.LearningRateScheduler(scheduler)

# Toy regression data: y = 0.5 * x + 2 plus Gaussian noise.
x = np.linspace(-2, 2, 400)
y = 0.5 * x + 2 + np.random.normal(0, 0.05, (400, ))
X_train, Y_train = x[:300], y[:300]
X_test, Y_test = x[300:], y[300:]
model.fit(X_train, Y_train, batch_size=10, epochs=15,
          validation_data=(X_test, Y_test), verbose=0, callbacks=[reduce_lr])

The output is:

Learning rate:  0.1
Learning rate:  0.1
Learning rate:  0.1
Learning rate:  0.1
Learning rate:  0.1
Learning rate:  0.01
Learning rate:  0.01
Learning rate:  0.01
Learning rate:  0.01
Learning rate:  0.001
Learning rate:  0.001
Learning rate:  0.0001
Learning rate:  0.0001
Learning rate:  5e-05
Learning rate:  5e-05
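Internally, the callback works roughly like the simplified sketch below (modeled on the Keras 2.x behavior, not the actual library source): before each epoch it calls the schedule function and writes the returned value back into the optimizer. The class name SimpleLRScheduler is a hypothetical stand-in.

import keras.backend as K
from keras.callbacks import Callback

class SimpleLRScheduler(Callback):
    # Simplified stand-in for keras.callbacks.LearningRateScheduler.
    def __init__(self, schedule):
        super(SimpleLRScheduler, self).__init__()
        self.schedule = schedule

    def on_epoch_begin(self, epoch, logs=None):
        new_lr = self.schedule(epoch)                 # ask the schedule for this epoch's rate
        K.set_value(self.model.optimizer.lr, new_lr)  # push it into the optimizer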

However, the scheduler function hard-codes the value of lr. If the learning rate in model.compile(loss='mse', optimizer=keras.optimizers.SGD(lr=0.1)) is changed, the value inside scheduler must be changed to match. It would be better for scheduler to read the current learning rate from the model itself, but the schedule function receives only a single epoch argument, so we can use a closure to capture the model.

The code is as follows:

import numpy as np
import keras
import keras.backend as K
from keras.models import Sequential
from keras.layers import Dense


def temp(model):
    # Returns a schedule that closes over `model`, so it can read the
    # optimizer's current learning rate instead of hard-coding it.
    def scheduler(epoch):
        lr = K.get_value(model.optimizer.lr)
        if epoch == 13:
            lr *= 0.5
        elif epoch == 11:
            lr *= 0.1
        elif epoch == 9:
            lr *= 0.1
        elif epoch == 5:
            lr *= 0.1
        print(lr)
        return lr
    return scheduler

model = Sequential()
model.add(Dense(1, input_shape=(1, )))
model.compile(loss='mse', optimizer=keras.optimizers.SGD(lr=0.1))
re_l = temp(model)
reduce_lr = keras.callbacks.LearningRateScheduler(re_l)

x = np.linspace(-2, 2, 400)
y = 0.5 * x + 2 + np.random.normal(0, 0.05, (400, ))
X_train, Y_train = x[:300], y[:300]
X_test, Y_test = x[300:], y[300:]
model.fit(X_train, Y_train, batch_size=10, epochs=15,
          validation_data=(X_test, Y_test), verbose=0, callbacks=[reduce_lr])

The output is the same as before.
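Note: in newer versions of Keras (including tf.keras), LearningRateScheduler also passes the optimizer's current learning rate to the schedule function as a second argument, which removes the need for the closure entirely. A sketch, assuming a version that supports the two-argument form:

def scheduler(epoch, lr):
    # `lr` is the optimizer's current learning rate, supplied by the callback,
    # so the model no longer needs to be captured in a closure.
    if epoch in (5, 9, 11):
        return lr * 0.1
    if epoch == 13:
        return lr * 0.5
    return lr

reduce_lr = keras.callbacks.LearningRateScheduler(scheduler, verbose=1)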
