keras.callbacks.LearningRateScheduler(schedule, verbose=0)
Arguments
- schedule: a function that takes the epoch index as input (integer, indexed from 0) and returns the new learning rate as output (float).
- verbose: integer. 0: quiet, 1: print update messages.
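For instance, a simple step-decay schedule can be written as a plain function of the epoch index (a minimal sketch, independent of the example below; the function name, decay factor, and drop interval are arbitrary choices for illustration):

```python
def step_decay(epoch, initial_lr=0.1, drop=0.5, epochs_per_drop=10):
    """Halve the learning rate every `epochs_per_drop` epochs."""
    return initial_lr * (drop ** (epoch // epochs_per_drop))

# Epochs 0-9 use 0.1, epochs 10-19 use 0.05, epochs 20-29 use 0.025, ...
print(step_decay(0))
print(step_decay(10))
```

Any function with this signature can be passed directly to `LearningRateScheduler(step_decay)`.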
Example:
from keras.models import Sequential
import numpy as np
from keras.layers import Dense
import keras
import keras.backend as K
def scheduler(epoch):
    lr = 0.1
    if epoch > 12:
        lr *= 0.5e-3
    elif epoch > 10:
        lr *= 1e-3
    elif epoch > 8:
        lr *= 1e-2
    elif epoch > 4:
        lr *= 1e-1
    print('Learning rate: ', lr)
    return lr
model = Sequential()
model.add(Dense(1, input_shape=(1, )))
model.compile(loss='mse', optimizer=keras.optimizers.SGD(lr=0.1))
reduce_lr = keras.callbacks.LearningRateScheduler(scheduler)
x = np.linspace(-2, 2, 400)
y = 0.5 * x + 2 + np.random.normal(0, 0.05, (400, ))
X_train, Y_train = x[:300], y[:300]
X_test, Y_test = x[300:], y[300:]
model.fit(X_train, Y_train, batch_size=10, epochs=15, validation_data=(X_test, Y_test), verbose=0, callbacks=[reduce_lr])
The output is:
Learning rate: 0.1
Learning rate: 0.1
Learning rate: 0.1
Learning rate: 0.1
Learning rate: 0.1
Learning rate: 0.01
Learning rate: 0.01
Learning rate: 0.01
Learning rate: 0.01
Learning rate: 0.001
Learning rate: 0.001
Learning rate: 0.0001
Learning rate: 0.0001
Learning rate: 5e-05
Learning rate: 5e-05
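The printed values follow this pattern because the callback calls `schedule(epoch)` at the start of every epoch and assigns the returned value to the optimizer. That per-epoch sequence can be checked without Keras by evaluating the schedule directly (a minimal sketch of the mechanism, not the callback itself):

```python
def scheduler(epoch):
    lr = 0.1
    if epoch > 12:
        lr *= 0.5e-3
    elif epoch > 10:
        lr *= 1e-3
    elif epoch > 8:
        lr *= 1e-2
    elif epoch > 4:
        lr *= 1e-1
    return lr

# One call per epoch, just as the callback makes over 15 epochs:
# five epochs at 0.1, four at 0.01, then two each at 0.001, 0.0001, 5e-05.
lrs = [scheduler(epoch) for epoch in range(15)]
print(lrs)
```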
However, the scheduler function hard-codes the value of lr. If the learning rate in model.compile(loss='mse', optimizer=keras.optimizers.SGD(lr=0.1)) is changed, the value inside scheduler must be changed to match. It would be better if scheduler could read the current learning rate from the model itself, but scheduler only accepts a single epoch argument, so we can use a closure to give it access to the model.
The code is as follows:
from keras.models import Sequential
import numpy as np
from keras.layers import Dense
import keras
import keras.backend as K
def temp(model):
    def scheduler(epoch):
        lr = K.get_value(model.optimizer.lr)
        if epoch == 13:
            lr *= 0.5
        elif epoch == 11:
            lr *= 0.1
        elif epoch == 9:
            lr *= 0.1
        elif epoch == 5:
            lr *= 0.1
        print(lr)
        return lr
    return scheduler
model = Sequential()
model.add(Dense(1, input_shape=(1, )))
model.compile(loss='mse', optimizer=keras.optimizers.SGD(lr=0.1))
re_l = temp(model)
reduce_lr = keras.callbacks.LearningRateScheduler(re_l)
x = np.linspace(-2, 2, 400)
y = 0.5 * x + 2 + np.random.normal(0, 0.05, (400, ))
X_train, Y_train = x[:300], y[:300]
X_test, Y_test = x[300:], y[300:]
model.fit(X_train, Y_train, batch_size=10, epochs=15, validation_data=(X_test, Y_test), verbose=0, callbacks=[reduce_lr])
The output is the same as before.
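As an alternative to the closure, newer versions of Keras (and tf.keras) allow the schedule function to accept a second argument, the current learning rate, which the callback supplies automatically; check your installed version's documentation before relying on this. The multiplicative logic can then be written directly (a sketch; the loop below simulates the callback feeding the returned lr back in each epoch):

```python
def scheduler(epoch, lr):
    # lr is the optimizer's current learning rate, supplied by the callback.
    if epoch in (5, 9, 11):
        lr *= 0.1
    elif epoch == 13:
        lr *= 0.5
    return lr

# Simulate the callback over 15 epochs: each epoch's output becomes
# the next epoch's input, so the decays compound just as in the closure version.
lr = 0.1
for epoch in range(15):
    lr = scheduler(epoch, lr)
print(lr)  # final rate: 0.1 reduced three times by 10x and once by 2x
```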