TensorBoard
Visualize the network structure and runtime state by integrating TensorFlow's TensorBoard:
from keras.callbacks import TensorBoard

model.fit([encoder_input_data, decoder_input_data], decoder_target_data,
          batch_size=batch_size,
          epochs=epochs,
          validation_split=0.1,
          callbacks=[predictEpochCallback, TensorBoard(log_dir='./log')])
Run TensorBoard to inspect the run information recorded in the log directory:
tensorboard --logdir=./log
Custom Callback
from keras.callbacks import Callback

class PredictEpochCallback(Callback):
    def on_train_begin(self, logs={}):
        # Fires once at the start of training, not once per epoch.
        print("begin training")

    def on_epoch_end(self, epoch, logs={}):
        for seq_index in range(10):
            # Take one sequence (part of the training set)
            # for trying out decoding.
            input_seq = encoder_input_data[seq_index: seq_index + 1]
            print('input_seq.shape')
            print(input_seq.shape)
            decoded_sentence = decode_sequence(input_seq)
            print('epoch i=' + str(epoch) + ' input=' + input_texts[seq_index]
                  + ' decode output=' + decoded_sentence)
        # Save the whole model after each epoch; Keras sets self.model
        # on the callback before training starts.
        print('save s2s h5')
        self.model.save('s2s.h5.' + str(epoch))
        model_json = self.model.to_json()
        with open("model.json." + str(epoch), "w") as json_file:
            print('save s2s json')
            json_file.write(model_json)
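To make the hook mechanism concrete, here is a minimal framework-free sketch of how a training loop dispatches `on_train_begin` / `on_epoch_end` to registered callbacks. The `LoggingCallback` class and `run_training` driver are hypothetical illustrations, not Keras API:

```python
class LoggingCallback:
    """Records which hooks fired, mimicking Keras's Callback interface."""
    def __init__(self):
        self.events = []

    def on_train_begin(self, logs=None):
        self.events.append('train_begin')

    def on_epoch_end(self, epoch, logs=None):
        self.events.append('epoch_end:' + str(epoch))


def run_training(callbacks, epochs):
    """Hypothetical driver: invokes each hook the way a fit() loop would."""
    for cb in callbacks:
        cb.on_train_begin(logs={})
    for epoch in range(epochs):
        # ... one epoch of actual training would happen here ...
        for cb in callbacks:
            cb.on_epoch_end(epoch, logs={'loss': 1.0 / (epoch + 1)})


cb = LoggingCallback()
run_training([cb], epochs=3)
print(cb.events)  # → ['train_begin', 'epoch_end:0', 'epoch_end:1', 'epoch_end:2']
```

This is why `PredictEpochCallback` above only needs to override the hooks it cares about: the training loop calls each hook at the corresponding point in training.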
ReduceLROnPlateau
keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=10, verbose=0, mode='auto', epsilon=0.0001, cooldown=0, min_lr=0)
Reduces the learning rate when the monitored metric has stopped improving. Once learning stagnates, cutting the learning rate by a factor of 2 or 10 often helps. This callback monitors a quantity; if no improvement is seen for `patience` epochs, the learning rate is reduced.
Parameters:
monitor: the quantity to be monitored
factor: factor by which the learning rate is reduced each time, as lr = lr * factor
patience: number of epochs with no improvement after which the learning rate is reduced
mode: one of 'auto', 'min', 'max'. In min mode, the learning rate is reduced when the monitored quantity stops decreasing; in max mode, when it stops increasing.
epsilon: threshold used to decide whether the monitored quantity has entered a "plateau"
cooldown: number of epochs to wait after a reduction before resuming normal operation
min_lr: lower bound on the learning rate
Usage:
reduce_lr_callback = ReduceLROnPlateau(monitor='loss', factor=0.5,
                                       patience=1, min_lr=0.00001)
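The reduction rule described above can be simulated without Keras. The sketch below is a simplified reimplementation of the plateau logic (min mode, no cooldown), assuming a per-epoch loss sequence; it illustrates the rule, not the library's actual code:

```python
def reduce_lr_on_plateau(losses, lr=0.1, factor=0.5, patience=1,
                         min_lr=1e-5, epsilon=1e-4):
    """Simplified min-mode plateau rule: after more than `patience` epochs
    without the loss improving by at least `epsilon`, multiply lr by
    `factor`, never going below `min_lr`. Returns the lr used each epoch."""
    best = float('inf')
    wait = 0
    history = []
    for loss in losses:
        if loss < best - epsilon:   # improvement seen: reset the counter
            best = loss
            wait = 0
        else:
            wait += 1
            if wait > patience:     # plateau detected: reduce the lr
                lr = max(lr * factor, min_lr)
                wait = 0
        history.append(lr)
    return history


# Loss stalls after epoch 1, so the learning rate gets halved.
print(reduce_lr_on_plateau([1.0, 0.5, 0.5, 0.5, 0.5]))
# → [0.1, 0.1, 0.1, 0.05, 0.05]
```

With `patience=1` and `factor=0.5`, two consecutive epochs without improvement halve the learning rate, matching the `reduce_lr_callback` configuration above.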
Reference: http://keras-cn.readthedocs.io/en/latest/other/callbacks/