train_on_batch
returns a scalar training loss (if the model has a single output and no metrics)
or a list of scalars (if the model has multiple outputs and/or metrics).
The attribute model.metrics_names will give you the display labels for the
scalar outputs.
If the model has a single output and no metrics are specified, the training loss
for this batch is returned; if the model has multiple outputs and/or metrics, a
list is returned. The attribute model.metrics_names gives the display labels for
the scalar outputs (useful for visualization).
keras metrics
https://keras.io/metrics/
metrics_names[0] is the loss; if a metric specified in metrics coincides with the loss function, the same value is printed under both labels.
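As a concrete illustration, the list returned by train_on_batch can be paired with model.metrics_names. A minimal sketch with hypothetical values (no Keras required; the numbers are made up):

```python
# Hypothetical values for a model compiled with metrics=['accuracy']:
# metrics_names would come from model.metrics_names, and logs from
# model.train_on_batch(x_batch, y_batch).
metrics_names = ['loss', 'acc']
logs = [0.35, 0.91]

# Pair each display label with its scalar value.
labeled = dict(zip(metrics_names, logs))
print(labeled)  # {'loss': 0.35, 'acc': 0.91}
```

This label-to-value pairing is exactly what the TensorBoard callback expects in its logs dictionary.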
from keras.callbacks import TensorBoard
Selected arguments:
log_dir: the path of the directory where to save the log
files to be parsed by TensorBoard.
histogram_freq: frequency (in epochs) at which to compute activation
and weight histograms for the layers of the model. If set to 0,
histograms won’t be computed. Validation data (or split) must be
specified for histogram visualizations.
write_graph: whether to visualize the graph in TensorBoard.
The log file can become quite large when
write_graph is set to True.
write_grads: whether to visualize gradient histograms in TensorBoard.
histogram_freq
must be greater than 0.
batch_size: size of batch of inputs to feed to the network
for histograms computation.
write_images: whether to write model weights to visualize as
image in TensorBoard.
Usage:
board = TensorBoard(log_dir=tb_log, histogram_freq=2, batch_size=batch_size, write_images=True)
board.set_model(model)
In practice
def named_logs(metrics_names, logs):
    # Pair each display label from model.metrics_names with the
    # corresponding scalar returned by train_on_batch.
    result = {}
    for name, value in zip(metrics_names, logs):
        result[name] = value
    return result
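The helper above ties train_on_batch output to the TensorBoard callback's on_epoch_end. A runnable sketch of that loop structure, where MockModel and MockBoard are hypothetical stand-ins for the compiled Keras model and the TensorBoard callback (so the example runs without Keras installed):

```python
def named_logs(metrics_names, logs):
    result = {}
    for name, value in zip(metrics_names, logs):
        result[name] = value
    return result

class MockModel:
    """Hypothetical stand-in for a compiled Keras model."""
    metrics_names = ['loss', 'acc']
    def train_on_batch(self, x, y):
        return [0.5, 0.8]  # fake scalars; a real model computes these

class MockBoard:
    """Hypothetical stand-in for the TensorBoard callback."""
    def __init__(self):
        self.history = []
    def on_epoch_end(self, epoch, logs=None):
        self.history.append((epoch, logs))

model, board = MockModel(), MockBoard()
for epoch in range(3):
    logs = model.train_on_batch(None, None)  # real x_batch, y_batch in practice
    board.on_epoch_end(epoch, named_logs(model.metrics_names, logs))

print(board.history[0])  # (0, {'loss': 0.5, 'acc': 0.8})
```

With the real objects, model.train_on_batch returns the scalar list and board.on_epoch_end writes the named values to the log directory for TensorBoard to display.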
import os

loss_file = open(os.path.join(workspace, "loss_file.txt"), 'w+')
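One way to use such a file handle is to record one line per training step. A minimal sketch, assuming workspace is an existing directory (here replaced by a temporary directory, and the "epoch,loss" line format is an assumption for illustration):

```python
import os
import tempfile

workspace = tempfile.mkdtemp()  # stand-in for the real workspace directory
loss_file = open(os.path.join(workspace, "loss_file.txt"), 'w+')

# Write one "epoch,loss" line per training step (hypothetical loss values).
for epoch, loss in enumerate([0.9, 0.5, 0.3]):
    loss_file.write("%d,%f\n" % (epoch, loss))
loss_file.flush()

# 'w+' opens for both writing and reading, so we can rewind and read back.
loss_file.seek(0)
lines = loss_file.read().splitlines()
loss_file.close()
print(lines[0])  # 0,0.900000
```

The resulting text file can then be parsed later to plot the loss curve outside of TensorBoard.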