Using a callback with Keras train_on_batch for TensorBoard visualization

train_on_batch

returns

Scalar training loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.

If the model has a single output and no metrics are specified, this returns the training loss for the batch; if the model has multiple outputs and/or metrics, it returns a list.
The attribute model.metrics_names gives the display labels for these scalar outputs (we will need them for visualization).
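
A minimal sketch of both cases, using a hypothetical toy model and random data (the exact metric label depends on the Keras version):

    import numpy as np
    from keras.models import Sequential
    from keras.layers import Dense

    x = np.random.rand(32, 10)
    y = np.random.rand(32, 1)
    model = Sequential([Dense(1, input_shape=(10,))])

    # Single output, no metrics: train_on_batch returns one scalar loss.
    model.compile(optimizer='adam', loss='mse')
    loss = model.train_on_batch(x, y)      # a single float

    # With metrics: it returns a list, ordered like model.metrics_names.
    model.compile(optimizer='adam', loss='mse', metrics=['mae'])
    logs = model.train_on_batch(x, y)      # e.g. [loss_value, mae_value]
    print(model.metrics_names)             # e.g. ['loss', 'mean_absolute_error']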

keras metrics

https://keras.io/metrics/
metrics[0] is the loss; if a metric specified in metrics is the same as the loss function, printing metrics_names shows the duplicated entry as well.
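A hedged sketch of that case, continuing the toy model above (the exact label strings vary between Keras versions):

    # Hypothetical: the metric duplicates the loss function.
    model.compile(optimizer='adam', loss='mse', metrics=['mse'])
    print(model.metrics_names)
    # -> something like ['loss', 'mean_squared_error']: index 0 is always the
    #    loss, and the duplicated metric still appears as its own entry.
    logs = model.train_on_batch(x, y)
    print(logs)   # aligned with metrics_names, so logs[0] is the loss value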

from keras.callbacks import TensorBoard

Selected parameters:

- log_dir: the path of the directory where to save the log files to be parsed by TensorBoard.
- histogram_freq: frequency (in epochs) at which to compute activation and weight histograms for the layers of the model. If set to 0, histograms won't be computed. Validation data (or split) must be specified for histogram visualizations.
- write_graph: whether to visualize the graph in TensorBoard. The log file can become quite large when write_graph is set to True.
- write_grads: whether to visualize gradient histograms in TensorBoard. histogram_freq must be greater than 0.
- batch_size: size of batch of inputs to feed to the network for histograms computation.
- write_images: whether to write model weights to visualize as images in TensorBoard.

use

    # histogram_freq=2 writes weight/activation histograms every 2 epochs; note
    # that histogram visualizations also require validation data (see above).
    board = TensorBoard(log_dir=tb_log, histogram_freq=2, batch_size=batch_size, write_images=True)
    board.set_model(model)  # attach the model by hand, since fit() is not used here

In practice

    import os

    def named_logs(metrics_names, logs):
        # Pair each label from model.metrics_names with the corresponding
        # scalar returned by train_on_batch, e.g. {'loss': 0.12, 'mae': 0.34}.
        result = {}
        for name, value in zip(metrics_names, logs):
            result[name] = value
        return result

    loss_file = open(os.path.join(workspace, "loss_file.txt"), 'w+')
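
Since fit() is not driving the training, the callback's hooks have to be invoked by hand. Below is a sketch of such a loop; num_epochs and batch_generator() are hypothetical placeholders, and it assumes the model was compiled with at least one metric so that train_on_batch returns a list:

    for epoch in range(num_epochs):
        for x_batch, y_batch in batch_generator():   # hypothetical data source
            logs = model.train_on_batch(x_batch, y_batch)

        # logs holds the scalars from the last batch of the epoch (average over
        # batches instead if you want smoother curves). Convert the list into a
        # dict keyed by metrics_names before handing it to the callback. With
        # histogram_freq > 0 the callback also expects validation data (see the
        # parameter notes above); use histogram_freq=0 for scalar curves only.
        board.on_epoch_end(epoch, named_logs(model.metrics_names, logs))
        loss_file.write("epoch %d: %s\n" % (epoch, str(logs)))
        loss_file.flush()

    board.on_train_end(None)
    loss_file.close()

The resulting curves can then be viewed by starting TensorBoard with --logdir pointing at the same tb_log directory that was passed to the callback.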