tf.keras.callbacks.TensorBoard(
    log_dir='logs', histogram_freq=0, write_graph=True,
    write_images=False, update_freq='epoch', profile_batch=2,
    embeddings_freq=0, embeddings_metadata=None, **kwargs
)
TensorBoard is a visualization tool provided with TensorFlow.
This callback logs events for TensorBoard, including:
- Metrics summary plots
- Training graph visualization
- Activation histograms
- Sampled profiling
Once TensorFlow has been installed with pip, TensorBoard can be launched from the command line:
tensorboard --logdir=path_to_your_logs
Arguments:
- log_dir: the path of the directory where the log files to be parsed by TensorBoard are saved.
- histogram_freq: defaults to 0. Frequency (in epochs) at which to compute activation and weight histograms for the layers of the model. If set to 0, histograms won't be computed. Validation data (or a validation split) must be specified for histogram visualizations.
- write_graph: defaults to True. Whether to visualize the graph in TensorBoard. The log file can become quite large when this is set to True.
- write_images: defaults to False. Whether to write model weights to visualize as images in TensorBoard.
- update_freq: 'batch' or 'epoch' or an integer. When using 'batch', writes the losses and metrics to TensorBoard after each batch; the same applies for 'epoch'. If using an integer, say 1000, the callback will write the metrics and losses to TensorBoard every 1000 batches. Note that writing to TensorBoard too frequently can slow down your training.
- profile_batch: defaults to 2. Which batch(es) to profile. profile_batch must be a non-negative integer or a tuple of integers; a pair of positive integers signifies a range of batches to profile. Set profile_batch=0 to disable profiling.
- embeddings_freq: frequency (in epochs) at which selected embedding layers will be saved. If set to 0, embeddings won't be visualized.
- embeddings_metadata: a dictionary that maps layer names to the file names in which the metadata for those embedding layers is saved. See the details about the metadata file format. In case the same metadata file is to be used for all embedding layers, a single string can be passed.
Raises:
- ValueError: If histogram_freq is set and no validation data has been provided.
Examples:
Basic usage:
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir="./logs")
model.fit(x_train, y_train, epochs=2, callbacks=[tensorboard_callback])
# Then run the tensorboard command to view the visualizations.
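The arguments above can be combined in a single configuration. Below is a self-contained sketch; the model, the synthetic data, and the argument values are illustrative choices, not recommendations:

```python
import numpy as np
import tensorflow as tf

# Tiny synthetic regression problem so the example runs on its own.
x_train = np.random.rand(64, 8).astype('float32')
y_train = np.random.rand(64, 1).astype('float32')

model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation='relu'),
    tf.keras.layers.Dense(1),
])
model.compile('sgd', 'mse')

tensorboard_callback = tf.keras.callbacks.TensorBoard(
    log_dir='./logs',
    histogram_freq=1,     # histograms every epoch; requires validation data
    write_graph=True,     # log the graph (can make log files large)
    write_images=False,   # don't log weights as images
    update_freq='epoch',  # write losses/metrics once per epoch
    profile_batch=0,      # disable profiling
)

# Since histogram_freq > 0, reserve validation data via validation_split.
model.fit(x_train, y_train, epochs=2, validation_split=0.25,
          verbose=0, callbacks=[tensorboard_callback])
```

Afterwards, point `tensorboard --logdir=./logs` at the same directory to inspect the run.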
Custom batch-level summaries in a subclassed Model:
class MyModel(tf.keras.Model):
  def build(self, _):
    self.dense = tf.keras.layers.Dense(10)

  def call(self, x):
    outputs = self.dense(x)
    tf.summary.histogram('outputs', outputs)
    return outputs
model = MyModel()
model.compile('sgd', 'mse')
# Make sure to set `update_freq=N` to log a batch-level summary every N batches.
# In addition to any `tf.summary` contained in `Model.call`, metrics added in
# `Model.compile` will be logged every N batches.
tb_callback = tf.keras.callbacks.TensorBoard('./logs', update_freq=1)
model.fit(x_train, y_train, callbacks=[tb_callback])
Custom batch-level summaries in a Functional API Model:
def my_summary(x):
  tf.summary.histogram('x', x)
  return x
inputs = tf.keras.Input(10)
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Lambda(my_summary)(x)
model = tf.keras.Model(inputs, outputs)
model.compile('sgd', 'mse')
# Make sure to set `update_freq=N` to log a batch-level summary every N batches.
# In addition to any `tf.summary` contained in `Model.call`, metrics added in
# `Model.compile` will be logged every N batches.
tb_callback = tf.keras.callbacks.TensorBoard('./logs', update_freq=1)
model.fit(x_train, y_train, callbacks=[tb_callback])
Profiling:
# Profile a single batch, e.g. the 5th batch.
tensorboard_callback = tf.keras.callbacks.TensorBoard(
    log_dir='./logs', profile_batch=5)
model.fit(x_train, y_train, epochs=2, callbacks=[tensorboard_callback])
# Profile a range of batches, e.g. from 10 to 20.
tensorboard_callback = tf.keras.callbacks.TensorBoard(
    log_dir='./logs', profile_batch=(10, 20))
model.fit(x_train, y_train, epochs=2, callbacks=[tensorboard_callback])