In https://blog.csdn.net/ShenWeiKKX/article/details/101112503 we used TensorBoard to visualize the network structure. This post shows how to use TensorBoard to record data, typically values that change over the course of training, such as the loss and the accuracy.
import tensorflow as tf
import numpy as np
...
out = tf.layers.conv2d(x, filter_nums, filter_size, use_bias=False,
                       kernel_initializer=tf.truncated_normal_initializer(),
                       padding='same', activation=None, name='conv1')  # the op's name argument
...
with tf.name_scope('input'):  # name scope
    image_x = tf.placeholder(imagename_array.dtype, [None], name='image_x')         # the op's name argument
    label_y = tf.placeholder(label_agen_array.dtype, [None, 7, 7, 6], name='label_y')  # the op's name argument
    istraing = tf.placeholder(tf.bool, name='istraing')                             # the op's name argument
...
with tf.name_scope('loss'):
    ...
with tf.name_scope('total_loss'):
    total_loss = loss1 + loss2 + loss3
    tf.summary.scalar('total_loss_sca', total_loss)     # record total_loss as a scalar curve
    tf.summary.histogram('total_loss_his', total_loss)  # record total_loss as a histogram
...
# merge all summaries into a single op
merged = tf.summary.merge_all()
...
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    writer = tf.summary.FileWriter('logs/', sess.graph)  # 'logs/' is the output directory; passing sess.graph writes the graph itself, which requires no training
    for epochs in range(1):
        summary = sess.run(merged, {istraing: True})
        writer.add_summary(summary, epochs)  # epochs becomes the x-axis
...
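The snippets above can be condensed into one minimal, self-contained sketch. It is written against the TF1-style graph API via `tf.compat.v1` so it also runs under TensorFlow 2.x; the `x` placeholder, the `fake_loss` tensor, the decaying feed values, and the `logs_demo/` directory are all made up for illustration and stand in for a real model's loss and log path:

```python
import tensorflow as tf

# TF1-style graph API; under TF 2.x it lives in tf.compat.v1.
tf1 = tf.compat.v1
tf1.disable_eager_execution()

x = tf1.placeholder(tf.float32, name='x')
fake_loss = tf.square(x)  # hypothetical stand-in for a real training loss

tf1.summary.scalar('fake_loss_sca', fake_loss)     # scalar curve over steps
tf1.summary.histogram('fake_loss_his', fake_loss)  # histogram over steps
merged = tf1.summary.merge_all()                   # one op evaluating every summary

with tf1.Session() as sess:
    sess.run(tf1.global_variables_initializer())
    writer = tf1.summary.FileWriter('logs_demo/', sess.graph)  # graph written immediately
    for step in range(10):
        summary = sess.run(merged, {x: 1.0 / (step + 1)})  # pretend the loss decays
        writer.add_summary(summary, step)  # step is the x-axis in TensorBoard
    writer.close()  # flush the event file to disk
```

Closing the writer (or calling `writer.flush()`) ensures the event file is fully written before TensorBoard reads it.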
- Compared with the TensorBoard graph-visualization code in https://blog.csdn.net/ShenWeiKKX/article/details/101112503, the code here has a few extra lines; those extra lines are exactly what records the data.
- The remaining steps are the same as when visualizing the network structure with TensorBoard.
- Finally, open TensorBoard to view the scalar and histogram plots.
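Assuming the `tensorboard` command is installed alongside TensorFlow, the dashboard is launched from the directory that contains `logs/` and then viewed in a browser:

```shell
# Point TensorBoard at the summary directory, then open http://localhost:6006
tensorboard --logdir logs/
```

The Scalars tab shows the curves recorded with `tf.summary.scalar`, and the Histograms tab shows those recorded with `tf.summary.histogram`.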