TensorBoard usage explained:
To display data in TensorBoard, you first summarize the various kinds of data while the TensorFlow computation graph executes and record them to log files. TensorBoard then reads these log files, parses the data, and generates a data-visualization web page, letting you inspect all the summary data in a browser.
The supported data types include scalars, images, audio, the computation graph, data distributions, histograms, and so on.
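For reference, the event files TensorBoard reads look like this on disk, assuming the logs/train and logs/test directories used in the test code below (the timestamp and hostname in the file names are hypothetical):
logs/
├── train/
│   └── events.out.tfevents.1525000000.my-host
└── test/
    └── events.out.tfevents.1525000000.my-host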
How to add the data you want to visualize to the event file:
a. Define summary operations:
tf.scalar_summary: adds scalars such as learning rate, loss, accuracy, etc.
tf.image_summary: adds input images fed into the graph
tf.histogram_summary: records distributions of activations, gradients, and weights
tf.audio_summary: adds audio data
For example:
tf.scalar_summary('tag_name', variable_to_record)
b. Define an op that merges all the summary operations:
merged = tf.merge_all_summaries()
c. Initialize a summary writer with the graph:
train_writer = tf.train.SummaryWriter(FLAGS.summaries_dir + '/train',sess.graph)
d. Write the summaries every n steps:
summary, acc = sess.run([merged, accuracy], feed_dict=feed_dict(False))
train_writer.add_summary(summary, i)
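Putting steps a–d together, a minimal training-loop skeleton looks like the following sketch (pre-1.0 API names as above; train_step, max_steps, accuracy, and feed_dict() are assumed to be defined elsewhere, as in the step c/d snippets):
tf.scalar_summary('accuracy', accuracy)  # step a: define summary ops
merged = tf.merge_all_summaries()  # step b: merge them into one op
train_writer = tf.train.SummaryWriter(FLAGS.summaries_dir + '/train', sess.graph)  # step c
for i in range(max_steps):
    sess.run(train_step, feed_dict=feed_dict(True))
    if i % 10 == 0:  # step d: write summaries every n steps
        summary, acc = sess.run([merged, accuracy], feed_dict=feed_dict(False))
        train_writer.add_summary(summary, i)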
Tested on TensorFlow 0.11 and 1.7; the API differences between the two versions are recorded below.
Reference: http://blog.csdn.NET/edwards_june/article/details/65652385
API changes:
tf.merge_all_summaries() becomes summary_op = tf.summary.merge_all()
tf.train.SummaryWriter becomes tf.summary.FileWriter
tf.scalar_summary becomes tf.summary.scalar  # scalars
tf.histogram_summary becomes tf.summary.histogram  # histograms
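So the step a–d one-liners from above, rewritten against the 1.x API (the tag strings are arbitrary):
tf.summary.scalar('loss', loss)  # was tf.scalar_summary
tf.summary.histogram('weights', Weights)  # was tf.histogram_summary
merged = tf.summary.merge_all()  # was tf.merge_all_summaries
train_writer = tf.summary.FileWriter(FLAGS.summaries_dir + '/train', sess.graph)  # was tf.train.SummaryWriter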
1. For tensors such as weights, use the tf.summary.histogram() method:
tf.summary.histogram(layer_name + "/weights", Weights)  # tag name, value
2. For scalar values such as the loss, use the tf.summary.scalar() method:
tf.summary.scalar('loss', loss)  # tag name, value
3. Finally, merge all the summaries:
merged = tf.summary.merge_all()
4. Choose a directory to store the logs for visualization:
writer = tf.summary.FileWriter("logs/", sess.graph)
5. Run it:
result = sess.run(merged)  # merged itself must be run
writer.add_summary(result, i)  # i is the current step number
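Note that sess.run(merged) works as written only when no summarized tensor depends on a placeholder. If the graph takes placeholders (as in the test code below), merged must be run with a feed_dict, for example:
result = sess.run(merged, feed_dict={xs: X_train, ys: y_train, keep_prob: 1})
writer.add_summary(result, i)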
Test code:
import tensorflow as tf  # tested on version 1.7
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed in newer sklearn
from sklearn.preprocessing import LabelBinarizer
# load data
digits = load_digits()
X = digits.data
y = digits.target
y = LabelBinarizer().fit_transform(y)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.3)
def add_layer(inputs, in_size, out_size, layer_name, activation_function=None):
    # add one more layer and return the output of this layer
    Weights = tf.Variable(tf.random_normal([in_size, out_size]))
    biases = tf.Variable(tf.zeros([1, out_size]) + 0.1)
    Wx_plus_b = tf.matmul(inputs, Weights) + biases
    # apply dropout before the activation
    Wx_plus_b = tf.nn.dropout(Wx_plus_b, keep_prob)
    if activation_function is None:
        outputs = Wx_plus_b
    else:
        outputs = activation_function(Wx_plus_b)
    tf.summary.histogram(layer_name + '/outputs', outputs)
    return outputs
# define placeholder for inputs to network
keep_prob = tf.placeholder(tf.float32)
xs = tf.placeholder(tf.float32, [None, 64]) # 8x8
ys = tf.placeholder(tf.float32, [None, 10])
# add a hidden layer and the output layer
l1 = add_layer(xs, 64, 50, 'l1', activation_function=tf.nn.tanh)
prediction = add_layer(l1, 50, 10, 'l2', activation_function=tf.nn.softmax)
# the loss between prediction and real data
cross_entropy = tf.reduce_mean(
    -tf.reduce_sum(ys * tf.log(prediction), reduction_indices=[1]))  # cross-entropy loss
tf.summary.scalar('loss', cross_entropy)
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
sess = tf.Session()
merged = tf.summary.merge_all()  # merge all the summaries
#summary writer goes in here
train_writer = tf.summary.FileWriter("logs/train", sess.graph)
test_writer = tf.summary.FileWriter("logs/test", sess.graph)
sess.run(tf.global_variables_initializer())  # tf.initialize_all_variables() is deprecated
for i in range(500):
    # keep_prob = 0.5: each unit is kept with 50% probability (the rest are dropped)
    sess.run(train_step, feed_dict={xs: X_train, ys: y_train, keep_prob: 0.5})
    if i % 50 == 0:
        # record the summaries; disable dropout (keep_prob = 1) when evaluating
        train_result = sess.run(merged, feed_dict={xs: X_train, ys: y_train, keep_prob: 1})
        test_result = sess.run(merged, feed_dict={xs: X_test, ys: y_test, keep_prob: 1})
        train_writer.add_summary(train_result, i)
        test_writer.add_summary(test_result, i)
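One small addition to the listing above: closing (or flushing) the writers at the end guarantees that any buffered events reach disk:
train_writer.close()  # flushes pending events and closes the file
test_writer.close()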
After running, event files are generated in the corresponding directories. Then start TensorBoard, pointing --logdir at the log directory (here logs):
(Windows users under Anaconda first need to activate the environment: $ activate TensorFlow)
$ tensorboard --logdir=logs
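By default TensorBoard serves the dashboard at http://localhost:6006; open that address in a browser to see the SCALARS, HISTOGRAMS, and GRAPHS tabs.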