Visualizing the training process with TensorBoard (without tf.summary.merge_all)

Most of the TensorBoard tutorials I have come across use tf.summary.merge_all(), but I only want to track the training loss and the test-set accuracy, and the test accuracy is evaluated far less often than the loss, so merging everything into a single op is inconvenient. Instead, each tf.summary.scalar() op is defined and run separately, at its own frequency.
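Here is a minimal sketch of the pattern, assuming import tensorflow as tf and an active session sess; the names loss, accuracy, train_op, max_steps, train_feed and test_feed are placeholders, not identifiers from the full script below:

loss_sum = tf.summary.scalar('loss', loss)            # written at every training step
acc_sum = tf.summary.scalar('accuracy', accuracy)     # written only occasionally

writer = tf.summary.FileWriter('./logs', sess.graph)
for step in range(max_steps):
    _, s = sess.run([train_op, loss_sum], feed_dict=train_feed)
    writer.add_summary(s, step)
    if step % 100 == 0:                                # evaluate much less frequently
        s = sess.run(acc_sum, feed_dict=test_feed)
        writer.add_summary(s, step)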

The MLP implementation on MNIST is as follows:

# -*- coding: utf-8 -*-

import time
start = time.clock()

from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_DATA/", one_hot = True)
import tensorflow as tf  
tf.reset_default_graph() 

sess = tf.InteractiveSession()

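# Network sizes; the L2 penalties on W1 and W2 are accumulated in the 'losses' collection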
log_dir = './logs'
in_units = 784
h1_units = 300
W1 = tf.Variable(tf.truncated_normal(shape = [in_units, h1_units], mean = 0, stddev = 0.1))
tf.add_to_collection('losses', tf.contrib.layers.l2_regularizer(0.001)(W1))
b1 = tf.Variable(tf.zeros(shape = [h1_units]))
W2 = tf.Variable(tf.zeros(shape = [h1_units,10]))
tf.add_to_collection('losses', tf.contrib.layers.l2_regularizer(0.001)(W2))
b2 = tf.Variable(tf.zeros(shape = [10]))

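# Placeholders for the input images and the dropout keep probability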
x = tf.placeholder(dtype = tf.float32, shape = [None, in_units])
keep_prob = tf.placeholder(dtype = tf.float32)

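# Forward pass: one ReLU hidden layer with dropout, softmax output, cross-entropy loss
# (tf.nn.softmax_cross_entropy_with_logits would be more numerically stable than log(softmax))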
h1 = tf.nn.relu(tf.matmul(x,W1)+b1)
h1_drop = tf.nn.dropout(h1, keep_prob)
y = tf.nn.softmax(tf.matmul(h1_drop, W2)+b2)
y_ = tf.placeholder(tf.float32, [None, 10])
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
tf.add_to_collection('losses', cross_entropy)

total_loss = tf.add_n(tf.get_collection('losses'))
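# Each scalar summary is kept as its own op so it can be run and written independently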
cross_entropy_sum = tf.summary.scalar('cross_entropy', cross_entropy)
#cross_entropy_sum_merge = tf.summary.merge(cross_entropy_sum)
#cross_entropy = (-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[0,1]))

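# Adagrad minimizes the cross-entropy plus the L2 regularization terms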
train_step = tf.train.AdagradOptimizer(0.2).minimize(total_loss)


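# Test-set accuracy with its own summary op, evaluated far less often than the loss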
if_correct = tf.equal(tf.argmax(y,1), tf.argmax(y_, 1))

acc = tf.reduce_mean(tf.cast(if_correct, tf.float32))
acc_sum = tf.summary.scalar('acc', acc)
#acc_sum_merge = tf.summary.merge(acc_sum)


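# A single FileWriter receives both summaries; they simply arrive at different frequencies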
train_writer = tf.summary.FileWriter(log_dir + '/train', sess.graph)
saver = tf.train.Saver()
#merged = tf.summary.merge_all()


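# Training loop: write the loss summary at every step, the accuracy summary every 100 steps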
tf.global_variables_initializer().run()
for i in range(20000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    _, sum1 = sess.run([train_step, cross_entropy_sum], feed_dict={x: batch_xs, y_: batch_ys, keep_prob: 0.75})
    train_writer.add_summary(sum1, i)

    
    if i % 100 == 1:
        sum2 = sess.run(acc_sum, feed_dict={x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0})
        train_writer.add_summary(sum2, i)
#        saver.save(sess, log_dir + '/model.ckpt', i)
           
train_writer.close()


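# Final test accuracy and total running time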
print(acc.eval({x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))

end = time.clock()
print('Running time: %s Seconds'%(end-start))
#
#$ tensorboard --logdir=/home/cheng/AnacondaProjects/learn_tf/tensorboard/logs/train
# then open localhost:6006 in a browser
The output: TensorBoard shows the cross_entropy curve (one point per training step) and the acc curve (one point every 100 steps); the final test accuracy and the running time are printed to the console.
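As an aside regarding the commented-out merge lines above: if you do want to group a few summaries that are always written together, tf.summary.merge takes a list of summary ops (unlike tf.summary.merge_all, which picks up every summary in the graph). A minimal sketch, reusing the cross_entropy_sum and acc_sum ops defined in the script:

# Group only the summaries that are run together; each merged op is still run independently.
train_merged = tf.summary.merge([cross_entropy_sum])   # could hold several training-time summaries
test_merged = tf.summary.merge([acc_sum])               # could hold several test-time summaries
# sess.run(train_merged, ...) / sess.run(test_merged, ...), then add_summary as before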

