TensorBoard Data Visualization

TensorBoard can display the various kinds of summary data collected while a model trains, including scalars (Scalars), images (Images), audio (Audio), the computation graph (Graphs), data distributions (Distributions), histograms (Histograms), and embedding vectors (Embeddings).
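Before walking through the full script, here is a minimal sketch of the core summary workflow (assuming TensorFlow 1.x; the log directory /tmp/demo_logs is a hypothetical example): define summary ops, merge them into a single op, evaluate it in a Session, and write the results with a FileWriter.

import tensorflow as tf

# Minimal summary-workflow sketch (hypothetical /tmp/demo_logs directory).
x = tf.placeholder(tf.float32, name='x')
tf.summary.scalar('x_value', x)        # record a scalar
tf.summary.histogram('x_hist', x)      # record a histogram
merged = tf.summary.merge_all()        # one op that runs every summary

with tf.Session() as sess:
  writer = tf.summary.FileWriter('/tmp/demo_logs', sess.graph)
  for step in range(5):
    summary = sess.run(merged, feed_dict={x: float(step)})
    writer.add_summary(summary, step)  # tag each summary with its step
  writer.close()

The full MNIST example below follows exactly this pattern at a larger scale.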

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import argparse
import os
import sys

import tensorflow as tf

from tensorflow.examples.tutorials.mnist import input_data

# Use input_data.read_data_sets to download the MNIST data, then create
# TensorFlow's default Session.

FLAGS = None


def train():
  # Import data
  mnist = input_data.read_data_sets(FLAGS.data_dir,
                                    fake_data=FLAGS.fake_data)

  sess = tf.InteractiveSession()

  """
  为了在tensorboard中展示节点名称,我们设计网络时会经常使用with tf.name_scope 限定命名空间,在这个with下的所有节点都会被自动命名为input/xxx这样的格式。下面定义输入x和y的placeholder,并将输入的一维数据变形为28x28的图片储存到另一个tensor,这样就可以使用tf.summary.image将图片数据汇总给tensorboard展示了。
  """
  # Create a multilayer model.

  # Input placeholders
  with tf.name_scope('input'):
    x = tf.placeholder(tf.float32, [None, 784], name='x-input')
    y_ = tf.placeholder(tf.int64, [None], name='y-input')

  with tf.name_scope('input_reshape'):
    image_shaped_input = tf.reshape(x, [-1, 28, 28, 1])
    tf.summary.image('input', image_shaped_input, 10)
    """
    同时,定义神经网络模型参数的初始化方法,权重依然使用我们常用的truncated_normal进行初始化,偏置则赋值为0.1
    """
  # We can't initialize these variables to 0 - the network will get stuck.
  def weight_variable(shape):
    """Create a weight variable with appropriate initialization."""
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)

  def bias_variable(shape):
    """Create a bias variable with appropriate initialization."""
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)
    """
    再定义对Variable变量的数据汇总函数,我们计算出Variable的mean、stddev、max和min,对这些标量数据使用tf.summary.scalar进行记录和汇总。同时,使用tf.summary.histogram直接记录变量var的直方图数据。
    """
  def variable_summaries(var):
    """Attach a lot of summaries to a Tensor (for TensorBoard visualization)."""
    with tf.name_scope('summaries'):
      mean = tf.reduce_mean(var)
      tf.summary.scalar('mean', mean)
      with tf.name_scope('stddev'):
        stddev = tf.sqrt(tf.reduce_mean(tf.square(var - mean)))
      tf.summary.scalar('stddev', stddev)
      tf.summary.scalar('max', tf.reduce_max(var))
      tf.summary.scalar('min', tf.reduce_min(var))
      tf.summary.histogram('histogram', var)
"""
设计的MLP多层神经网络来训练数据,在每一层中都会对模型参数进行数据汇总。因此,我们定义创建一层神经网络并进行数据汇总的函数nn_layer。这个函数的输入参数有输入数据input_tensor、输入的维度input_dim、输出的维度output_dim和层名layer_name,激活函数act则默认使用ReLU。在函数内,先是初始化这层神经网络的权重和偏置,并使用前面定义的variable_summaries对variable进行数据汇总。然后对输入做矩阵乘法并加偏置,再将未进行激活的结果使用tf.summary.histogram统计直方图。同时,在使用激活函数后,在使用tf.summary.histogram统计一次。
"""
  def nn_layer(input_tensor, input_dim, output_dim, layer_name, act=tf.nn.relu):
    """Reusable code for making a simple neural net layer.

    It does a matrix multiply, bias add, and then uses ReLU to nonlinearize.
    It also sets up name scoping so that the resultant graph is easy to read,
    and adds a number of summary ops.
    """
    # Adding a name scope ensures logical grouping of the layers in the graph.
    with tf.name_scope(layer_name):
      # This Variable will hold the state of the weights for the layer
      with tf.name_scope('weights'):
        weights = weight_variable([input_dim, output_dim])
        variable_summaries(weights)
      with tf.name_scope('biases'):
        biases = bias_variable([output_dim])
        variable_summaries(biases)
      with tf.name_scope('Wx_plus_b'):
        preactivate = tf.matmul(input_tensor, weights) + biases
        tf.summary.histogram('pre_activations', preactivate)
      activations = act(preactivate, name='activation')
      tf.summary.histogram('activations', activations)
      return activations

  hidden1 = nn_layer(x, 784, 500, 'layer1')

"""再创建一个Dropout层,并使用tf.summary.scalar记录keep_prob。"""
  with tf.name_scope('dropout'):
    keep_prob = tf.placeholder(tf.float32)
    tf.summary.scalar('dropout_keep_probability', keep_prob)
    dropped = tf.nn.dropout(hidden1, keep_prob)

  # Do not apply softmax activation yet, see below.
  y = nn_layer(dropped, 500, 10, 'layer2', act=tf.identity)
"""
这里使用tf.losses.sparse_softmax_cross_entropy()对前面输出层的结果进行Sotfmax处理并计算交叉熵损失cross_entropy,并使用tf.summary.scalar进行统计汇总。
"""
  with tf.name_scope('cross_entropy'):
    # The raw formulation of cross-entropy,
    #
    # tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(tf.softmax(y)),
    #                               reduction_indices=[1]))
    #
    # can be numerically unstable.
    #
    # So here we use tf.losses.sparse_softmax_cross_entropy on the
    # raw logit outputs of the nn_layer above, and then average across
    # the batch.
    with tf.name_scope('total'):
      cross_entropy = tf.losses.sparse_softmax_cross_entropy(
          labels=y_, logits=y)
  tf.summary.scalar('cross_entropy', cross_entropy)

"""
下面使用Adma优化器对损失进行优化,同时统计预测正确的样本数并计算准确率accuray,在使用tf.summary.scalar对accuracy进行统计汇总。
"""
  with tf.name_scope('train'):
    train_step = tf.train.AdamOptimizer(FLAGS.learning_rate).minimize(
        cross_entropy)

  with tf.name_scope('accuracy'):
    with tf.name_scope('correct_prediction'):
      correct_prediction = tf.equal(tf.argmax(y, 1), y_)
    with tf.name_scope('accuracy'):
      accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
  tf.summary.scalar('accuracy', accuracy)

"""
因为之前定义了非常多的tf.summary的汇总操作,逐一执行这些操作太麻烦,所以这里使用tf.summary.merger_all()直接获取所有汇总操作,以便后面执行。定义两个tf.summary.FileWriter(文件记录器)在不同的子目录,分别用来存放训练和测试的日志数据。同时,将Session的计算图sess.graph加入训练过程的记录器,这样在TensorBoard的GRAPHS窗口中就能展示整个计算图的可视化效果。最后使用tf.global_variables_initializer().run()初始化全部变量。
"""
  # Merge all the summaries and write them out to
  # /tmp/tensorflow/mnist/logs/mnist_with_summaries (by default)
  merged = tf.summary.merge_all()
  train_writer = tf.summary.FileWriter(FLAGS.log_dir + '/train', sess.graph)
  test_writer = tf.summary.FileWriter(FLAGS.log_dir + '/test')
  tf.global_variables_initializer().run()

  # Train the model, and also write summaries.
  # Every 10th step, measure test-set accuracy, and write test summaries
  # All other steps, run train_step on training data, & add training summaries

  def feed_dict(train):
    """Make a TensorFlow feed_dict: maps data onto Tensor placeholders."""
    if train or FLAGS.fake_data:
      xs, ys = mnist.train.next_batch(100, fake_data=FLAGS.fake_data)
      k = FLAGS.dropout
    else:
      xs, ys = mnist.test.images, mnist.test.labels
      k = 1.0
    return {x: xs, y_: ys, keep_prob: k}
"""
首先使用tf.train.Saver()创建模型的保存器,然后进入训练的循环中,每隔10步执行一次merged(数据汇总)、accuracy(求测试集上的预测准确率)操作,并使用test_write.add_sumamry将汇总结果summary和循环步数i写入日志文件;同时每隔100步,使用tf.RunOptions定义TensorFlow运行选项,其中设置trace_level为FULL_TRACE,并使用tf.RunMetadata()定义Tensorflow运行的元信息,这样可以记录训练时运算时间和内存占用等方面的信息。再执行merged数据汇总操作和train_step训练操作,将汇总结果summary和训练元信息run_metadata添加到train_writer。平时,则只执行merged操作和train_step操作,并添加summary到train_writer。所有训练全部结束后,关闭train_writer和test_writer。
"""
  saver = tf.train.Saver()
  for i in range(FLAGS.max_steps):
    if i % 10 == 0:  # Record summaries and test-set accuracy
      summary, acc = sess.run([merged, accuracy], feed_dict=feed_dict(False))
      test_writer.add_summary(summary, i)
      print('Accuracy at step %s: %s' % (i, acc))
    else:  # Record train set summaries, and train
      if i % 100 == 99:  # Record execution stats
        run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
        run_metadata = tf.RunMetadata()
        summary, _ = sess.run([merged, train_step],
                              feed_dict=feed_dict(True),
                              options=run_options,
                              run_metadata=run_metadata)
        train_writer.add_run_metadata(run_metadata, 'step%03d' % i)
        train_writer.add_summary(summary, i)
        saver.save(sess, FLAGS.log_dir + '/model.ckpt', i)
        print('Adding run metadata for', i)
      else:  # Record a summary
        summary, _ = sess.run([merged, train_step], feed_dict=feed_dict(True))
        train_writer.add_summary(summary, i)
  train_writer.close()
  test_writer.close()


def main(_):
  if tf.gfile.Exists(FLAGS.log_dir):
    tf.gfile.DeleteRecursively(FLAGS.log_dir)
  tf.gfile.MakeDirs(FLAGS.log_dir)
  train()


if __name__ == '__main__':
  parser = argparse.ArgumentParser()
  parser.add_argument('--fake_data', nargs='?', const=True, type=bool,
                      default=False,
                      help='If true, uses fake data for unit testing.')
  parser.add_argument('--max_steps', type=int, default=1000,
                      help='Number of steps to run trainer.')
  parser.add_argument('--learning_rate', type=float, default=0.001,
                      help='Initial learning rate')
  parser.add_argument('--dropout', type=float, default=0.9,
                      help='Keep probability for training dropout.')
  parser.add_argument(
      '--data_dir',
      type=str,
      default=os.path.join(os.getenv('TEST_TMPDIR', '/tmp'),
                           'tensorflow/mnist/input_data'),
      help='Directory for storing input data')
  parser.add_argument(
      '--log_dir',
      type=str,
      default=os.path.join(os.getenv('TEST_TMPDIR', '/tmp'),
                           'tensorflow/mnist/logs/mnist_with_summaries'),
      help='Summaries log directory')
  FLAGS, unparsed = parser.parse_known_args()
  tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)

Then switch to a Linux command line and launch the TensorBoard program, pointing --logdir at the TensorFlow log path; TensorBoard will automatically generate visualizations of all the summary data.

tensorboard --logdir=/tmp/tensorflow/mnist/logs/mnist_with_summaries
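
By default, TensorBoard serves its dashboards at http://localhost:6006. The checkpoints written by saver.save during training can also be restored later, for example to run inference. A minimal sketch, assuming the training graph (and its variables) has already been rebuilt in the current process, and using the default log directory above:

import tensorflow as tf

# Hypothetical restore snippet: the Saver needs the training graph's
# variables to exist in the current default graph before it is constructed.
saver = tf.train.Saver()
with tf.Session() as sess:
  ckpt = tf.train.latest_checkpoint(
      '/tmp/tensorflow/mnist/logs/mnist_with_summaries')
  saver.restore(sess, ckpt)  # load the weights from the latest checkpoint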