TensorFlow Learning Notes: Visualization with TensorBoard

0 Introduction

Among research students, PyTorch is widely held to produce more concise code than TensorFlow, but both are just tools: grasp the core ideas and they turn out much the same. Back to TensorBoard. A common way to judge how well an algorithm works, and how much room it has for optimization, is to watch the convergence of the loss; faced with floods of loss and accuracy numbers, plain text does not support an intuitive judgment of model quality, so it is well worth plotting loss, accuracy, and similar quantities. Fortunately, TensorFlow has long shipped with APIs for model visualization. This post walks through TensorBoard in two parts, the computation-graph view and the model-parameter view, with complete, annotated code for reference.

1 Visualizing the Computation Graph

1.1 Code

The following code uses a single-layer fully connected network for MNIST handwritten-digit classification as the running example:

"""
**基于CNN单层网络手写体分类实例,利用tensorboard进行可视化计算图
@author: <Colynn Johnson>
@direct: https://zhuanlan.zhihu.com/p/71328244
@date: 2020-08-27
"""
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets('MNIST_data', one_hot=True)

batch_size = 100
n_batch = mnist.train.num_examples // batch_size

with tf.name_scope('input'):
    x = tf.placeholder(dtype=tf.float32, shape=[None, 784], name='x_input')
    y = tf.placeholder(dtype=tf.float32, shape=[None, 10], name='y_input')

with tf.name_scope('layer'):
    with tf.name_scope('weights'):
        # tf.random_uniform() defaults to uniform random numbers in [0, 1).
        w = tf.Variable(tf.random_uniform([784, 10]), name='w')
    with tf.name_scope('biases'):
        b = tf.Variable(tf.zeros(shape=[10], dtype=tf.float32), name='b')
    with tf.name_scope('softmax'):
        # tf.nn.xw_plus_b(x, w, b) computes matmul(x, w) + b.
        logits = tf.nn.xw_plus_b(x, w, b)
        prediction = tf.nn.softmax(logits)
with tf.name_scope('Loss'):
    # tf.reduce_mean() computes the mean of elements across dimensions of a tensor.
    # Note: softmax_cross_entropy_with_logits expects raw logits, not softmax output.
    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=logits))
with tf.name_scope('train'):
    train_step = tf.train.GradientDescentOptimizer(0.01).minimize(loss)
with tf.name_scope('acc'):
    # tf.equal(x, y): element-wise comparison; True where elements match, False otherwise.
    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(prediction, 1))
    acc = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Write the graph definition to the log directory so TensorBoard can render it.
    writer = tf.summary.FileWriter('logdir/', sess.graph)
    for epoch in range(20):
        for batch in range(n_batch):
            batch_x, batch_y = mnist.train.next_batch(batch_size)
            _, accuracy = sess.run([train_step, acc], feed_dict={x: batch_x, y: batch_y})
            if batch % 50 == 0:
                print("### Epoch: {0}, batch: {1} acc on train: {2}".format(epoch, batch, accuracy))
                accuracy = sess.run(acc, feed_dict={x: mnist.test.images, y: mnist.test.labels})
                print("### Epoch {0}, acc on test: {1}".format(epoch, accuracy))

1.2 Output

[Screenshot: console output of the per-epoch training and test accuracy.]

1.3 Viewing the Computation Graph

  • Launch from a terminal: tensorboard --logdir=logdir
  • To avoid port conflicts, pick a custom port: tensorboard --logdir=logdir --port=6007 (a programmatic alternative is sketched below)

[Screenshot: terminal output after launching TensorBoard, showing the URL to open.]

Ctrl+click the URL printed in the terminal to jump to the TensorBoard page.
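If you would rather stay inside Python, the tensorboard package installed alongside TensorFlow also exposes a programmatic launcher. A minimal sketch, assuming a recent tensorboard version (the exact API can differ between releases):

from tensorboard import program

tb = program.TensorBoard()
tb.configure(argv=[None, '--logdir', 'logdir', '--port', '6007'])
url = tb.launch()  # e.g. 'http://localhost:6007/'
print(url)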

[Screenshot: the GRAPHS tab in TensorBoard showing the model's computation graph.]
In the graph, the gray rounded rectangles are the name scopes declared in the code with tf.name_scope(), and name scopes can be nested. The computation graph makes it easy to inspect the details of each operation, along with the shapes of the tensors and the direction of data flow.
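As a minimal sketch of how nesting affects node names (the scope and variable names below are invented for illustration), every op created inside nested scopes receives a hierarchical name, which is exactly what TensorBoard uses to group nodes into those rounded rectangles:

import tensorflow as tf

with tf.name_scope('outer'):
    with tf.name_scope('inner'):
        v = tf.Variable(tf.zeros([1]), name='v')

# The op name carries the full scope path.
print(v.op.name)  # -> outer/inner/v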

The line that enables the visualization: writer = tf.summary.FileWriter('logdir/', sess.graph)

After it runs, a file such as events.out.tfevents.1598534511 appears under the specified ./logdir/ directory.
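A practical habit worth adopting (my own suggestion, not part of the original code): write each run into its own timestamped subdirectory, so TensorBoard can display several runs side by side:

import os
import time
import tensorflow as tf

# Hypothetical layout: logdir/run-<unix timestamp>, one subdirectory per run.
run_dir = os.path.join('logdir', 'run-{}'.format(int(time.time())))
writer = tf.summary.FileWriter(run_dir, tf.get_default_graph())
writer.close()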

2 Visualizing Model Parameters

2.1 Complete Code

"""
**基于CNN单层网络手写体分类实例,利用tensorboard进行可视化计算图
@author: <Colynn Johnson>
@direct: https://zhuanlan.zhihu.com/p/71328244
@date: 2020-08-27
"""
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets('MNIST_data', one_hot=True)

batch_size = 100
n_batch = mnist.train.num_examples // batch_size

#================================== Parameter summary curves ========================================
def variable_summaries(var):
    """Attach scalar and histogram summaries to a variable for TensorBoard."""
    with tf.name_scope('summaries'):
        mean = tf.reduce_mean(var)
        # tf.summary.scalar(name, tensor): record a scalar value for the SCALARS tab.
        tf.summary.scalar('mean', mean)
        with tf.name_scope('stddev'):
            # Standard deviation of the variable.
            stddev = tf.sqrt(tf.reduce_mean(tf.square(var - mean)))
        tf.summary.scalar('stddev', stddev)
        tf.summary.scalar('max', tf.reduce_max(var))
        tf.summary.scalar('min', tf.reduce_min(var))
        # Emit a histogram Summary protocol buffer showing the value distribution.
        tf.summary.histogram('histogram', var)
#================================== Parameter summary curves ========================================

with tf.name_scope('input'):
    x = tf.placeholder(dtype=tf.float32, shape=[None, 784], name='x_input')
    y = tf.placeholder(dtype=tf.float32, shape=[None, 10], name='y_input')

with tf.name_scope('layer'):
    with tf.name_scope('weights'):
        # tf.random_uniform() defaults to uniform random numbers in [0, 1).
        W = tf.Variable(tf.random_uniform([784, 10]), name='w')
        variable_summaries(W)
    with tf.name_scope('biases'):
        b = tf.Variable(tf.zeros(shape=[10], dtype=tf.float32), name='b')
        variable_summaries(b)
    with tf.name_scope('softmax'):
        # tf.nn.xw_plus_b(x, W, b) computes matmul(x, W) + b.
        logits = tf.nn.xw_plus_b(x, W, b)
        prediction = tf.nn.softmax(logits)
with tf.name_scope('Loss'):
    # Note: softmax_cross_entropy_with_logits expects raw logits, not softmax output.
    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=logits))
    tf.summary.scalar('loss', loss)
with tf.name_scope('train'):
    train_step = tf.train.GradientDescentOptimizer(0.01).minimize(loss)
with tf.name_scope('acc'):
    # tf.equal(x, y): element-wise comparison; True where elements match, False otherwise.
    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(prediction, 1))
    acc = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    tf.summary.scalar('acc', acc)

# tf.summary.merge_all() bundles every summary op defined above into a single op, so one
# sess.run() call evaluates them all; for ordinary cases this one line is all you need.
merged = tf.summary.merge_all()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Write the graph definition to the log directory so TensorBoard can render it.
    writer = tf.summary.FileWriter('logdir/', sess.graph)
    for epoch in range(20):
        for batch in range(n_batch):
            batch_x, batch_y = mnist.train.next_batch(batch_size)
            _, summary, accuracy = sess.run([train_step, merged, acc], feed_dict={x: batch_x, y: batch_y})
            if batch % 50 == 0:
                print("### Epoch: {0}, batch: {1} acc on train: {2}".format(epoch, batch, accuracy))
                accuracy = sess.run(acc, feed_dict={x: mnist.test.images, y: mnist.test.labels})
                print("### Epoch {0}, acc on test: {1}".format(epoch, accuracy))
            writer.add_summary(summary, epoch * n_batch + batch)  # running batch count as the global step

2.2 Analyzing the Visualization

Visualizing the computation graph by itself is of limited use; what matters more is watching curves of quantities such as accuracy and loss evolve while the network trains, which makes it far easier to analyze the network.
[Screenshot: the SCALARS tab in TensorBoard showing the acc and loss curves together with the mean/stddev/max/min summaries of the weights and biases.]
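In the listing above, every summary is computed on the training batch. To compare training and test curves on one chart, a common trick (my own sketch, not part of the original code; the subdirectory names are invented) is one FileWriter per split: both write the same tags, so TensorBoard overlays the two curves. The fragment assumes the graph, merged, x, y, mnist, and sess from section 2.1 are already defined:

train_writer = tf.summary.FileWriter('logdir/train', sess.graph)
test_writer = tf.summary.FileWriter('logdir/test')

# Inside the training loop, with step = epoch * n_batch + batch:
summary = sess.run(merged, feed_dict={x: batch_x, y: batch_y})
train_writer.add_summary(summary, step)
summary = sess.run(merged, feed_dict={x: mnist.test.images, y: mnist.test.labels})
test_writer.add_summary(summary, step)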

The code has been verified to run correctly; feel free to study and reuse it.
