【TensorFlow】Using TensorBoard (Part 4)

In 《【TensorFlow】Using TensorBoard (Part 3)》 we covered how to add more information to TensorBoard's displays. This article shows how to let TensorBoard help us choose hyperparameters.

The main additions are the following hyperparameter sweeps:

  1. Different learning rates: for learning_rate in [1E-4, 1E-3, 1E-2]:
  2. One or two fully connected layers: for use_two_fc in [True, False]:
  3. One or two convolutional layers: for use_two_conv in [True, False]:
  4. Different iteration counts: for iter_num in [1000, 2000, 5000]:

Taking all combinations of these parameters gives 3*2*2*3 = 36 training runs, and TensorBoard can then help us compare the 36 different parameter settings; a quick sketch enumerating the grid follows.
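As a minimal standalone sketch (not part of the training script below), the same grid of runs can be enumerated with itertools.product:

import itertools

# The product of the four hyperparameter lists above has 3*2*2*3 = 36 entries
combos = list(itertools.product([1E-4, 1E-3, 1E-2], [True, False],
                                [True, False], [1000, 2000, 5000]))
print(len(combos))  # prints 36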

This article builds on the example from 《【TensorFlow】Using TensorBoard (Part 3)》 and extends its use of TensorBoard with the sweeps above.

Example

The code implementing the four sweeps above is as follows:

mnist_board_4.py:

import os
import tensorflow as tf

LOGDIR = './mnist'

mnist = tf.contrib.learn.datasets.mnist.read_data_sets(train_dir=LOGDIR + 'data', one_hot=True)


# Give each layer a name so it is easy to find in TensorBoard
def conv_layer(input, size_in, size_out, name='conv'):
    # Define a name scope to group this layer's ops in the graph view
    with tf.name_scope(name):
        w = tf.Variable(tf.truncated_normal([5, 5, size_in, size_out], stddev=0.1), name='W')
        b = tf.Variable(tf.constant(0.1, shape=[size_out]), name='B')
        conv = tf.nn.conv2d(input, w, strides=[1, 1, 1, 1], padding='SAME')
        act = tf.nn.relu(conv + b)
        # Distributions: track how weights, biases and activations evolve during training
        tf.summary.histogram('weights', w)
        tf.summary.histogram('biases', b)
        tf.summary.histogram('activations', act)

        # 2x2 max pooling halves the spatial dimensions
        return tf.nn.max_pool(act, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')


def fc_layer(input, size_in, size_out, name='fc'):
    with tf.name_scope(name):
        w = tf.Variable(tf.truncated_normal([size_in, size_out], stddev=0.1), name='W')
        b = tf.Variable(tf.constant(0.1, shape=[size_out]), name='B')
        act = tf.nn.relu(tf.matmul(input, w) + b)
        tf.summary.histogram('weights', w)
        tf.summary.histogram('biases', b)
        tf.summary.histogram('activations', act)

        return act


def mnist_model(learning_rate, use_two_fc, use_two_conv, iter_num, hparam):
    tf.reset_default_graph()
    sess = tf.Session()

    # setup placeholders, and reshape the data
    x = tf.placeholder(tf.float32, shape=[None, 784], name='x')
    x_image = tf.reshape(x, [-1, 28, 28, 1])
    # Show the current input: log up to 3 sample images per batch
    tf.summary.image('input', x_image, 3)

    y = tf.placeholder(tf.float32, shape=[None, 10], name='labels')

    if use_two_conv:
        conv1 = conv_layer(x_image, 1, 32, 'conv1')
        conv_out = conv_layer(conv1, 32, 64, 'conv2')

    else:
        conv1 = conv_layer(x_image, 1, 64, 'conv')
        conv_out = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')

    # After two rounds of 2x2 pooling, the 28x28 input is 7x7 with 64 channels
    flattened = tf.reshape(conv_out, [-1, 7 * 7 * 64])

    if use_two_fc:
        fc1 = fc_layer(flattened, 7 * 7 * 64, 1024, 'fc1')
        embedding_input = fc1
        embedding_size = 1024
        logits = fc_layer(fc1, 1024, 10, 'fc2')

    else:
        embedding_input = flattened
        embedding_size = 7 * 7 * 64
        logits = fc_layer(flattened, 7 * 7 * 64, 10, 'fc')

    with tf.name_scope('loss'):
        xent = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y), name='loss')
        # Scalar metric: how the loss changes as training proceeds
        tf.summary.scalar('loss', xent)

    with tf.name_scope('train'):
        train_step = tf.train.AdamOptimizer(learning_rate).minimize(xent)

    with tf.name_scope('accuracy'):
        correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
        accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
        # Scalar metric: how accuracy changes as training proceeds
        tf.summary.scalar('accuracy', accuracy)

    # Merge all the summaries defined above into a single op
    summ = tf.summary.merge_all()

    # Variables intended for TensorBoard's embedding projector; note that this
    # script never runs `assignment` or calls `saver.save`, so the projector
    # is not actually populated here
    embedding = tf.Variable(tf.zeros([1024, embedding_size]))
    assignment = embedding.assign(embedding_input)
    saver = tf.train.Saver()

    sess.run(tf.global_variables_initializer())
    # Directory under which each run's logs are saved
    tenboard_dir = './tensorboard/test4/'

    # Create a FileWriter to save the graph and summaries for this run
    writer = tf.summary.FileWriter(tenboard_dir + hparam)
    # Add the graph so it appears in TensorBoard's Graphs tab
    writer.add_graph(sess.graph)

    for i in range(iter_num):
        batch = mnist.train.next_batch(100)
        # Record the merged summaries every 5 iterations
        if i % 5 == 0:
            [train_accuracy, s] = sess.run([accuracy, summ], feed_dict={x: batch[0], y: batch[1]})
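            # train_accuracy is measured on the current training batch; only
            # the merged summary s is actually written to the event file below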
            writer.add_summary(s, i)
        sess.run(train_step, feed_dict={x: batch[0], y: batch[1]})


def make_hparam_string(learning_rate, use_two_fc, use_two_conv, iter_num):
    conv_param = 'conv=2' if use_two_conv else 'conv=1'
    fc_param = 'fc=2' if use_two_fc else 'fc=1'
    return 'lr_%.0E,%s,%s,%d' % (learning_rate, conv_param, fc_param, iter_num)
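
# Illustrative note (not in the original script): with the format above,
# make_hparam_string(1E-3, True, True, 1000) returns
# 'lr_1E-03,conv=2,fc=2,1000', which becomes the run's subdirectory name
# under ./tensorboard/test4/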


def main():
    # You can try adding some more learning rates
    # Observe how each parameter affects the result
    for learning_rate in [1E-4, 1E-3, 1E-2]:

        # Include 'False' as a value to try different model architectures.
        for use_two_fc in [True, False]:
            for use_two_conv in [True, False]:
                for iter_num in [1000, 2000, 5000]:
                    # Construct a hyperparameter string for each run (example: 'lr_1E-03,conv=2,fc=2,1000')
                    hparam = make_hparam_string(learning_rate, use_two_fc, use_two_conv, iter_num)
                    print('Starting run for %s' % hparam)

                    # Actually run with the new settings
                    mnist_model(learning_rate, use_two_fc, use_two_conv, iter_num, hparam)


if __name__ == '__main__':
    main()

tf.reset_default_graph() clears the graph built up by the previous run, so each of the 36 runs starts from a fresh graph. For details, see: https://blog.csdn.net/duanlianvip/article/details/98626111
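A minimal sketch of why this matters (a standalone snippet, assuming TensorFlow 1.x): without the reset, every call to mnist_model would keep adding nodes to the same default graph, so later runs would accumulate duplicate variables and summaries.

import tensorflow as tf

for i in range(2):
    tf.reset_default_graph()  # comment this out and the op count below keeps growing
    x = tf.placeholder(tf.float32, shape=[None, 784], name='x')
    # With the reset, each iteration starts from an empty graph
    print(len(tf.get_default_graph().get_operations()))  # prints 1 both times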

Results

On a ThinkPad X270 laptop with no dedicated GPU, it took roughly 9 hours to finish all 36 parameter combinations.
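To view the results, point TensorBoard at the log directory used in the code (this path assumes you launch it from the same working directory as the training script):

tensorboard --logdir=./tensorboard/test4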

Running TensorBoard then displays the following:

By checking different parameter combinations in the run list on the left of the figure above and comparing the curves on the right, you can pick the best-performing settings.
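One handy detail (a standard TensorBoard feature, given the run-naming scheme above): the run list accepts a regex filter, so the 36 runs can be narrowed to a subset, for example:

lr_1E-03.*conv=2

which matches only the runs trained with learning rate 1E-3 and two convolutional layers.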

When using TensorBoard for hyperparameter selection, the accuracy and loss curves are the essential ones to display; set up several groups of parameters and compare them against each other.
