TensorBoard Visualization Experiments

Experiment 1: Visualizing the Graph

Make some small modifications to the mnist_deep.py code so that TensorBoard can display the overall computation graph of the CNN.

def add_layer(inputs, in_size, out_size, activation_function=None):
    # Add one more layer and return its output.
    # Each name_scope groups the ops into a collapsible node in the Graph tab.
    with tf.name_scope('layer'):
        with tf.name_scope('weights'):
            Weights = weight_variable([in_size, out_size])
        with tf.name_scope('biases'):
            biases = bias_variable([1, out_size])
        with tf.name_scope('Wx_plus_b'):
            Wx_plus_b = tf.add(tf.matmul(inputs, Weights), biases)
    if activation_function is None:
        outputs = Wx_plus_b
    else:
        outputs = activation_function(Wx_plus_b)
    return outputs

# Loss function: cross-entropy
with tf.name_scope('loss'):
    cross_entropy = tf.reduce_mean(
        -tf.reduce_sum(y_ * tf.log(prediction), reduction_indices=[1]))

# Training step: gradient descent
with tf.name_scope('train'):
    train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)
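Adding the name scopes alone is not enough to see anything: the session's graph must be written to a log directory with a tf.summary.FileWriter (the full call appears in the Experiment 2 code below), and TensorBoard must be pointed at that directory. A minimal sketch, assuming the writer used 'logs/':

```shell
# In the Python script, after creating the session:
#   writer = tf.summary.FileWriter('logs/', sess.graph)
# Then, from the shell, point TensorBoard at the same log directory:
tensorboard --logdir=logs
# Open http://localhost:6006 in a browser and switch to the Graphs tab.
```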


Experiment 2: Visualizing Scalars


# Loss function: cross-entropy, now also logged as a scalar summary
with tf.name_scope('loss'):
    cross_entropy = tf.reduce_mean(
        -tf.reduce_sum(y_ * tf.log(prediction), reduction_indices=[1]))
    tf.summary.scalar('loss', cross_entropy)

# Training step: gradient descent
with tf.name_scope('train'):
    train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)

init = tf.global_variables_initializer()
sess = tf.Session()
merged = tf.summary.merge_all()   # merge all summary ops into a single op
sess.run(init)
writer = tf.summary.FileWriter('logs/', sess.graph)

for i in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(50)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys, keep_prob: 0.5})
    if i % 50 == 0:
        # Evaluate the merged summaries and write them to the event file.
        result = sess.run(merged, feed_dict={x: batch_xs, y_: batch_ys, keep_prob: 0.5})
        writer.add_summary(result, i)
        print("step %d, training accuracy %g" % (i, compute_accuracy(
            mnist.train.images, mnist.train.labels)))
print("testing accuracy %g" % compute_accuracy(
    mnist.test.images, mnist.test.labels))

This produces the loss curve in TensorBoard's Scalars tab. (Figure not shown.)

Experiment 3: Visualizing Histograms


with tf.name_scope('weights'):
    # Weights = tf.Variable(tf.random_normal([in_size, out_size]), name='W')
    Weights = weight_variable([in_size, out_size])
    tf.summary.histogram('weights', Weights)
with tf.name_scope('biases'):
    # biases = tf.Variable(tf.constant(0.1, [1, out_size]), name='b')
    biases = bias_variable([1, out_size])
    tf.summary.histogram('biases', biases)

if activation_function is None:
    outputs = Wx_plus_b
else:
    outputs = activation_function(Wx_plus_b)
tf.summary.histogram('outputs', outputs)

The figures below show the weights, biases, and outputs of the fully connected layer and the softmax layer. (Figures not shown.)


Experiment 4: Changing the Distribution of W

1. Truncated normal distribution

tf.truncated_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None)


def weight_variable(shape):
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)
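For intuition: tf.truncated_normal drops and re-draws any sample that falls more than two standard deviations from the mean, so every weight starts inside [mean − 2·stddev, mean + 2·stddev]. A minimal NumPy re-implementation of that sampling rule (illustrative only; the real op is implemented natively):

```python
import numpy as np

def truncated_normal(shape, mean=0.0, stddev=1.0, rng=None):
    """Sample a normal distribution, re-drawing values that land more
    than 2 standard deviations from the mean (tf.truncated_normal's rule)."""
    rng = rng or np.random.default_rng(0)
    out = np.empty(int(np.prod(shape)))
    filled = 0
    while filled < out.size:
        draw = rng.normal(mean, stddev, out.size - filled)
        keep = draw[np.abs(draw - mean) <= 2 * stddev]  # reject the tails
        out[filled:filled + keep.size] = keep
        filled += keep.size
    return out.reshape(shape)

w = truncated_normal((784, 10), stddev=0.1)
```

With stddev=0.1, every initial weight therefore lies in [-0.2, 0.2], which avoids the occasional extreme starting values a plain normal draw would produce.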

With a maximum of 10,000 steps, the run produces the following results. (Figure not shown.)

2. Random normal distribution

tf.random_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None)


def weight_variable(shape):
    initial = tf.random_normal(shape, stddev=0.1)
    return tf.Variable(initial)

3. Uniform distribution

tf.random_uniform(shape, minval=0.0, maxval=1.0, dtype=tf.float32, seed=None, name=None)

Verified experimentally: this approach does not work. A likely reason is that with the defaults minval=0.0, maxval=1.0, every weight starts positive and far larger in magnitude than under the other initializers, so training fails to converge.
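A small NumPy sketch of the difference between the default range and a zero-centered one (the range ±0.1 here is illustrative, chosen to match the stddev=0.1 used above; it is not from the original experiment):

```python
import numpy as np

rng = np.random.default_rng(0)

# Default tf.random_uniform range [0, 1): every weight is positive, mean ~0.5.
w_bad = rng.uniform(0.0, 1.0, size=(784, 32))

# Symmetric small range [-0.1, 0.1): zero-centered, comparable in scale
# to the truncated-normal initializer used in the experiments above.
w_ok = rng.uniform(-0.1, 0.1, size=(784, 32))
```

If a uniform initializer is wanted, passing a symmetric minval/maxval pair (e.g. tf.random_uniform(shape, minval=-0.1, maxval=0.1)) would be the natural variant to try.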

Experiment 5: Choosing the Optimization Algorithm

1. tf.train.GradientDescentOptimizer


train_step=tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)

2. tf.train.AdadeltaOptimizer


train_step=tf.train.AdadeltaOptimizer(0.001).minimize(cross_entropy)
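The practical difference between the two optimizers is the update rule they apply to each variable. A toy comparison on f(x) = (x − 3)², with plain gradient descent at the same learning rate 0.01 and the Adadelta rule from Zeiler (2012); rho, eps, the step counts, and the Adadelta learning rate here are illustrative, not taken from the original experiment:

```python
import math

def f(x):
    return (x - 3.0) ** 2        # toy convex objective, minimum at x = 3

def grad(x):
    return 2.0 * (x - 3.0)

# 1) Plain gradient descent, as in tf.train.GradientDescentOptimizer(0.01):
#    x <- x - lr * grad
x = 0.0
for _ in range(1000):
    x -= 0.01 * grad(x)
gd_loss = f(x)

# 2) Adadelta update rule. rho and eps are the usual defaults; TF additionally
#    scales the step by a learning rate (0.001 in the experiment above) --
#    set to 1.0 here so the toy run makes visible progress.
x, eg2, edx2 = 0.0, 0.0, 0.0
rho, eps, lr = 0.95, 1e-6, 1.0
for _ in range(1000):
    g = grad(x)
    eg2 = rho * eg2 + (1 - rho) * g * g          # running average of g^2
    dx = -math.sqrt(edx2 + eps) / math.sqrt(eg2 + eps) * g
    edx2 = rho * edx2 + (1 - rho) * dx * dx      # running average of dx^2
    x += lr * dx
ada_loss = f(x)
```

Adadelta adapts the effective step size per parameter from running averages of the squared gradients and squared updates, so it is much less sensitive to the learning-rate choice than plain gradient descent; on this toy problem both reduce the loss, but plain gradient descent with a well-chosen rate converges faster.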

