Updated 2017-07-04:
- Several of TensorFlow's visualization APIs have changed significantly from older versions; this post now follows the latest API
- Restructured the article
Updated 2019-05-15:
- The API has changed a lot again since the last revision; updated to the latest API
- Simplified the examples
TensorFlow's visualization APIs live mainly under tf.summary; see the official documentation for tf.summary.
1. Viewing the Computation Graph
Ⅰ. The tf.summary.FileWriter class
Put simply, this class writes the events produced by our program to an event file, which we can later visualize with TensorBoard.
The commonly used member functions of this class are listed below.
Constructor
__init__(logdir, graph=None, max_queue=10, flush_secs=120, graph_def=None, filename_suffix=None, session=None)
Purpose: the constructor; it creates a FileWriter object and an event file. If you pass a Graph to the constructor, that graph is added to the event file (equivalent to calling add_graph() afterwards). TensorBoard picks the graph up from the event file and displays it graphically.
Parameters:
- logdir: A string. The directory the event file will be written to.
- graph: A Graph object, such as sess.graph.
- max_queue: Integer. Size of the queue for pending events and summaries.
- flush_secs: Number. How often, in seconds, to flush the pending events and summaries to disk.
- graph_def: Deprecated; use graph instead.
- filename_suffix: A string. Suffix appended to every event file name.
- session: A tf.Session object. See the official documentation for details.
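A minimal sketch of the constructor in use (the ./logs directory and the toy graph are just illustrative choices):

import tensorflow as tf

graph = tf.Graph()
with graph.as_default():
    a = tf.constant(1.0, name="a")
    b = tf.constant(2.0, name="b")
    c = tf.add(a, b, name="c")

# passing graph here is equivalent to calling add_graph() afterwards
writer = tf.summary.FileWriter("./logs", graph=graph)
writer.close()

After running this, launch tensorboard --logdir=./logs and the graph appears under the Graphs tab.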
add_event(event)
Purpose: adds an event to the event file.
Parameters:
event: An Event protocol buffer.
add_graph(graph, global_step=None, graph_def=None)
Purpose: adds a Graph to the event file. Usually you just pass the graph to the constructor instead.
Parameters:
graph: A Graph object, such as sess.graph.
global_step: Number. Optional global step counter to record with the graph.
graph_def: Deprecated; use graph instead.
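For example, if the graph was not passed to the constructor, it can be attached afterwards (here writer and sess are assumed to already exist):

writer.add_graph(sess.graph)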
add_summary(summary, global_step=None)
Purpose: adds a Summary protocol buffer to the event file.
Parameters:
summary: A Summary protocol buffer.
global_step: Number. Optional global step value to record with the summary.
close()
Purpose: flushes the event file to disk and closes it. Call this when the writer is no longer needed.
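A sketch of the typical write-then-close pattern, where summary_str is assumed to be a serialized Summary returned by sess.run() and step the current training step:

writer.add_summary(summary_str, global_step=step)
writer.close()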
2. Summary Operations
Summary ops produce output that you can fetch from a session, pass to a FileWriter, and append to the event file; TensorBoard can then visualize the event file's contents.
Here we only cover the scalar summary op and its companion functions; the other summary ops follow essentially the same pattern, so refer to the documentation for those.
Ⅰ. Scalar summaries
tf.summary.scalar(name, tensor, collections=None)
Purpose: outputs a Summary protocol buffer containing a single scalar value.
Parameters:
name: A name for the summary; it also serves as the series name in TensorBoard.
tensor: A real numeric Tensor containing a single value.
collections: Optional list of graph collection keys. The new summary op is added to these collections. Defaults to [GraphKeys.SUMMARIES].
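A small sketch, assuming loss is a scalar loss tensor already defined in the graph:

loss_summary = tf.summary.scalar("loss", loss)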
Ⅱ. Merging all summaries
tf.summary.merge_all(key=tf.GraphKeys.SUMMARIES)
Purpose: merges all of the summaries collected in the graph into a single op.
Parameters:
key: GraphKey used to collect the summaries. Defaults to GraphKeys.SUMMARIES.
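Putting the pieces together, here is a minimal sketch of the usual pattern (the placeholder, the stand-in loss, and the ./logs directory are all illustrative assumptions):

import tensorflow as tf

graph = tf.Graph()
with graph.as_default():
    x = tf.placeholder(tf.float32, shape=())
    loss = tf.square(x)              # stand-in for a real loss
    tf.summary.scalar("loss", loss)  # register a scalar summary
    merged = tf.summary.merge_all()  # one op that evaluates every summary

with tf.Session(graph=graph) as sess:
    writer = tf.summary.FileWriter("./logs", graph=graph)
    for step in range(100):
        summary_str = sess.run(merged, feed_dict={x: 100.0 - step})
        writer.add_summary(summary_str, global_step=step)
    writer.close()

TensorBoard then plots the loss series over the 100 recorded steps.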
3. Examples
1. Viewing the graph
Here we use the code from the earlier Kaggle Digit Recognizer post as the example.
Code:
from __future__ import print_function, division

import numpy as np
import data  # local helper module for loading the Kaggle Digit Recognizer data
import tensorflow as tf


class Network:
    def __init__(self, num1, num2):
        # number of neurons in the two hidden layers
        self.num1 = num1
        self.num2 = num2
        # graph
        self.graph = tf.Graph()
        # define the network structure at the beginning
        self.define_framework()
        # session
        self.session = tf.Session(graph=self.graph)

    def define_framework(self):
        '''
        Define the framework (weights and biases) of the net.
        :return:
        '''
        with self.graph.as_default():
            # fully connected layer 1 (connected to the input)
            # weights (random) and biases (0.1); biases are variables so they can be trained
            self.fc1_weights = tf.Variable(initial_value=tf.random_normal(shape=(784, self.num1)))
            self.fc1_biases = tf.Variable(tf.constant(value=0.1, shape=(self.num1,)))
            # fully connected layer 2
            # weights (random) and biases (0.1)
            self.fc2_weights = tf.Variable(initial_value=tf.random_normal(shape=(self.num1, self.num2)))
            self.fc2_biases = tf.Variable(tf.constant(value=0.1, shape=(self.num2,)))
            # fully connected layer 3 (output layer)
            # weights (random) and biases (0.1)
            self.fc3_weights = tf.Variable(initial_value=tf.random_normal(shape=(self.num2, 10)))
            self.fc3_biases = tf.Variable(tf.constant(value=0.1, shape=(10,)))

    def forward(self, samples):
        '''
        Forward computation; returns the logits.
        :param samples: (batch_size, 784)
        :return:
        '''
        # we get a batch_size x num1 matrix
        hidden_value1 = tf.nn.relu(tf.matmul(samples, self.fc1_weights) + self.fc1_biases)
        hidden_value2 = tf.nn.relu(tf.matmul(hidden_value1, self.fc2_weights) + self.fc2_biases)
        # we get a batch_size x 10 matrix (the logits)
        logits = tf.matmul(hidden_value2, self.fc3_weights) + self.fc3_biases
        return logits

    def train(self, epochs, train_batch_size, rate):
        '''
        Train the net.
        :param epochs:
        :param train_batch_size:
        :param rate:
        :return:
        '''
        # define the graph
        with self.graph.as_default():
            # data part
            train_batch_samples = tf.placeholder(dtype=tf.float32, shape=(train_batch_size, 784))
            train_batch_labels = tf.placeholder(dtype=tf.float32, shape=(train_batch_size, 10))
            train_batch_logits = self.forward(train_batch_samples)
            # train loss:
            # softmax_cross_entropy_with_logits gives a tensor of shape (batch_size,);
            # reduce_mean() turns it into a single scalar
            train_batch_loss = tf.reduce_mean(
                tf.nn.softmax_cross_entropy_with_logits(labels=train_batch_labels,
                                                        logits=train_batch_logits))