Print the input/output nodes of a checkpoint model and a pb model with Python. Refer to the following code:
from tensorflow.python import pywrap_tensorflow
import os
checkpoint_path=os.path.join('checkpoints/crnn/models-19')
reader=pywrap_tensorflow.NewCheckpointReader(checkpoint_path)
var_to_shape_map=reader.get_variable_to_shape_map()
for key in var_to_shape_map:
    print('tensor_name: ', key)
output:
tensor_name: vgg/conv3/conv3_2/BatchNorm/beta/Adam_1
tensor_name: beta1_power
tensor_name: vgg/conv4/conv4_2/BatchNorm/moving_variance
tensor_name: fully_connected/fully_connected/weights/Adam_1
tensor_name: vgg/conv1/conv1_1/BatchNorm/beta/Adam
tensor_name: vgg/conv4/conv4_1/BatchNorm/moving_variance
tensor_name: global_step
tensor_name: vgg/conv3/conv3_1/BatchNorm/beta/Adam_1
tensor_name: vgg/conv3/conv3_1/BatchNorm/beta/Adam
tensor_name: vgg/conv1/conv1_1/weights/Adam_1
tensor_name: vgg/conv1/conv1_1/weights/Adam
tensor_name: beta2_power
tensor_name: vgg/conv3/conv3_2/BatchNorm/beta/Adam
tensor_name: fully_connected/fully_connected/weights/Adam
tensor_name: stack_bidirectional_rnn/cell_0/bidirectional_rnn/bw/basic_lstm_cell/kernel
tensor_name: fully_connected/fully_connected/biases
tensor_name: fully_connected/fully_connected/biases/Adam_1
tensor_name: fully_connected/fully_connected/biases/Adam
tensor_name: vgg/conv3/conv3_2/BatchNorm/beta
tensor_name: fully_connected/fully_connected/weights
tensor_name: stack_bidirectional_rnn/cell_0/bidirectional_rnn/bw/basic_lstm_cell/bias
tensor_name: stack_bidirectional_rnn/cell_0/bidirectional_rnn/bw/basic_lstm_cell/bias/Adam_1
tensor_name: stack_bidirectional_rnn/cell_0/bidirectional_rnn/bw/basic_lstm_cell/bias/Adam
tensor_name: vgg/conv3/conv3_2/weights/Adam_1
tensor_name: stack_bidirectional_rnn/cell_0/bidirectional_rnn/bw/basic_lstm_cell/kernel/Adam
tensor_name: vgg/conv3/conv3_2/weights/Adam
tensor_name: stack_bidirectional_rnn/cell_0/bidirectional_rnn/bw/basic_lstm_cell/kernel/Adam_1
tensor_name: stack_bidirectional_rnn/cell_0/bidirectional_rnn/fw/basic_lstm_cell/bias
tensor_name: vgg/conv3/conv3_2/BatchNorm/moving_mean
tensor_name: stack_bidirectional_rnn/cell_0/bidirectional_rnn/fw/basic_lstm_cell/bias/Adam
tensor_name: stack_bidirectional_rnn/cell_0/bidirectional_rnn/fw/basic_lstm_cell/bias/Adam_1
tensor_name: stack_bidirectional_rnn/cell_0/bidirectional_rnn/fw/basic_lstm_cell/kernel
tensor_name: stack_bidirectional_rnn/cell_0/bidirectional_rnn/fw/basic_lstm_cell/kernel/Adam
tensor_name: stack_bidirectional_rnn/cell_0/bidirectional_rnn/fw/basic_lstm_cell/kernel/Adam_1
tensor_name: stack_bidirectional_rnn/cell_1/bidirectional_rnn/fw/basic_lstm_cell/kernel/Adam_1
tensor_name: stack_bidirectional_rnn/cell_1/bidirectional_rnn/bw/basic_lstm_cell/bias
tensor_name: vgg/conv2/conv2_1/BatchNorm/moving_mean
tensor_name: stack_bidirectional_rnn/cell_1/bidirectional_rnn/bw/basic_lstm_cell/kernel
tensor_name: stack_bidirectional_rnn/cell_1/bidirectional_rnn/bw/basic_lstm_cell/bias/Adam
tensor_name: stack_bidirectional_rnn/cell_1/bidirectional_rnn/fw/basic_lstm_cell/kernel
tensor_name: stack_bidirectional_rnn/cell_1/bidirectional_rnn/bw/basic_lstm_cell/bias/Adam_1
tensor_name: vgg/conv1/conv1_1/BatchNorm/moving_mean
tensor_name: stack_bidirectional_rnn/cell_1/bidirectional_rnn/bw/basic_lstm_cell/kernel/Adam
tensor_name: stack_bidirectional_rnn/cell_1/bidirectional_rnn/bw/basic_lstm_cell/kernel/Adam_1
tensor_name: stack_bidirectional_rnn/cell_1/bidirectional_rnn/fw/basic_lstm_cell/bias
tensor_name: vgg/conv3/conv3_1/BatchNorm/moving_mean
tensor_name: stack_bidirectional_rnn/cell_1/bidirectional_rnn/fw/basic_lstm_cell/bias/Adam
tensor_name: stack_bidirectional_rnn/cell_1/bidirectional_rnn/fw/basic_lstm_cell/kernel/Adam
tensor_name: stack_bidirectional_rnn/cell_1/bidirectional_rnn/fw/basic_lstm_cell/bias/Adam_1
tensor_name: vgg/conv1/conv1_1/BatchNorm/beta
tensor_name: vgg/conv3/conv3_1/weights/Adam
tensor_name: vgg/conv1/conv1_1/BatchNorm/beta/Adam_1
tensor_name: vgg/conv2/conv2_1/BatchNorm/moving_variance
tensor_name: vgg/conv3/conv3_1/BatchNorm/beta
tensor_name: vgg/conv2/conv2_1/weights/Adam
tensor_name: vgg/conv1/conv1_1/BatchNorm/moving_variance
tensor_name: vgg/conv1/conv1_1/weights
tensor_name: vgg/conv2/conv2_1/BatchNorm/beta
tensor_name: vgg/conv2/conv2_1/BatchNorm/beta/Adam
tensor_name: vgg/conv2/conv2_1/BatchNorm/beta/Adam_1
tensor_name: vgg/conv3/conv3_1/weights/Adam_1
tensor_name: vgg/conv2/conv2_1/weights
tensor_name: vgg/conv4/conv4_1/BatchNorm/beta/Adam
tensor_name: vgg/conv2/conv2_1/weights/Adam_1
tensor_name: vgg/conv3/conv3_1/weights
tensor_name: vgg/conv3/conv3_1/BatchNorm/moving_variance
tensor_name: vgg/conv3/conv3_2/BatchNorm/moving_variance
tensor_name: vgg/conv3/conv3_2/weights
tensor_name: vgg/conv4/conv4_1/BatchNorm/beta
tensor_name: vgg/conv4/conv4_1/BatchNorm/beta/Adam_1
tensor_name: vgg/conv4/conv4_1/BatchNorm/moving_mean
tensor_name: vgg/conv4/conv4_1/weights
tensor_name: vgg/conv4/conv4_1/weights/Adam
tensor_name: vgg/conv4/conv4_1/weights/Adam_1
tensor_name: vgg/conv4/conv4_2/BatchNorm/beta
tensor_name: vgg/conv4/conv4_2/BatchNorm/beta/Adam
tensor_name: vgg/conv4/conv4_2/BatchNorm/beta/Adam_1
tensor_name: vgg/conv4/conv4_2/BatchNorm/moving_mean
tensor_name: vgg/conv4/conv4_2/weights
tensor_name: vgg/conv4/conv4_2/weights/Adam
tensor_name: vgg/conv4/conv4_2/weights/Adam_1
tensor_name: vgg/conv5/conv5_1/BatchNorm/beta
tensor_name: vgg/conv5/conv5_1/BatchNorm/beta/Adam
tensor_name: vgg/conv5/conv5_1/BatchNorm/beta/Adam_1
tensor_name: vgg/conv5/conv5_1/BatchNorm/moving_mean
tensor_name: vgg/conv5/conv5_1/BatchNorm/moving_variance
tensor_name: vgg/conv5/conv5_1/weights
tensor_name: vgg/conv5/conv5_1/weights/Adam
tensor_name: vgg/conv5/conv5_1/weights/Adam_1
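The listing above mixes the actual model weights with Adam optimizer slots (`.../Adam`, `.../Adam_1`) and bookkeeping variables (`beta1_power`, `beta2_power`, `global_step`). A minimal pure-Python sketch of how the variable names could be grouped (`group_checkpoint_variables` is a hypothetical helper written for illustration; the sample names are taken from the listing above):

```python
def group_checkpoint_variables(names):
    """Split checkpoint variable names into model weights,
    Adam optimizer slots, and bookkeeping variables."""
    bookkeeping = {'beta1_power', 'beta2_power', 'global_step'}
    groups = {'weights': [], 'adam_slots': [], 'bookkeeping': []}
    for name in names:
        if name in bookkeeping:
            groups['bookkeeping'].append(name)
        elif name.endswith('/Adam') or name.endswith('/Adam_1'):
            groups['adam_slots'].append(name)
        else:
            groups['weights'].append(name)
    return groups

# A few names from the checkpoint listing above:
names = [
    'global_step',
    'vgg/conv1/conv1_1/weights',
    'vgg/conv1/conv1_1/weights/Adam',
    'vgg/conv1/conv1_1/weights/Adam_1',
    'fully_connected/fully_connected/biases',
]
groups = group_checkpoint_variables(names)
print(groups['weights'])     # the actual model parameters
print(groups['adam_slots'])  # optimizer state, dropped when freezing to pb
```

Only the `weights` group survives freezing to pb; the optimizer slots and bookkeeping variables exist solely for resuming training.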
Print the network input/output nodes of a pb file:
import tensorflow as tf
import os
model_name = './tmp/frozen_model_crnn.pb'
def create_graph():
    with tf.gfile.FastGFile(model_name, 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
        tf.import_graph_def(graph_def, name='')

create_graph()
tensor_name_list = [tensor.name for tensor in tf.get_default_graph().as_graph_def().node]
for tensor_name in tensor_name_list:
    print(tensor_name, '\n')
output:
Placeholder
vgg/conv1/conv1_1/weights
vgg/conv1/conv1_1/weights/read
vgg/conv1/conv1_1/Conv2D
vgg/conv1/conv1_1/BatchNorm/Const
vgg/conv1/conv1_1/BatchNorm/beta
vgg/conv1/conv1_1/BatchNorm/beta/read
vgg/conv1/conv1_1/BatchNorm/Const_1
vgg/conv1/conv1_1/BatchNorm/Const_2
vgg/conv1/conv1_1/BatchNorm/FusedBatchNorm
vgg/conv1/conv1_1/Relu
vgg/pool1/MaxPool
vgg/conv2/conv2_1/weights
vgg/conv2/conv2_1/weights/read
vgg/conv2/conv2_1/Conv2D
vgg/conv2/conv2_1/BatchNorm/Const
vgg/conv2/conv2_1/BatchNorm/beta
vgg/conv2/conv2_1/BatchNorm/beta/read
vgg/conv2/conv2_1/BatchNorm/Const_1
vgg/conv2/conv2_1/BatchNorm/Const_2
vgg/conv2/conv2_1/BatchNorm/FusedBatchNorm
vgg/conv2/conv2_1/Relu
vgg/pool2/MaxPool
vgg/conv3/conv3_1/weights
vgg/conv3/conv3_1/weights/read
vgg/conv3/conv3_1/Conv2D
vgg/conv3/conv3_1/BatchNorm/Const
vgg/conv3/conv3_1/BatchNorm/beta
vgg/conv3/conv3_1/BatchNorm/beta/read
vgg/conv3/conv3_1/BatchNorm/Const_1
vgg/conv3/conv3_1/BatchNorm/Const_2
vgg/conv3/conv3_1/BatchNorm/FusedBatchNorm
vgg/conv3/conv3_1/Relu
vgg/conv3/conv3_2/weights
vgg/conv3/conv3_2/weights/read
vgg/conv3/conv3_2/Conv2D
vgg/conv3/conv3_2/BatchNorm/Const
vgg/conv3/conv3_2/BatchNorm/beta
vgg/conv3/conv3_2/BatchNorm/beta/read
vgg/conv3/conv3_2/BatchNorm/Const_1
vgg/conv3/conv3_2/BatchNorm/Const_2
vgg/conv3/conv3_2/BatchNorm/FusedBatchNorm
vgg/conv3/conv3_2/Relu
vgg/pool3/MaxPool
vgg/conv4/conv4_1/weights
vgg/conv4/conv4_1/weights/read
vgg/conv4/conv4_1/Conv2D
vgg/conv4/conv4_1/BatchNorm/Const
vgg/conv4/conv4_1/BatchNorm/beta
vgg/conv4/conv4_1/BatchNorm/beta/read
vgg/conv4/conv4_1/BatchNorm/Const_1
vgg/conv4/conv4_1/BatchNorm/Const_2
vgg/conv4/conv4_1/BatchNorm/FusedBatchNorm
vgg/conv4/conv4_1/Relu
vgg/conv4/conv4_2/weights
vgg/conv4/conv4_2/weights/read
vgg/conv4/conv4_2/Conv2D
vgg/conv4/conv4_2/BatchNorm/Const
vgg/conv4/conv4_2/BatchNorm/beta
vgg/conv4/conv4_2/BatchNorm/beta/read
vgg/conv4/conv4_2/BatchNorm/Const_1
vgg/conv4/conv4_2/BatchNorm/Const_2
vgg/conv4/conv4_2/BatchNorm/FusedBatchNorm
vgg/conv4/conv4_2/Relu
vgg/pool4/MaxPool
vgg/conv5/conv5_1/weights
vgg/conv5/conv5_1/weights/read
vgg/conv5/conv5_1/Conv2D
vgg/conv5/conv5_1/BatchNorm/Const
vgg/conv5/conv5_1/BatchNorm/beta
vgg/conv5/conv5_1/BatchNorm/beta/read
vgg/conv5/conv5_1/BatchNorm/Const_1
vgg/conv5/conv5_1/BatchNorm/Const_2
vgg/conv5/conv5_1/BatchNorm/FusedBatchNorm
vgg/conv5/conv5_1/Relu
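In the frozen-graph listing above, the input is the `Placeholder` node and the output is the final compute node (`vgg/conv5/conv5_1/Relu`). A rough pure-Python sketch of how input/output candidates could be guessed from the printed name list (`guess_io_nodes` is a hypothetical helper; a robust check would inspect each node's op type and inputs in the GraphDef rather than just the names):

```python
def guess_io_nodes(node_names):
    """Heuristically pick input/output nodes from a frozen graph's node list:
    inputs are Placeholder nodes; the output is taken as the last node,
    since frozen GraphDefs are typically stored in topological order."""
    inputs = [n for n in node_names if 'Placeholder' in n]
    output = node_names[-1] if node_names else None
    return inputs, output

# A shortened version of the node list printed above:
node_names = ['Placeholder', 'vgg/conv1/conv1_1/weights',
              'vgg/conv1/conv1_1/Conv2D', 'vgg/conv5/conv5_1/Relu']
inputs, output = guess_io_nodes(node_names)
print(inputs)  # ['Placeholder']
print(output)  # vgg/conv5/conv5_1/Relu
```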
Ckpt model to pb
"""
This script converts a ckpt model to a pb model.
"""
# The only thing you need to change is output_node_names: the name of the
# final output node, which you chose yourself when building the model.
import tensorflow as tf
#from create_tf_record import *
from tensorflow.python.framework import graph_util
def freeze_graph(input_checkpoint, output_graph, output_node_names):
    '''
    :param input_checkpoint: path to the ckpt checkpoint
    :param output_graph: path where the pb model is saved
    :param output_node_names: comma-separated output node names; they must exist in the original model
    :return:
    '''
    # checkpoint = tf.train.get_checkpoint_state(model_folder)  # check whether the ckpt files in the directory are usable
    # input_checkpoint = checkpoint.model_checkpoint_path  # get the ckpt file path
    # Specify the output node names; they must be nodes that exist in the original model.
    # Use the final output node directly; you can look it up in TensorBoard.
    saver = tf.train.import_meta_graph(input_checkpoint + '.meta', clear_devices=True)
    graph = tf.get_default_graph()  # get the default graph
    input_graph_def = graph.as_graph_def()  # return a serialized GraphDef representing the current graph
    with tf.Session() as sess:
        saver.restore(sess, input_checkpoint)  # restore the graph and load the weights
        output_graph_def = graph_util.convert_variables_to_constants(  # freeze the model: fold variable values into constants
            sess=sess,
            input_graph_def=input_graph_def,  # equivalent to sess.graph_def
            output_node_names=output_node_names.split(","))  # separate multiple output nodes with commas
        with tf.gfile.GFile(output_graph, "wb") as f:  # save the model
            f.write(output_graph_def.SerializeToString())  # serialize and write
        print("%d ops in the final graph." % len(output_graph_def.node))  # number of op nodes in the final graph

# ========================================================================================
input_checkpoint = 'checkpoints/crnn/models-19'  # crnn
out_pb_path = 'checkpoints/frozen_model_crnn.pb'
output_node_names = "vgg/conv5/conv5_1/Relu"
# ========================================================================================
freeze_graph(input_checkpoint, out_pb_path, output_node_names)
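Since convert_variables_to_constants raises an error when a requested output node does not exist in the graph, it can help to validate the names first. A small pure-Python sketch (`validate_output_nodes` is a hypothetical helper, not part of TensorFlow; the node list would come from the graph-printing script shown earlier):

```python
def validate_output_nodes(output_node_names, graph_node_names):
    """Check that every comma-separated output node name exists in the graph.
    Returns the list of missing names (empty when all are present)."""
    requested = [n.strip() for n in output_node_names.split(',') if n.strip()]
    available = set(graph_node_names)
    return [n for n in requested if n not in available]

# Node names as printed by the graph-inspection script:
graph_nodes = ['Placeholder', 'vgg/conv5/conv5_1/Conv2D', 'vgg/conv5/conv5_1/Relu']
missing = validate_output_nodes("vgg/conv5/conv5_1/Relu", graph_nodes)
print(missing)  # an empty list means it is safe to freeze
```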
I hope this helps. If you have any questions, please comment on this blog or send me a private message; I will reply in my free time.