Converting a TensorFlow Frozen Graph to a SavedModel
TensorFlow can store a model as a frozen-graph protobuf file, which contains the graph structure with the training-time variables converted to constants and folded in. This post gives code to convert a frozen graph (frozen_inference_graph.pb) into the SavedModel format, which is convenient for deployment with TF Serving.
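The conversion requires knowing the exact input and output tensor names in the frozen graph ('ImageTensor:0' and 'SemanticPredictions:0' below). If you are unsure what the names are in your own .pb file, you can enumerate the nodes of the GraphDef first. A minimal sketch; the tiny in-memory graph here is only a self-contained stand-in for a GraphDef parsed from a real frozen .pb:

```python
import tensorflow as tf

# Throwaway graph so the example runs on its own; in practice you would
# fill graph_def with graph_def.ParseFromString(f.read()) from your .pb.
g = tf.Graph()
with g.as_default():
    x = tf.constant([[1.0, 2.0, 3.0]], name='input')
    y = tf.identity(x * 2.0, name='output')
graph_def = g.as_graph_def()

# Each node name, with ':0' appended for its first output, is a valid
# tensor name for graph.get_tensor_by_name().
node_names = [node.name for node in graph_def.node]
print(node_names)
```

Scanning the printed list for your model's placeholder and final-prediction ops gives the strings to plug into get_tensor_by_name below.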
# tf.__version__ == '1.12'
import os

import tensorflow as tf
from tensorflow.python.platform import gfile

config = tf.ConfigProto(allow_soft_placement=True)
sess = tf.Session(config=config)

# Load the frozen graph file.
with gfile.FastGFile('path_to/frozen_inference_graph.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
sess.graph.as_default()
tf.import_graph_def(graph_def, name='')  # import the graph definition
sess.run(tf.global_variables_initializer())

# Build TensorInfo protos for the input and output tensors.
input_img = tf.saved_model.utils.build_tensor_info(
    sess.graph.get_tensor_by_name('ImageTensor:0'))
output = tf.saved_model.utils.build_tensor_info(
    sess.graph.get_tensor_by_name('SemanticPredictions:0'))
print(output)

export_path_base = 'export_path'
export_path = os.path.join(
    tf.compat.as_bytes(export_path_base), tf.compat.as_bytes('1'))

# Export the model with a prediction signature.
builder = tf.saved_model.builder.SavedModelBuilder(export_path)
prediction_signature = tf.saved_model.signature_def_utils.build_signature_def(
    inputs={'inputs': input_img},
    outputs={'outputs': output},
    method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME)
# Note: TF Serving looks up the key 'serving_default'
# (signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY) unless the
# client requests a custom key such as 'a_signature' explicitly.
builder.add_meta_graph_and_variables(
    sess, [tf.saved_model.tag_constants.SERVING],
    signature_def_map={'a_signature': prediction_signature},
    main_op=tf.tables_initializer())
builder.save()
Running the code above produces the following directory structure under export_path:
.
└── 1
├── saved_model.pb
└── variables
├── variables.data-00000-of-00001
└── variables.index
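The numbered subdirectory matters: TF Serving treats every integer-named child of the model base path as a model version and, under its default version policy, serves the one with the largest number. That selection logic can be sketched in plain Python (the helper name latest_version is hypothetical):

```python
import os

def latest_version(model_base_path):
    """Return the largest integer-named subdirectory of model_base_path,
    mimicking TF Serving's default "serve latest" version policy,
    or None if no version directory exists."""
    versions = [d for d in os.listdir(model_base_path)
                if d.isdigit() and os.path.isdir(os.path.join(model_base_path, d))]
    if not versions:
        return None
    return max(versions, key=int)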
This directory can then be deployed as an HTTP/gRPC service with TF Serving.
TF Serving can be launched with Docker; the details are not covered in this post.
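Once the server is up, the model is reachable over TF Serving's REST API at http://<host>:8501/v1/models/<model_name>:predict. Because the export above uses the non-default signature key 'a_signature', the request body must name that key explicitly. A sketch of building the JSON body with only the standard library (the instance values are placeholders for real image data):

```python
import json

def build_predict_request(instances, signature_name='a_signature'):
    """Build the JSON body for a TF Serving REST :predict call.

    instances: a list with one entry per input example, matching the
    shape the 'inputs' tensor of the signature expects.
    """
    return json.dumps({
        'signature_name': signature_name,
        'instances': instances,
    })

body = build_predict_request([[[0, 0, 0]]])  # placeholder pixel data
print(body)
```

The resulting string can be POSTed with urllib.request or curl; the response JSON carries the 'outputs' values of the signature under a 'predictions' key.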