[Tensorflow] How to generate a standard tensorflow checkpoint file from a pb file?

For a long time, whenever I found that a model I wanted stored its parameters only as a pb file, I felt stuck: I assumed it meant I could learn nothing about the details of the network structure, and that the researchers were being deliberately cagey about their model.

That assumption turns out to be only partly right. Even in the worst case, where only the model parameters in pb format are provided, we can still peek a bit into the model, as shown in this blog and this blog, although some connection information (the part unrelated to parameters) remains invisible unless we have access to the network code.

Our problem here is how to convert a pb parameter file into a standard tensorflow checkpoint file. As a recap, this post showed that a pb file is loaded into a tf.Graph, but only as constant nodes. How, then, can we turn that into a checkpoint file?
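To see what a pb file actually stores, one can walk the GraphDef and pull the values out of its Const nodes. Below is a minimal sketch; note that the GraphDef here is built in memory so the snippet is self-contained (in practice it would be parsed from a real pb file), and the node name 'weights' is purely illustrative:

```python
import numpy as np
import tensorflow.compat.v1 as tf
from tensorflow.python.framework import tensor_util

tf.disable_eager_execution()

# In practice graph_def would come from parsing a pb file:
#   graph_def.ParseFromString(open('model.pb', 'rb').read())
# Here we build a tiny graph in memory so the snippet is runnable as-is.
with tf.Graph().as_default() as g:
    tf.constant(np.float32([1.0, 2.0, 3.0]), name='weights')
    graph_def = g.as_graph_def()

# Every Const node carries its value as a TensorProto in its 'value' attr,
# which tensor_util.MakeNdarray converts back to a numpy array.
const_values = {node.name: tensor_util.MakeNdarray(node.attr['value'].tensor)
                for node in graph_def.node
                if node.op == 'Const'}
print(const_values['weights'])  # -> [1. 2. 3.]
```

This is exactly why a frozen pb file is readable at all: the parameters are sitting in plain TensorProto attributes, one per Const node.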

The idea is to first assign the values of the constant nodes to pre-defined variable nodes, and then save with the usual tf.train.Saver() approach.

The following is an example.

import numpy as np
import tensorflow as tf
from tensorflow.core.framework.graph_pb2 import GraphDef

deepspeech_prefix = r'model/deepspeech'
newname = 'new'

with tf.Graph().as_default() as graph:
    # batch_size and n_steps are assumed to be defined earlier.
    ref_input_tensor = tf.placeholder(tf.float32,
                                      [batch_size, n_steps if n_steps > 0 else None])