Finding a graph's placeholders and outputs in TensorFlow without the original code

Here is my way to find the correct placeholder op name in the GraphDef part of the .meta file:

```python
import tensorflow as tf

saver = tf.train.import_meta_graph('some_path/model.ckpt.meta')
imported_graph = tf.get_default_graph()
graph_ops = imported_graph.get_operations()
with open('output.txt', 'w') as f:
    for op in graph_ops:
        f.write(str(op))
```

In the output.txt file, we can easily find the placeholders' correct op names and other attributes. Here is part of my output file:

```
name: "input/input_image"
op: "Placeholder"
attr {
  key: "dtype"
  value {
    type: DT_FLOAT
  }
}
attr {
  key: "shape"
  value {
    shape {
      dim { size: -1 }
      dim { size: 112 }
      dim { size: 112 }
      dim { size: 3 }
    }
  }
}
```
Evidently, in my TensorFlow version (1.6), the correct placeholder op type is `Placeholder`. Now, returning to mrry's solution: use `[x for x in tf.get_default_graph().get_operations() if x.type == "Placeholder"]` to get a list of all the placeholder ops.
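Rather than scanning output.txt by eye, you can also extract the placeholder names programmatically from the dump itself. A minimal pure-Python sketch (no TensorFlow required; `find_placeholders` is a hypothetical helper that relies on the `name:` / `op:` line format shown above):

```python
def find_placeholders(dump_path):
    """Scan a dumped GraphDef text file and return the names of
    Placeholder ops. str(op) emits lines like:
        name: "input/input_image"
        op: "Placeholder"
    so we remember the last name seen and report it when the
    following op line says Placeholder."""
    names = []
    last_name = None
    with open(dump_path) as f:
        for line in f:
            line = line.strip()
            if line.startswith('name: "'):
                last_name = line.split('"')[1]
            elif line == 'op: "Placeholder"' and last_name is not None:
                names.append(last_name)
    return names
```

For the dump shown above, this would return `['input/input_image']`.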

Thus it is easy and convenient to run inference with only the checkpoint files, without needing to reconstruct the model. For example:

```python
input_x = ...  # prepare the model input

saver = tf.train.import_meta_graph('some_path/model.ckpt.meta')
graph_x = tf.get_default_graph().get_tensor_by_name('input/input_image:0')
graph_y = tf.get_default_graph().get_tensor_by_name('layer19/softmax:0')
sess = tf.Session()
saver.restore(sess, 'some_path/model.ckpt')

output_y = sess.run(graph_y, feed_dict={graph_x: input_x})
```
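In my case I knew the output tensor was `layer19/softmax:0`; when the output name is unknown, a common heuristic is to look for ops that no other op consumes. The idea can be sketched over plain `(name, inputs)` pairs mirroring `NodeDef.input` lists in a GraphDef (`terminal_nodes` is a hypothetical helper, not a TensorFlow API):

```python
def terminal_nodes(nodes):
    """Given (name, inputs) pairs mimicking GraphDef NodeDefs,
    return names that no other node consumes -- candidate outputs."""
    consumed = set()
    for _, inputs in nodes:
        for inp in inputs:
            # NodeDef inputs may carry an output-index suffix like "op:1"
            # or a control-dependency prefix like "^op"; strip both.
            consumed.add(inp.split(":")[0].lstrip("^"))
    return [name for name, _ in nodes if name not in consumed]
```

Applied to a real graph, savers and summary ops also show up as terminal nodes, so the list still needs a human look, but it narrows the search considerably.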
Here is code that uses TensorFlow to implement FaceNet face recognition:

```python
import tensorflow as tf
import numpy as np
import cv2

class FaceNet:
    def __init__(self, model_path):
        self.graph = tf.Graph()
        with self.graph.as_default():
            sess_config = tf.ConfigProto()
            sess_config.gpu_options.allow_growth = True
            self.sess = tf.Session(config=sess_config)
            # All variables are restored from the checkpoint below,
            # so no separate initializer run is needed.
            saver = tf.train.import_meta_graph(model_path + '.meta')
            saver.restore(self.sess, model_path)
            self.images_placeholder = tf.get_default_graph().get_tensor_by_name("input:0")
            self.embeddings = tf.get_default_graph().get_tensor_by_name("embeddings:0")
            self.phase_train_placeholder = tf.get_default_graph().get_tensor_by_name("phase_train:0")
            self.embedding_size = self.embeddings.get_shape()[1]

    def prewhiten(self, x):
        mean = np.mean(x)
        std = np.std(x)
        std_adj = np.maximum(std, 1.0 / np.sqrt(x.size))
        y = np.multiply(np.subtract(x, mean), 1 / std_adj)
        return y

    def l2_normalize(self, x, axis=-1, epsilon=1e-10):
        output = x / np.sqrt(np.maximum(np.sum(np.square(x), axis=axis, keepdims=True), epsilon))
        return output

    def calc_embeddings(self, images):
        prewhiten_images = []
        for image in images:
            prewhiten_images.append(self.prewhiten(image))
        feed_dict = {self.images_placeholder: prewhiten_images,
                     self.phase_train_placeholder: False}
        embeddings = self.sess.run(self.embeddings, feed_dict=feed_dict)
        embeddings = self.l2_normalize(embeddings)
        return embeddings

    def calc_distance(self, feature1, feature2):
        return np.sum(np.square(feature1 - feature2))

    def compare(self, image1, image2):
        feature1 = self.calc_embeddings([image1])[0]
        feature2 = self.calc_embeddings([image2])[0]
        distance = self.calc_distance(feature1, feature2)
        return distance

if __name__ == '__main__':
    model_path = 'model/20180402-114759/model-20180402-114759.ckpt-275'
    facenet = FaceNet(model_path)
    img1 = cv2.imread('img1.jpg')
    img2 = cv2.imread('img2.jpg')
    distance = facenet.compare(img1, img2)
    print('Distance between img1 and img2:', distance)
```
Before running the code, you need to download a FaceNet model; pretrained models are available [here](https://github.com/davidsandberg/facenet/tree/master/src/models). Point `model_path` at the downloaded model directory. The code defines a `FaceNet` class: `calc_embeddings` computes an image's embedding feature vector, and `calc_distance` computes the distance between two images' embeddings; the smaller the resulting value, the more similar the two images.
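A note on interpreting the result: because the embeddings are L2-normalized to unit length, the squared distance equals 2 − 2·cos(θ) and is bounded by [0, 4]; match thresholds around 1.1 are often quoted for FaceNet models, but the threshold should be tuned on your own data. The distance arithmetic can be checked with plain Python, no TensorFlow needed (`l2_normalize` and `squared_distance` here are illustrative stand-ins for the NumPy versions above):

```python
import math

def l2_normalize(v, eps=1e-10):
    """Scale a vector to unit L2 norm (guarding against zero vectors)."""
    norm = math.sqrt(max(sum(x * x for x in v), eps))
    return [x / norm for x in v]

def squared_distance(a, b):
    """Squared Euclidean distance between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

a = l2_normalize([1.0, 2.0, 2.0])
b = l2_normalize([2.0, 2.0, 1.0])
d = squared_distance(a, b)
# For unit vectors, d = 2 - 2 * dot(a, b), so d always lies in [0, 4].
```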