TensorFlow mixes up images and labels when generating batches

So I've been stuck on this problem for weeks. I want to make an image batch from a list of image filenames. I insert the filename list into a queue and use a reader to get the file; the reader then returns the filename and the read image file.

My problem is that when I make a batch from the decoded jpegs and the labels from the reader, tf.train.shuffle_batch() mixes up the images and the filenames, so the labels end up in the wrong order for the image files. Is there something I am doing wrong with the queue/shuffle_batch, and how can I fix it so that the batch comes out with the right labels for the right files?

Many thanks!

import tensorflow as tf
from tensorflow.python.framework import ops

def preprocess_image_tensor(image_tf):
    image = tf.image.convert_image_dtype(image_tf, dtype=tf.float32)
    image = tf.image.resize_image_with_crop_or_pad(image, 300, 300)
    image = tf.image.per_image_standardization(image)
    return image

# original image names and labels
image_paths = ["image_0.jpg", "image_1.jpg", "image_2.jpg", "image_3.jpg", "image_4.jpg", "image_5.jpg", "image_6.jpg", "image_7.jpg", "image_8.jpg"]
labels = [0, 1, 2, 3, 4, 5, 6, 7, 8]

# converting arrays to tensors
image_paths_tf = ops.convert_to_tensor(image_paths, dtype=tf.string, name="image_paths_tf")
labels_tf = ops.convert_to_tensor(labels, dtype=tf.int32, name="labels_tf")

# getting tensor slices
image_path_tf, label_tf = tf.train.slice_input_producer([image_paths_tf, labels_tf], shuffle=False)

# getting image tensors from jpeg and performing preprocessing
image_buffer_tf = tf.read_file(image_path_tf, name="image_buffer")
image_tf = tf.image.decode_jpeg(image_buffer_tf, channels=3, name="image")
image_tf = preprocess_image_tensor(image_tf)

# creating a batch of images and labels
batch_size = 5
num_threads = 4
images_batch_tf, labels_batch_tf = tf.train.batch([image_tf, label_tf], batch_size=batch_size, num_threads=num_threads)

# running testing session to check order of images and labels
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord)
    print(image_path_tf.eval())
    print(label_tf.eval())
    coord.request_stop()
    coord.join(threads)

Solution

Wait... isn't your tf usage a little odd?

You are basically running the graph twice by calling:

print(image_path_tf.eval())
print(label_tf.eval())

Each eval() call dequeues a fresh element from the input queue, so the path and the label you print come from two different slices. And since you are only asking for image_path_tf and label_tf, anything downstream of this line is not even run:

image_path_tf, label_tf = tf.train.slice_input_producer([image_paths_tf, labels_tf], shuffle=False)

Maybe try this?

images, labels = sess.run([images_batch_tf, labels_batch_tf])
print(images)
print(labels)

A single sess.run() call evaluates both tensors in one pass over the graph, so the images and labels come from the same dequeued batch and stay correctly paired. (Note that the first value is the batch of decoded images, not the file paths.)
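Why two separate fetches break the pairing can be illustrated with an ordinary Python queue. The sketch below is a pure-Python analogy using the standard-library queue module, not TensorFlow: each independent fetch dequeues a fresh element, just as each .eval() call advances TensorFlow's input queue, while one atomic fetch keeps the pair together.

```python
import queue

# A shared FIFO queue of (path, label) pairs, standing in for
# TensorFlow's input queue of slices.
q = queue.Queue()
for pair in [("image_0.jpg", 0), ("image_1.jpg", 1), ("image_2.jpg", 2)]:
    q.put(pair)

# Two separate fetches: each one dequeues a *new* element, so the
# path and the label come from different pairs.
path_from_first_fetch = q.get()[0]    # dequeues ("image_0.jpg", 0)
label_from_second_fetch = q.get()[1]  # dequeues ("image_1.jpg", 1)
print(path_from_first_fetch, label_from_second_fetch)  # image_0.jpg 1 -- mismatched!

# One atomic fetch: both values come from the same dequeued element.
path, label = q.get()                 # dequeues ("image_2.jpg", 2)
print(path, label)                    # image_2.jpg 2 -- correctly paired
```

This is exactly the difference between calling .eval() twice and fetching both tensors in a single sess.run() call.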
