Overview
Task: using samples of five flower classes (about 780 images per class), train a model that can recognize all five classes, run on a PC, and reach at least 90% accuracy.
How do we implement this flower classifier?
Approach
1. Obtain and analyze the sample structure
- Image format: jpg
- Image sizes vary
- Image colors vary
- The flower's position in the image varies
- Analyze the problem type
- Choose a method
  - Based on experience
  - With help from the web (papers, open-source sites, etc.)
  - Just try and compare
- Key factors
  - Model size
  - Model accuracy
  - Model running speed
  - Model memory footprint
If you just want a practice demo, you can ignore these factors and dive straight in. But if you later want to turn this into a real project, they matter a great deal. Where do these numbers come from? Papers report them, and for the classic models you can also find published figures for accuracy, complexity, and so on. Running speed matters too: suppose this were an app like 形色 (a flower-recognition app); how long it takes to recognize one flower photo is its running speed.
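As a rough way to put numbers on the speed and size factors, you can time a single forward pass of a frozen graph and check its file size. Below is a minimal sketch; the graph path, tensor names, and input shape are all placeholders, not values from this project:
import os
import time
import numpy as np
import tensorflow as tf

GRAPH_PB = "model.pb"  # hypothetical frozen-graph file
with tf.gfile.GFile(GRAPH_PB, "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
# "logits:0" is a hypothetical output-tensor name; substitute your model's real one.
output, = tf.import_graph_def(graph_def, return_elements=["logits:0"])

with tf.Session() as sess:
    dummy = np.zeros((1, 128, 128, 3), dtype=np.float32)  # hypothetical input shape
    start = time.time()
    # "import/X:0" is the default-prefixed name of a hypothetical input placeholder.
    sess.run(output, feed_dict={"import/X:0": dummy})
    elapsed_ms = (time.time() - start) * 1000
print("model size: %.1f MB, one inference: %.0f ms"
      % (os.path.getsize(GRAPH_PB) / 1e6, elapsed_ms))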
Implementation
Image preprocessing
Because there are too many images to handle individually, we first store them in a TFRecord file.
TFRecords: a TFRecord is a binary file format. Although it is less human-readable than other formats, it makes better use of memory, is easier to copy and move, and needs no separate label file (you will see why shortly). In short, the format has plenty of advantages, so let's use it.
A TFRecords file contains tf.train.Example protocol buffers (each holding a Features field). We can write code that takes our data, fills it into an Example protocol buffer, serializes the buffer to a string, and writes it to a TFRecords file with tf.python_io.TFRecordWriter.
To read data back from a TFRecords file, use tf.TFRecordReader together with the tf.parse_single_example parser, which decodes an Example protocol buffer into tensors.
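A minimal sketch of this write-and-read round trip (the file name and payload are illustrative; the project's real writer code appears below):
import tensorflow as tf

# Write one Example to a TFRecord file.
writer = tf.python_io.TFRecordWriter("sample.tfrecords")
example = tf.train.Example(features=tf.train.Features(feature={
    'image_raw': tf.train.Feature(bytes_list=tf.train.BytesList(value=[b'raw image bytes'])),
    'label': tf.train.Feature(int64_list=tf.train.Int64List(value=[0])),
}))
writer.write(example.SerializeToString())
writer.close()

# Read it back: TFRecordReader + parse_single_example turn the serialized
# Example into tensors (evaluate them in a session with queue runners started).
filename_queue = tf.train.string_input_producer(["sample.tfrecords"])
reader = tf.TFRecordReader()
_, serialized = reader.read(filename_queue)
features = tf.parse_single_example(serialized, features={
    'image_raw': tf.FixedLenFeature([], tf.string),
    'label': tf.FixedLenFeature([], tf.int64),
})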
Image preprocessing code
1. Direct resize
# resize images
import os
import tensorflow as tf
from PIL import Image

def resize_image(impath, imagename, size):
    # Resize with PIL; no TensorFlow session is needed for this.
    width, height = size
    img = Image.open(impath)
    img = img.resize((width, height))
    # Optionally save the resized image to disk:
    # img.save("/home/workdir/tensorflow/flowers/resize/" + imagename)
    return img.tobytes()

def convert_images_to_xy(main_folder_path):
    foldernames = os.listdir(main_folder_path)
    labels = []
    images_data = []
    # use an integer index in place of the string label
    index = 0
    for folder_name in foldernames:
        print('folder_name =', folder_name)
        filenames = os.listdir(os.path.join(main_folder_path, folder_name))
        for file_name in filenames:
            file_glob_name = os.path.join(main_folder_path, folder_name, file_name)
            print(file_glob_name)
            labels.append(index)
            resize_image_dat = resize_image(file_glob_name, file_name, (IMAGE_WEIGHT, IMAGE_HEIGHT))
            images_data.append(resize_image_dat)
        index = index + 1
    return images_data, labels
2. Pad to a square first, then resize (this method does not distort the image, but zero-padding leaves black borders)
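This variant writes each example straight into a TFRecord file as it goes. It relies on a shared TFRecordWriter, a running counter, and two small feature helpers that are not shown in the original snippet; conventionally they look like this (a sketch; the output file name is a placeholder):
writer = tf.python_io.TFRecordWriter("flowers.tfrecords")  # placeholder output path
number = 0  # running count of processed images

def _bytes_feature(value):
    # Wrap a byte string as a TFRecord bytes feature.
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

def _int64_feature(value):
    # Wrap an integer as a TFRecord int64 feature.
    return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))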
def resize_image(impath, imagename, shape, index):
    global number
    # Build a fresh graph and session per image to keep the default graph
    # from growing (avoids a memory leak).
    tf.reset_default_graph()
    graph = tf.Graph()
    with graph.as_default() as g:
        with tf.Session(graph=g) as sess:
            image_raw_data = tf.gfile.FastGFile(impath, "rb").read()
            # Decode the image as JPEG.
            image_data = tf.image.decode_jpeg(image_raw_data)
            image_data = tf.image.convert_image_dtype(image_data, dtype=tf.float32)
            img_shape = image_data.eval(session=sess).shape
            # Pad to a square whose side is the longer of the two dimensions (zero padding).
            side = max(img_shape[0], img_shape[1])
            cropped = tf.image.resize_image_with_crop_or_pad(image_data, side, side)
            resize = tf.image.resize_images(cropped, [IMAGE_WEIGHT, IMAGE_WEIGHT])
            result = sess.run(resize)
            result_raw = result.tostring()
            number = number + 1  # count processed images
            print(result.shape, " num is:", number)
            example = tf.train.Example(features=tf.train.Features(feature={
                'image_raw': _bytes_feature(result_raw),
                'label': _int64_feature(index)}))
            writer.write(example.SerializeToString())
Training the network
1. Key code explained
# Read the data back from the TFRecord file
def read_samples_record(filename_queue):
reader = tf.TFRecordReader()
_, serialized_example = reader.read(filename_queue)
features = tf.parse_single_example(serialized_example, features={
'image_raw': tf.FixedLenFeature([], tf.string),
'label': tf.FixedLenFeature([], tf.int64),
})
    image = tf.decode_raw(features['image_raw'], tf.float32)
    # reshape the flat bytes back into an image tensor
    image = tf.reshape(image, [IMAGE_WEIGHT, IMAGE_HEIGHT, IMAGE_CHANEL])
    label = tf.cast(features['label'], tf.int32)
return image,label
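A sketch of wiring this reader into an input pipeline (queue-runner style, as also used in the data-augmentation section below; the file name and batch parameters are illustrative):
filename_queue = tf.train.string_input_producer(["flowers.tfrecords"])
image, label = read_samples_record(filename_queue)
image_batch, label_batch = tf.train.shuffle_batch(
    [image, label], batch_size=64, capacity=2000, min_after_dequeue=1000)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    imgs, labs = sess.run([image_batch, label_batch])  # one batch of decoded images
    coord.request_stop()
    coord.join(threads)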
#build the network
# Model input
X = tf.placeholder(tf.float32, shape=[None, IMAGE_WEIGHT, IMAGE_HEIGHT, IMAGE_CHANEL], name='X')
y_ = tf.placeholder(tf.int32, [None, ], name='y_')
keep_prob = tf.placeholder(tf.float32)
conv1 = tf.layers.conv2d(
inputs=X,
filters=32,
kernel_size=[3, 3],
padding="same",
activation=tf.nn.relu,
kernel_initializer=tf.truncated_normal_initializer(stddev=0.1))
pool1 = tf.layers.max_pooling2d(inputs=conv1,pool_size=[2, 2], strides=2)
conv2 = tf.layers.conv2d(
inputs=pool1,
filters=64,
kernel_size=[3, 3],
padding="same",
activation=tf.nn.relu,
kernel_initializer=tf.truncated_normal_initializer(stddev=0.1))
pool2 = tf.layers.max_pooling2d(inputs=conv2,pool_size=[2, 2], strides=2)
conv3 = tf.layers.conv2d(
inputs=pool2,
filters=128,
kernel_size=[3, 3],
padding="same",
activation=tf.nn.relu,
kernel_initializer=tf.truncated_normal_initializer(stddev=0.1))
pool3 = tf.layers.max_pooling2d(inputs=conv3,pool_size=[2, 2], strides=2)
conv4 = tf.layers.conv2d(
inputs=pool3,
filters=256,
kernel_size=[3, 3],
padding="same",
activation=tf.nn.relu,
kernel_initializer=tf.truncated_normal_initializer(stddev=0.1))
pool4 = tf.layers.max_pooling2d(inputs=conv4,pool_size=[2, 2], strides=2)
conv5 = tf.layers.conv2d(
inputs=pool4,
filters=512,
kernel_size=[3, 3],
padding="same",
activation=tf.nn.relu,
kernel_initializer=tf.truncated_normal_initializer(stddev=0.1))
#pool5 = tf.layers.max_pooling2d(inputs=conv5,pool_size=[2, 2], strides=2)
pool5 = tf.layers.average_pooling2d(inputs=conv5, pool_size=[4, 4], strides = 4)
print("pool5 shape is :", pool5.get_shape(), " width:", pool5.get_shape()[1])
re1 = tf.reshape(pool5, [-1, int(pool5.get_shape()[1]) * int(pool5.get_shape()[1]) * 512])
dense1 = tf.layers.dense(inputs=re1,
units=512,
activation=tf.nn.relu,
kernel_initializer=tf.truncated_normal_initializer(stddev=0.1),
kernel_regularizer=tf.contrib.layers.l2_regularizer(0.005))
drop1 = tf.nn.dropout(dense1, keep_prob)
logits = tf.layers.dense(inputs=drop1,
units=5,
activation=None,
kernel_initializer=tf.truncated_normal_initializer(stddev=0.1),
kernel_regularizer=tf.contrib.layers.l2_regularizer(0.005))
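# For orientation, a hypothetical shape walk-through (assuming IMAGE_WEIGHT = IMAGE_HEIGHT = 128
# and IMAGE_CHANEL = 3; the real values are defined elsewhere in the project):
#   X:     (N, 128, 128, 3)
#   pool1: (N, 64, 64, 32)
#   pool2: (N, 32, 32, 64)
#   pool3: (N, 16, 16, 128)
#   pool4: (N, 8, 8, 256)
#   conv5: (N, 8, 8, 512)
#   pool5: (N, 2, 2, 512)  -> re1 flattens to (N, 2048)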
#define the loss
loss = tf.losses.sparse_softmax_cross_entropy(labels=y_, logits=logits)
train = tf.train.AdamOptimizer(learning_rate=0.0001).minimize(loss)
correct_prediction = tf.equal(tf.cast(tf.argmax(logits, 1), tf.int32), y_)
#accuracy
acc = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
#saver for the trained model
saver = tf.train.Saver()
#train the model
for step in range(TRAIN_STEP + 1):
    end_index = min(start_index + TRAIN_BATCH_SIZE, X_train.shape[0])
    batch_xs = X_train[start_index:end_index, ]
    batch_ys = y_train[start_index:end_index, ]
    sess.run(train, feed_dict={X: batch_xs, y_: batch_ys, keep_prob: 0.5})
    if step % 50 == 0:
        print("step %d" % step)
        result = sess.run(merged, feed_dict={X: batch_xs, y_: batch_ys, keep_prob: 1.0})
        TrainWriter.add_summary(result, step)
        val_result = sess.run(merged, feed_dict={X: X_val, y_: y_val, keep_prob: 1.0})
        ValWriter.add_summary(val_result, step)
        print("train accuracy: %g" % sess.run(acc, feed_dict={X: batch_xs, y_: batch_ys, keep_prob: 1.0}))
        accuracy = sess.run(acc, feed_dict={X: X_test, y_: y_test, keep_prob: 1.0})
        print("test accuracy %g" % accuracy)
        if accuracy > 0.8:
            saver.save(sess, "save_path/cnn_flower_float.module")
    # Advance the sliding window over the training set, wrapping at the end.
    if end_index >= X_train.shape[0]:
        start_index = 0
    else:
        start_index = end_index
saver.save(sess, "save_path/cnn_flower.module")
sess.close()
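To run the saved model later, rebuild the same graph and restore the checkpoint; a minimal sketch, assuming the graph-construction code above has already run (new_images is a hypothetical batch of preprocessed images):
saver = tf.train.Saver()
with tf.Session() as sess:
    saver.restore(sess, "save_path/cnn_flower.module")
    # Predicted class indices for a batch of new images.
    preds = sess.run(tf.argmax(logits, 1),
                     feed_dict={X: new_images, keep_prob: 1.0})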
Data augmentation
If the first approach does not train to a good enough accuracy, data augmentation can help.
1. Expand the original data 5×
# Augment one image into five variants and save them to disk.
def resize_image(impath, imagename, fold_name):
    strName = imagename.split('.')
    save_name = str(TF_filename_prefix) + str(fold_name) + '\\' + str(strName[0])
    if not os.path.exists(str(TF_filename_prefix) + str(fold_name)):
        os.makedirs(str(TF_filename_prefix) + str(fold_name))
    # Build a fresh graph and session per image to avoid a memory leak.
    tf.reset_default_graph()
    graph = tf.Graph()
    images = []
    with graph.as_default() as g:
        with tf.Session(graph=g) as sess:
            image_raw_data = tf.gfile.FastGFile(impath, "rb").read()
            # Decode the image as JPEG (uint8).
            image_data = tf.image.decode_jpeg(image_raw_data)
            # Five variants: center crop, horizontal flip, contrast, hue, brightness.
            central_result = tf.image.central_crop(image_data, 0.7)
            left_right = tf.image.flip_left_right(image_data)
            contrast = tf.image.adjust_contrast(image_data, 0.2)
            hue = tf.image.adjust_hue(image_data, delta=0.2)
            brightness = tf.image.adjust_brightness(image_data, delta=32. / 255.)
            for variant in (central_result, left_right, contrast, hue, brightness):
                images.append(tf.image.encode_jpeg(variant))
            # Save the augmented images.
            for i, img in enumerate(images):
                with tf.gfile.GFile(save_name + "_%d.jpg" % i, 'wb') as f:
                    f.write(sess.run(img))
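A sketch of driving the augmentation over the whole dataset; the root folder name is a placeholder, and the layout is assumed to be one subfolder per flower class:
main_folder = "flower_photos_780"  # placeholder root folder
for fold_name in os.listdir(main_folder):
    folder_path = os.path.join(main_folder, fold_name)
    if not os.path.isdir(folder_path):
        continue
    for imagename in os.listdir(folder_path):
        # Writes five augmented copies of each image to disk.
        resize_image(os.path.join(folder_path, imagename), imagename, fold_name)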
2. Train the model
# Reading differs slightly from before, because the test data and training data are now stored in two separate TFRecord files.
def read_test_record(filename_queue):
reader = tf.TFRecordReader()
_, serialized_example = reader.read(filename_queue)
features = tf.parse_single_example(serialized_example, features={
'image_raw': tf.FixedLenFeature([], tf.string),
'label': tf.FixedLenFeature([], tf.int64),
})
    image = tf.decode_raw(features['image_raw'], tf.float32)
    # reshape the flat bytes back into an image tensor
    image = tf.reshape(image, [IMAGE_WEIGHT, IMAGE_HEIGHT, IMAGE_CHANEL])
label = tf.cast(features['label'], tf.int32)
return image, label
def read_train_data(filename_queue):
reader = tf.TFRecordReader()
_, serialized_example = reader.read(filename_queue)
features = tf.parse_single_example(serialized_example, features={
'image_raw': tf.FixedLenFeature([], tf.string),
'label': tf.FixedLenFeature([], tf.int64),
})
image = tf.decode_raw(features['image_raw'], tf.float32)
image = tf.reshape(image, [IMAGE_WEIGHT, IMAGE_HEIGHT, IMAGE_CHANEL])
label = tf.cast(features['label'], tf.int32)
img_batch, lab_batch = tf.train.shuffle_batch([image, label], batch_size=BATCH_SIZE, capacity=17500, min_after_dequeue=3000)
#img_batch, lab_batch = tf.train.batch([image, label], batch_size=BATCH_SIZE, capacity=800, num_threads=4)
return img_batch, lab_batch
# Load all test images into memory once.
with tf.Session() as sess:
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord)
    for i in range(TEST_IMAGE_COUNT):
        img, label_result = sess.run([test_image, test_label])
        test_images.append(img)
        test_labels.append(label_result)
    coord.request_stop()
    coord.join(threads)
with tf.Session() as sess:
    sess.run(init_op)
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord)
    step = 0
    for i in range(TRAIN_STEP):
        img_batch, lab_batch = sess.run([image_batch, label_batch])
        sess.run(train, feed_dict={X: img_batch, y_: lab_batch, keep_prob: 0.5})
        if step % 50 == 0:
            print("step %d" % step)
            result = sess.run(merged, feed_dict={X: img_batch, y_: lab_batch, keep_prob: 1.0})
            TrainWriter.add_summary(result, step)
            print("train accuracy: %g" % sess.run(acc, feed_dict={X: img_batch, y_: lab_batch, keep_prob: 1.0}))
            accuracy = sess.run(acc, feed_dict={X: test_images[0:200], y_: test_labels[0:200], keep_prob: 1.0})
            print("test accuracy %g" % accuracy)
            if accuracy > 0.7:
                accuracy1 = sess.run(acc, feed_dict={X: test_images[200:400], y_: test_labels[200:400], keep_prob: 1.0})
                accuracy2 = sess.run(acc, feed_dict={X: test_images[400:600], y_: test_labels[400:600], keep_prob: 1.0})
                accuracy3 = sess.run(acc, feed_dict={X: test_images[600:800], y_: test_labels[600:800], keep_prob: 1.0})
                print("overall accuracy is:", (accuracy + accuracy1 + accuracy2 + accuracy3) / 4.)
                saver.save(sess, "save_path/cnn_flower_float_add.module")
        step += 1
    print("test accuracy %g" % sess.run(acc, feed_dict={
        X: test_images[0:100], y_: test_labels[0:100], keep_prob: 1.0}))
    coord.request_stop()
    coord.join(threads)
# The rest of the training code is the same as before, so it is not repeated here.
Fine-tuning
If data augmentation still does not lift accuracy enough, consider transfer learning, i.e. fine-tuning a network pre-trained on a large dataset. With our small dataset, this saves training time and improves accuracy.
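The script below expects Google's pre-trained Inception-v3 graph (MODEL_FILE = tensorflow_inception_graph.pb), which shipped in the inception-2015-12-05.tgz archive. A download sketch; the URL is the one cited in the classic TensorFlow tutorials, so verify it is still reachable:
import os
import tarfile
import urllib.request

URL = "http://download.tensorflow.org/models/image/imagenet/inception-2015-12-05.tgz"
archive = "inception-2015-12-05.tgz"
if not os.path.exists(archive):
    urllib.request.urlretrieve(URL, archive)
with tarfile.open(archive) as tar:
    # The archive is assumed to contain tensorflow_inception_graph.pb at its top level.
    tar.extract("tensorflow_inception_graph.pb")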
# -*- coding: utf-8 -*-
"""
卷积神经网络 Inception-v3模型 迁移学习
"""
import glob
import os.path
import random
import numpy as np
import tensorflow as tf
from tensorflow.python.platform import gfile
import os
os.environ['TF_CPP_MIN_LOG_LEVEL']='2'
# Number of nodes in the Inception-v3 bottleneck layer
BOTTLENECK_TENSOR_SIZE = 2048
# Name of the tensor that holds the bottleneck output in the Inception-v3 graph
BOTTLENECK_TENSOR_NAME = 'pool_3/_reshape:0'
# Name of the JPEG image input tensor
JPEG_DATA_TENSOR_NAME = 'DecodeJpeg/contents:0'
# Directory containing Google's pre-trained Inception-v3 model
MODEL_DIR = 'D:\\workspace\\PycharmSpace\\flow_classify\\com\\shuan\\flower_classify\\fine_train'
# File name of the pre-trained Inception-v3 model
MODEL_FILE = 'tensorflow_inception_graph.pb'
# Cache directory for the feature vectors extracted at the bottleneck layer
CACHE_DIR = 'tmp/bottleneck'
# Folder containing the image data
INPUT_DATA = 'D:\\workspace\\PycharmSpace\\flow_classify\\com\\shuan\\flower_classify\\flower_photos_780'
# Percentage of the data used for validation
VALIDATION_PERCENTAGE = 10
# Percentage of the data used for testing
TEST_PERCENTACE = 10
# Network hyperparameters
LEARNING_RATE = 0.01
STEPS = 4000
BATCH = 128
# Note: the dataset images come in different sizes, yet they can be fed straight into the
# Inception-v3 model and always yield a fixed-size feature vector. Asked about this on the
# book's GitHub, the answer was that the Inception-v3 graph itself contains the image
# preprocessing and resizing steps.
def create_image_lists(testing_percentage, validation_percentage):
    """
    Split the dataset into training, validation, and test sets.
    :param testing_percentage: percentage of the data used for testing (10)
    :param validation_percentage: percentage of the data used for validation (10)
    :return:
    """
    result = {}
    # All subdirectories under the data directory
    sub_dirs = [x[0] for x in os.walk(INPUT_DATA)]
    # e.g. ['/path/to/flower_data', '/path/to/flower_data\\daisy', '/path/to/flower_data\\dandelion',
    #       '/path/to/flower_data\\roses', '/path/to/flower_data\\sunflowers', '/path/to/flower_data\\tulips']
    # The first entry is the root directory itself; flag it and skip it
    is_root_dir = True
    for sub_dir in sub_dirs:  # process one class per iteration
        if is_root_dir:
            is_root_dir = False
            continue
        # Collect all valid image files in the current directory
        extensions = ['jpg', 'jpeg', 'JPG', 'JPEG']
        file_list = []
        dir_name = os.path.basename(sub_dir)  # base name of the path, e.g. daisy|dandelion|roses|sunflowers|tulips
        for extension in extensions:
            file_glob = os.path.join(INPUT_DATA, dir_name, '*.' + extension)  # join the path components
            file_list.extend(glob.glob(file_glob))  # glob.glob returns all matching paths; extend appends them
        if not file_list:
            continue
        # The class label is the directory name
        label_name = dir_name.lower()
        # Initialize the training, testing, and validation sets for this class
        training_images = []
        testing_images = []
        validation_images = []
        for file_name in file_list:  # iterate over every image of this class
            base_name = os.path.basename(file_name)  # image file name, e.g. 102841525_bd6628ae3c.jpg
            # Randomly assign the image to the training, testing, or validation set
            chance = np.random.randint(100)
            if chance < validation_percentage:
                validation_images.append(base_name)
            elif chance < (testing_percentage + validation_percentage):
                testing_images.append(base_name)
            else:
                training_images.append(base_name)
        result[label_name] = {
            'dir': dir_name,
            'training': training_images,
            'testing': testing_images,
            'validation': validation_images
        }
    return result
# Build the path of one image from its class name, data set, and index
def get_image_path(image_lists, image_dir, label_name, index, category):
    """
    :param image_lists: info about all images
    :param image_dir: root directory (feature-vector root CACHE_DIR | raw-image root INPUT_DATA)
    :param label_name: class name (daisy|dandelion|roses|sunflowers|tulips)
    :param index: image index
    :param category: which data set (training|testing|validation)
    :return: the path of one image
    """
    # All images of the given class
    label_lists = image_lists[label_name]
    # The requested data set of that class (a list of base names)
    category_list = label_lists[category]
    mod_index = index % len(category_list)  # image index modulo the number of images in this set
    # Image file name
    base_name = category_list[mod_index]
    sub_dir = label_lists['dir']
    # Join into a full path
    full_path = os.path.join(image_dir, sub_dir, base_name)
    return full_path

# Path of the cached feature-vector file of one image
def get_bottleneck_path(image_lists, label_name, index, category):
    return get_image_path(image_lists, CACHE_DIR, label_name, index, category) + '.txt'  # CACHE_DIR is the feature-vector root
# Compute the feature vector of one image
def run_bottleneck_on_image(sess, image_data, image_data_tensor, bottleneck_tensor):
    """
    :param sess:
    :param image_data: raw image contents
    :param image_data_tensor:
    :param bottleneck_tensor:
    :return:
    """
    # The input images differ in size, so image_data differs in size too (verified), yet the
    # loaded Inception-v3 model always produces a 2048-element feature vector.
    bottleneck_values = sess.run(bottleneck_tensor, {image_data_tensor: image_data})
    bottleneck_values = np.squeeze(bottleneck_values)
    return bottleneck_values
# Get (or compute and cache) the feature vector of one image
def get_or_create_bottleneck(sess, image_lists, label_name, index, category, jpeg_data_tensor, bottleneck_tensor):
    """
    :param sess:
    :param image_lists:
    :param label_name: class name
    :param index: image index
    :param category:
    :param jpeg_data_tensor:
    :param bottleneck_tensor:
    :return:
    """
    label_lists = image_lists[label_name]
    sub_dir = label_lists['dir']
    sub_dir_path = os.path.join(CACHE_DIR, sub_dir)  # cache folder of this class
    if not os.path.exists(sub_dir_path):
        os.makedirs(sub_dir_path)
    bottleneck_path = get_bottleneck_path(image_lists, label_name, index, category)  # path of the cached feature vector
    if not os.path.exists(bottleneck_path):  # not cached yet
        # Path of the raw image
        image_path = get_image_path(image_lists, INPUT_DATA, label_name, index, category)
        # Read the image contents
        image_data = gfile.FastGFile(image_path, 'rb').read()
        # Compute the feature vector
        bottleneck_values = run_bottleneck_on_image(sess, image_data, jpeg_data_tensor, bottleneck_tensor)
        # Store the feature vector in a file
        bottleneck_string = ','.join(str(x) for x in bottleneck_values)
        with open(bottleneck_path, 'w') as bottleneck_file:
            bottleneck_file.write(bottleneck_string)
    else:
        # Read the cached feature-vector file
        with open(bottleneck_path, 'r') as bottleneck_file:
            bottleneck_string = bottleneck_file.read()
        # Convert the string back into a float array
        bottleneck_values = [float(x) for x in bottleneck_string.split(',')]
    return bottleneck_values
# Randomly sample a batch of training data (feature vectors and labels)
def get_random_cached_bottlenecks(sess, n_classes, image_lists, how_many, category, jpeg_data_tensor,
                                  bottleneck_tensor):
    """
    :param sess:
    :param n_classes: number of classes
    :param image_lists:
    :param how_many: batch size
    :param category: which data set
    :param jpeg_data_tensor:
    :param bottleneck_tensor:
    :return: list of feature vectors, list of labels
    """
    bottlenecks = []
    ground_truths = []
    for _ in range(how_many):
        # Pick a random class and image index for this training example
        label_index = random.randrange(n_classes)
        label_name = list(image_lists.keys())[label_index]  # class name of the random image
        image_index = random.randrange(65536)  # random image index
        bottleneck = get_or_create_bottleneck(sess, image_lists, label_name, image_index, category, jpeg_data_tensor,
                                              bottleneck_tensor)  # feature vector of this image
        ground_truth = np.zeros(n_classes, dtype=np.float32)
        ground_truth[label_index] = 1.0
        bottlenecks.append(bottleneck)
        ground_truths.append(ground_truth)
    return bottlenecks, ground_truths
# Collect all test data
def get_test_bottlenecks(sess, image_lists, n_classes, jpeg_data_tensor, bottleneck_tensor):
    bottlenecks = []
    ground_truths = []
    label_name_list = list(image_lists.keys())  # ['dandelion', 'daisy', 'sunflowers', 'roses', 'tulips']
    for label_index, label_name in enumerate(label_name_list):  # enumerate the classes, e.g. 0 sunflowers
        category = 'testing'
        for index, unused_base_name in enumerate(image_lists[label_name][category]):  # enumerate the test images of this class
            bottleneck = get_or_create_bottleneck(
                sess, image_lists, label_name, index, category, jpeg_data_tensor, bottleneck_tensor)
            ground_truth = np.zeros(n_classes, dtype=np.float32)
            ground_truth[label_index] = 1.0
            bottlenecks.append(bottleneck)
            ground_truths.append(ground_truth)
    return bottlenecks, ground_truths
def main(_):
    image_lists = create_image_lists(TEST_PERCENTACE, VALIDATION_PERCENTAGE)
    n_classes = len(image_lists.keys())
    # Read the pre-trained model
    with gfile.FastGFile(os.path.join(MODEL_DIR, MODEL_FILE), 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
    # Load the graph and get back the named tensors
    bottleneck_tensor, jpeg_data_tensor = tf.import_graph_def(graph_def, return_elements=[BOTTLENECK_TENSOR_NAME,
                                                                                          JPEG_DATA_TENSOR_NAME])
    # Inputs
    bottleneck_input = tf.placeholder(tf.float32, [None, BOTTLENECK_TENSOR_SIZE], name='BottleneckInputPlaceholder')
    ground_truth_input = tf.placeholder(tf.float32, [None, n_classes], name='GroundTruthInput')
    # Fully connected layer
    with tf.name_scope('final_training_ops'):
        weights = tf.Variable(tf.truncated_normal([BOTTLENECK_TENSOR_SIZE, n_classes], stddev=0.001))
        biases = tf.Variable(tf.zeros([n_classes]))
        logits = tf.matmul(bottleneck_input, weights) + biases
        final_tensor = tf.nn.softmax(logits)
    # Loss
    cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=ground_truth_input)
    cross_entropy_mean = tf.reduce_mean(cross_entropy)
    # Optimizer
    train_step = tf.train.GradientDescentOptimizer(LEARNING_RATE).minimize(cross_entropy_mean)
    # Accuracy
    with tf.name_scope('evaluation'):
        correct_prediction = tf.equal(tf.argmax(final_tensor, 1), tf.argmax(ground_truth_input, 1))
        evaluation_step = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    with tf.Session() as sess:
        # Initialize the variables
        init = tf.global_variables_initializer()
        sess.run(init)
        # TensorBoard log directory
        # log_dir = 'inception_log'
        # if not os.path.exists(log_dir):
        #     os.makedirs(log_dir)
        # tf.import_graph_def(graph_def, name='')
        # writer = tf.summary.FileWriter(log_dir, sess.graph)
        # writer.close()
        test_bottlenecks, test_ground_truth = get_test_bottlenecks(sess, image_lists, n_classes, jpeg_data_tensor,
                                                                   bottleneck_tensor)
        for i in range(STEPS):
            # Sample a batch of training data
            train_bottlenecks, train_ground_truth = get_random_cached_bottlenecks(sess, n_classes, image_lists, BATCH,
                                                                                  'training', jpeg_data_tensor,
                                                                                  bottleneck_tensor)
            # Train
            sess.run(train_step,
                     feed_dict={bottleneck_input: train_bottlenecks, ground_truth_input: train_ground_truth})
            # Validate
            if i % 50 == 0 or i + 1 == STEPS:
                validation_bottlenecks, validation_ground_truth = get_random_cached_bottlenecks(sess, n_classes,
                                                                                                image_lists, BATCH,
                                                                                                'validation',
                                                                                                jpeg_data_tensor,
                                                                                                bottleneck_tensor)
                validation_accuracy = sess.run(evaluation_step, feed_dict={bottleneck_input: validation_bottlenecks,
                                                                           ground_truth_input: validation_ground_truth})
                print('Step %d: Validation accuracy on random sampled %d examples = %.1f%%' % (
                    i, BATCH, validation_accuracy * 100))
                if validation_accuracy > 0.95:
                    # Test
                    test_accuracy = sess.run(evaluation_step,
                                             feed_dict={bottleneck_input: test_bottlenecks,
                                                        ground_truth_input: test_ground_truth})
                    print('Final test accuracy = %.1f%%' % (test_accuracy * 100))

if __name__ == '__main__':
    tf.app.run()
Experiment results
Experiment 1: clasify_flower_float reaches at best about 80% accuracy on the test data.
1. Image processing:
a. Pad first, then resize, so the images are not distorted. b. Normalize the pixel values to [0, 1).
2. Model changes:
a. All convolutions use 3×3 kernels. b. One convolutional layer added. c. The final pooling is average pooling with pool_size=[4, 4], strides=4 (this greatly reduces the model size). d. One fully connected layer removed, leaving two with 512 and 5 units. e. One dropout layer added (0.6).
Experiment 2: flower_classify_with_data_enhance reaches at best about 90% accuracy.
1. Data augmentation:
a. 5× the data: 3900 × 5 = 19500 (center crop, left-right flip, color transforms). b. 17200 samples used for training, 2000 for testing.
2. The model is essentially the same as above.
Experiment 3: fine_tuning_inception_v3 reaches at best about 96% accuracy, by fine-tuning Google's inception_v3.