Preface
Multi-label classification is the classification setting in which one image can carry several labels at once. For example, a surveillance frame captured at a village crossroads should carry both the "road" and "village" labels.
I. About Transfer Learning
Transfer learning refers to transferring knowledge learned from a source dataset to a target dataset, so that a model pretrained on a large dataset can be adapted to a new task with relatively little data.
II. Multi-Label Classification Example Code
1. Importing libraries
Import the required libraries:
from keras import backend
import cv2
from keras.layers import Dense, Flatten
from keras.optimizers import Adam
from keras.applications.vgg16 import VGG16
from keras.models import Model
from os import listdir
import numpy as np
from numpy import zeros, asarray, savez_compressed
from pandas import read_csv
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
from keras.preprocessing.image import ImageDataGenerator
from keras.applications.vgg16 import preprocess_input
from keras.callbacks import ModelCheckpoint
2. The model
A line-by-line explanation:
- (in_shape=(128, 128, 3), out_shape=6): input images default to 128x128 pixels with 3 color channels, and there are 6 output classes
- 'vgg16_weights.h5' is the downloaded VGG16 pretrained weights file; placing it in the same directory as the script is enough
- base_model excludes the top fully connected layers (include_top=False) and specifies the input shape
- the loop iterates over all layers in base_model and sets their trainable attribute to False, so their weights will not be updated during training
- tune_layers selects the layers to fine-tune: their weights will be updated during training while the remaining layers stay frozen
- flat1 flattens the output of the truncated VGG model (base_model above) into a one-dimensional vector
- class1 defines a fully connected layer that takes the flattened output as input
- the output layer has one neuron per class; sigmoid activations let several labels be active at once, which is why the loss is binary_crossentropy rather than categorical_crossentropy
- the pieces are combined into a new model, model
- compile sets the optimizer, learning rate, loss function, and evaluation metric (a custom fbeta function)
- model.summary() prints the model's architecture (only a small part is shown in the original post)
# F-beta metric (beta=2) used as the evaluation metric below; the original post
# references fbeta without defining it, so this standard Keras-backend
# implementation is assumed
def fbeta(y_true, y_pred, beta=2):
    y_pred = backend.clip(y_pred, 0, 1)
    tp = backend.sum(backend.round(backend.clip(y_true * y_pred, 0, 1)), axis=1)
    fp = backend.sum(backend.round(backend.clip(y_pred - y_true, 0, 1)), axis=1)
    fn = backend.sum(backend.round(backend.clip(y_true - y_pred, 0, 1)), axis=1)
    p = tp / (tp + fp + backend.epsilon())
    r = tp / (tp + fn + backend.epsilon())
    bb = beta ** 2
    return backend.mean((1 + bb) * (p * r) / (bb * p + r + backend.epsilon()))

def define_model(in_shape=(128, 128, 3), out_shape=6):
    local_path = 'vgg16_weights.h5'
    base_model = VGG16(weights=local_path, include_top=False, input_shape=in_shape)
    # freeze all layers, then unfreeze the last convolutional block for fine-tuning
    for layer in base_model.layers:
        layer.trainable = False
    tune_layers = [layer.name for layer in base_model.layers if layer.name.startswith('block5_')]
    for layer_name in tune_layers:
        base_model.get_layer(layer_name).trainable = True
    flat1 = Flatten()(base_model.layers[-1].output)
    class1 = Dense(128, activation='relu', kernel_initializer='he_uniform')(flat1)
    output = Dense(out_shape, activation='sigmoid')(class1)
    model = Model(inputs=base_model.input, outputs=output)
    opt = Adam(learning_rate=1e-3)
    model.compile(optimizer=opt, loss='binary_crossentropy', metrics=[fbeta])
    model.summary()
    return model
3. Creating the mapping between tags and integers
This depends on the format of the tag file; more on that when it is used below.
def create_tag_mapping(mapping_csv):
    labels = set()
    for i in range(len(mapping_csv)):
        tags = mapping_csv['tags'][i].split(' ')
        labels.update(tags)
    labels = sorted(list(labels))
    labels_map = {labels[i]: i for i in range(len(labels))}
    inv_labels_map = {i: labels[i] for i in range(len(labels))}
    return labels_map, inv_labels_map
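To see what these mappings look like, here is the same logic applied to a few hypothetical tag strings standing in for the CSV's tags column (the tag names are invented for illustration):

```python
# hypothetical contents of the `tags` column, one space-separated string per image
rows = ['road village', 'indoor', 'road outdoor']

labels = set()
for tags in rows:
    labels.update(tags.split(' '))
labels = sorted(labels)
labels_map = {label: i for i, label in enumerate(labels)}      # tag -> integer
inv_labels_map = {i: label for i, label in enumerate(labels)}  # integer -> tag
print(labels_map)  # {'indoor': 0, 'outdoor': 1, 'road': 2, 'village': 3}
```

Sorting the label set first makes the tag-to-integer assignment deterministic across runs.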
4. Creating a filename-to-tags dictionary
This maps each image to its tags: for example, if image1.jpg is tagged road and outdoor, the key image1 maps to the value ['road', 'outdoor'].
def create_file_mapping(mapping_csv):
    mapping = dict()
    for i in range(len(mapping_csv)):
        name, tags = mapping_csv['image_name'][i], mapping_csv['tags'][i]
        mapping[name] = tags.split(' ')
    return mapping
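The result for a couple of hypothetical rows (names and tags invented for illustration) looks like this:

```python
# stand-in (image_name, tags) rows as they would appear in the tag CSV
rows = [('image1', 'road outdoor'), ('image2', 'village')]
mapping = {name: tags.split(' ') for name, tags in rows}
print(mapping['image1'])  # ['road', 'outdoor']
```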
5. One-hot encoding
With three tags in total, [road, outdoor, indoor], if image1.jpg is tagged road and outdoor, its one-hot encoding is [1, 1, 0].
def one_hot_encode(tags, mapping):
    encoding = zeros(len(mapping), dtype='uint8')
    for tag in tags:
        encoding[mapping[tag]] = 1
    return encoding
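A quick sanity check of the encoder with a hypothetical three-tag mapping:

```python
from numpy import zeros

def one_hot_encode(tags, mapping):
    # one slot per known tag; set the slots of the present tags to 1
    encoding = zeros(len(mapping), dtype='uint8')
    for tag in tags:
        encoding[mapping[tag]] = 1
    return encoding

mapping = {'road': 0, 'outdoor': 1, 'indoor': 2}
encoding = one_hot_encode(['road', 'outdoor'], mapping)
print(encoding.tolist())  # [1, 1, 0]
```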
6. Loading the image dataset with basic preprocessing
The return values are the image array X and the one-hot encoding of each image's tags.
def load_dataset(path, file_mapping, tag_mapping):
    photos, targets = list(), list()
    target_size = (128, 128)
    for filename in listdir(path):
        photo = cv2.imread(path + filename)
        img = cv2.resize(photo, target_size)
        photo = np.array(img, dtype='uint8')
        tags = file_mapping[filename[:-4]]  # strip the .jpg extension to look up the tags
        target = one_hot_encode(tags, tag_mapping)  # one-hot encode the tags
        photos.append(photo)
        targets.append(target)
    X = asarray(photos, dtype='uint8')
    y = asarray(targets, dtype='uint8')
    return X, y
7. Preparing the data for later use
Call the functions above, then save the loaded image data X and its labels y to a single compressed .npz file.
filename = 'train_v2.csv' # load the target file
mapping_csv = read_csv(filename)
tag_mapping, _ = create_tag_mapping(mapping_csv) # create a mapping of tags to integers
file_mapping = create_file_mapping(mapping_csv) # create a mapping of filenames to tag lists
folder = 'train-jpg/' # load the jpeg images
X, y = load_dataset(folder, file_mapping, tag_mapping)
print(X.shape, y.shape)
savez_compressed('planet_data.npz', X, y) # save both arrays to one file in compressed format
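savez_compressed stores positional arrays under the keys arr_0, arr_1, ..., which is why the loading step in the next section reads data['arr_0'] and data['arr_1']. A minimal round-trip on dummy arrays (the file path here is an arbitrary temp location):

```python
import os
import tempfile
import numpy as np
from numpy import savez_compressed

# dummy stand-ins for the image array X and label array y
X = np.zeros((2, 128, 128, 3), dtype='uint8')
y = np.zeros((2, 6), dtype='uint8')

path = os.path.join(tempfile.mkdtemp(), 'demo_data.npz')
savez_compressed(path, X, y)  # positional arguments become arr_0, arr_1

data = np.load(path)
print(data['arr_0'].shape, data['arr_1'].shape)  # (2, 128, 128, 3) (2, 6)
```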
8. Splitting into training and test sets
# note: this shadows load_dataset from step 6; this version reloads from the saved .npz file
def load_dataset():
    data = np.load('planet_data.npz')
    X, y = data['arr_0'], data['arr_1']
    trainX, testX, trainY, testY = train_test_split(X, y, test_size=0.3, random_state=1)
    print(trainX.shape, trainY.shape, testX.shape, testY.shape)
    return trainX, trainY, testX, testY
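The split can be checked on small dummy arrays; with test_size=0.3 and ten samples, seven go to training and three to testing, and random_state=1 makes the split reproducible:

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(20).reshape(10, 2)  # ten dummy samples with two features each
y = np.arange(10)
trainX, testX, trainY, testY = train_test_split(X, y, test_size=0.3, random_state=1)
print(trainX.shape, testX.shape)  # (7, 2) (3, 2)
```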
9. A function to plot the learning curves
def summarize_diagnostics(history):
    plt.figure(figsize=(12, 6))
    plt.subplot(121)
    plt.plot(history.history['loss'], label='Training Loss')
    plt.plot(history.history['val_loss'], label='Validation Loss')
    plt.title('Training and Validation Loss')
    plt.xlabel('Epochs')
    plt.ylabel('Loss')
    plt.legend()
    plt.subplot(122)
    plt.plot(history.history['fbeta'], label='Training F-beta')
    plt.plot(history.history['val_fbeta'], label='Validation F-beta')
    plt.title('Training and Validation F-beta')
    plt.xlabel('Epochs')
    plt.ylabel('F-beta')
    plt.legend()
    plt.tight_layout()
    plt.show()
10. Loading and augmenting the data + training the model + evaluating and saving it
- load the data, then set up preprocessing and augmentation (horizontal/vertical flips and rotation), with a batch size of 128
- model = define_model() instantiates the network; train it and save the best weights with the ModelCheckpoint callback
- evaluate the model; metrics such as accuracy and recall can also be computed
trainX, trainY, testX, testY = load_dataset()
train_datagen = ImageDataGenerator(horizontal_flip=True, vertical_flip=True, rotation_range=90, preprocessing_function=preprocess_input)
test_datagen = ImageDataGenerator(preprocessing_function=preprocess_input)
train_it = train_datagen.flow(trainX, trainY, batch_size=128)
test_it = test_datagen.flow(testX, testY, batch_size=128)
model = define_model()
checkpointer = ModelCheckpoint(filepath='./weights.best.vgg16.hdf5', verbose=1, save_best_only=True)
history = model.fit_generator(train_it, steps_per_epoch=len(train_it), validation_data=test_it,
                              validation_steps=len(test_it), epochs=15, callbacks=[checkpointer], verbose=0)
model.load_weights('./weights.best.vgg16.hdf5')
loss, fbeta = model.evaluate_generator(test_it, steps=len(test_it), verbose=0)
print('> loss=%.3f, fbeta=%.3f' % (loss, fbeta))
model.save('final_model.h5')
summarize_diagnostics(history)
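The article stops at evaluation; to actually use final_model.h5, predictions come out as one sigmoid probability per tag, and turning them into tag names is a simple threshold plus the inverse mapping from step 3. A sketch of that post-processing with hypothetical probabilities and tag names:

```python
import numpy as np

# hypothetical sigmoid outputs for one image (one value per tag) and an
# invented inverse mapping; in practice probs would come from model.predict(...)
# and inv_labels_map from create_tag_mapping
probs = np.array([0.91, 0.08, 0.77, 0.30, 0.12, 0.55])
inv_labels_map = {0: 'road', 1: 'indoor', 2: 'village', 3: 'river', 4: 'field', 5: 'outdoor'}

# keep every tag whose probability clears the 0.5 threshold
predicted = [inv_labels_map[i] for i, p in enumerate(probs) if p >= 0.5]
print(predicted)  # ['road', 'village', 'outdoor']
```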
There are many transfer learning techniques; the code above uses fine-tuning, adjusting the parameters of a few intermediate layers (the block5 layers) along with those of the newly added top layers.