Transfer Learning with a Pre-trained VGG16 Network in Keras: CIFAR-10 Classification [Training, Prediction & Evaluation in 80 Lines of Code]






1. Introduction

CIFAR-10 consists of 60,000 32x32 color images in 10 classes, with 6,000 images per class. There are 50,000 training images and 10,000 test images.
Below are the classes in the dataset, along with 10 random images from each:
[Figure: the 10 CIFAR-10 classes, each shown with 10 random example images]
This example reaches a validation accuracy of 71.50%. For up-to-date results see Classification datasets results; the state-of-the-art accuracy is 96.53%, achieved by Fractional Max-Pooling.
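As a quick sanity check, the sketch below (a minimal example, assuming only that Keras and its bundled datasets are available) loads CIFAR-10 and prints the array shapes, which should match the numbers above.

from keras.datasets import cifar10

# Load CIFAR-10 and confirm the split sizes and image resolution
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
print(x_train.shape)  # expected: (50000, 32, 32, 3)
print(x_test.shape)   # expected: (10000, 32, 32, 3)
print(y_train.shape)  # expected: (50000, 1), integer class labels 0-9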




2. Training Code

from keras import optimizers
from keras import applications
from keras.models import Model
from keras.datasets import cifar10
from keras.models import Sequential
from keras.utils import to_categorical
from keras.callbacks import ModelCheckpoint
from keras.layers import Dropout, Flatten, Dense

# Dataset
(x_train, y_train), (x_test, y_test) = cifar10.load_data()  # load CIFAR-10 from the datasets bundled with Keras
x_train = x_train.astype('float32') / 255  # 50000 32x32 images with pixel values 0-255, scaled to 0-1
x_test = x_test.astype('float32') / 255  # 10000 test images
y_train = to_categorical(y_train, 10)  # one-hot encode the labels
y_test = to_categorical(y_test, 10)

# Define the model
base_model = applications.VGG16(
    weights="imagenet", include_top=False,
    input_shape=(32, 32, 3))  # VGG16 pre-trained on ImageNet, with the top classifier removed

for layer in base_model.layers[:15]:
    layer.trainable = False  # freeze the first 15 layers; only the last convolutional block stays trainable

top_model = Sequential()  # custom top classifier
top_model.add(Flatten(input_shape=base_model.output_shape[1:]))  # flatten the base network's output
top_model.add(Dense(32, activation='relu'))  # fully connected layer with 32 units
top_model.add(Dropout(0.5))  # dropout with probability 0.5
top_model.add(Dense(10, activation='softmax'))  # softmax output layer for the 10 classes

model = Model(
    inputs=base_model.input,
    outputs=top_model(base_model.output))  # full model = pre-trained base + custom top

model.compile(
    loss='categorical_crossentropy',
    optimizer=optimizers.SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True),
    metrics=['accuracy'])  # categorical cross-entropy loss, SGD optimizer with momentum

model.summary()  # summary() already prints the architecture

# Train & save
checkpointer = ModelCheckpoint(
    filepath='cifar10.h5', verbose=1, save_best_only=True)  # keep only the model with the best (lowest) validation loss

model.fit(
    x=x_train,
    y=y_train,
    batch_size=32,
    epochs=10,
    verbose=1,
    callbacks=[checkpointer],
    validation_split=0.1,
    shuffle=True)
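To double-check which layers were actually frozen by the loop above, a minimal sketch (assuming base_model has been built as shown) simply prints each layer's trainable flag:

# Minimal sketch: list every VGG16 layer and whether it will be updated during training
for i, layer in enumerate(base_model.layers):
    print(i, layer.name, 'trainable' if layer.trainable else 'frozen')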




3. Training Results

Train on 45000 samples, validate on 5000 samples
Epoch 1/100
45000/45000 [==============================] - 26s 569us/step - loss: 1.6481 - acc: 0.4057 - val_loss: 1.4043 - val_acc: 0.5536

Epoch 00001: val_loss improved from inf to 1.40427, saving model to cifar10.h5
Epoch 2/100
45000/45000 [==============================] - 25s 545us/step - loss: 1.2214 - acc: 0.6018 - val_loss: 1.0850 - val_acc: 0.6180

Epoch 00002: val_loss improved from 1.40427 to 1.08503, saving model to cifar10.h5
Epoch 3/100
45000/45000 [==============================] - 24s 539us/step - loss: 1.1095 - acc: 0.6406 - val_loss: 0.9111 - val_acc: 0.7030

Epoch 00003: val_loss improved from 1.08503 to 0.91108, saving model to cifar10.h5
Epoch 4/100
45000/45000 [==============================] - 24s 533us/step - loss: 1.0278 - acc: 0.6705 - val_loss: 0.8933 - val_acc: 0.7036

Epoch 00004: val_loss improved from 0.91108 to 0.89325, saving model to cifar10.h5
Epoch 5/100
45000/45000 [==============================] - 24s 526us/step - loss: 0.9678 - acc: 0.6872 - val_loss: 0.9133 - val_acc: 0.7106

Epoch 00005: val_loss did not improve from 0.89325
Epoch 6/100
45000/45000 [==============================] - 24s 523us/step - loss: 0.9295 - acc: 0.6996 - val_loss: 0.8640 - val_acc: 0.7150

Epoch 00006: val_loss improved from 0.89325 to 0.86402, saving model to cifar10.h5
Epoch 7/100
45000/45000 [==============================] - 24s 522us/step - loss: 0.8776 - acc: 0.7161 - val_loss: 0.9938 - val_acc: 0.6930

Epoch 00007: val_loss did not improve from 0.86402
Epoch 8/100
45000/45000 [==============================] - 24s 522us/step - loss: 0.8501 - acc: 0.7264 - val_loss: 0.9021 - val_acc: 0.7180

Epoch 00008: val_loss did not improve from 0.86402
Epoch 9/100
45000/45000 [==============================] - 24s 525us/step - loss: 0.7992 - acc: 0.7415 - val_loss: 0.8725 - val_acc: 0.7274

Epoch 00009: val_loss did not improve from 0.86402
Epoch 10/100
45000/45000 [==============================] - 24s 525us/step - loss: 0.7816 - acc: 0.7477 - val_loss: 0.8637 - val_acc: 0.7368

The model has essentially converged by epoch 6; the original 32x32 images are too low-resolution for the accuracy to improve much further.

The resulting model:

[Figure: the saved cifar10.h5 model file]
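Since validation loss stops improving after roughly epoch 6, an EarlyStopping callback could be added alongside the ModelCheckpoint to end training automatically. The sketch below is only an illustrative variant of the fit call above, not part of the original run; the patience value of 3 is an assumption.

from keras.callbacks import EarlyStopping

# Stop training once val_loss has not improved for 3 consecutive epochs
early_stopper = EarlyStopping(monitor='val_loss', patience=3, verbose=1)

model.fit(
    x=x_train, y=y_train,
    batch_size=32, epochs=100,
    callbacks=[checkpointer, early_stopper],
    validation_split=0.1, shuffle=True)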




4. Prediction Code

import time
import random
import numpy as np
import matplotlib.pyplot as plt
from keras.datasets import cifar10
from keras.models import load_model
from keras.utils import to_categorical

# Dataset
(_, _), (x_test, y_test) = cifar10.load_data()  # load CIFAR-10 from the datasets bundled with Keras
x_test = x_test.astype('float32') / 255  # 10000 test images, scaled to 0-1
y_test = to_categorical(y_test, 10)

# Load the model
start = time.clock()
model = load_model('cifar10.h5')
print('Warming up took {}s'.format(time.clock() - start))

# Pick a random test image
index = random.randint(0, x_test.shape[0]-1)
x = x_test[index]
y = y_test[index]

# Display the image
label_dict = {0: 'airplane', 1: 'automobile', 2: 'bird', 3: 'cat', 4: 'deer', 5: 'dog', 6: 'frog', 7: 'horse',
              8: 'ship', 9: 'truck'}
plt.imshow(x)
plt.title("{} {}".format(index, label_dict[np.argmax(y)]))
plt.show()

# Predict
x = x[None]  # add a batch dimension
predict = model.predict(x)
# predict = np.argmax(predict)  # index of the highest probability

print('index', index)
print('original:', label_dict[np.argmax(y)])
print('predicted:', label_dict[np.argmax(predict)])

# Top-5 confidence
for i in np.argsort(predict[0])[::-1][:5]:
    print('{}:{:.2f}%'.format(label_dict[i], predict[0][i] * 100))

# Test-set loss and accuracy
test_loss, test_acc = model.evaluate(x_test, y_test, verbose=0)
print('test_loss:{:.2} test_acc:{}%'.format(test_loss, test_acc * 100))

Warming up took 2.458817515168419s
index 7459
original: ship
predicted: ship
ship:94.78%
airplane:3.64%
automobile:1.13%
truck:0.33%
bird:0.09%
test_loss:0.9 test_acc:73.08%
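The overall test accuracy averages over all ten classes. For a per-class breakdown, a minimal sketch (assuming the model, label_dict, x_test and y_test defined above) could look like the following; no per-class numbers were reported in the original run.

# Minimal sketch: per-class accuracy on the test set
predictions = np.argmax(model.predict(x_test), axis=1)
labels = np.argmax(y_test, axis=1)
for class_id, name in label_dict.items():
    mask = labels == class_id
    acc = np.mean(predictions[mask] == labels[mask])
    print('{}: {:.2f}%'.format(name, acc * 100))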




5. IPython

import time
import random
import numpy as np
import matplotlib.pyplot as plt
from keras.datasets import cifar10
from keras.models import load_model
from keras.utils import to_categorical

# Dataset
(_, _), (x_test,y_test) = cifar10.load_data()  # load CIFAR-10 from the datasets bundled with Keras
x_test = x_test.astype('float32')/255  # 10000 test images, scaled to 0-1
y_test = to_categorical(y_test, 10)

# Load the model
start = time.clock()
model = load_model('cifar10.h5')
print('Warming up took {}s'.format(time.clock() - start))
Warming up took 2.814203360716422s
# Pick a random test image
index = random.randint(0, x_test.shape[0]-1)
x = x_test[index]
y = y_test[index]

# Display the image
label_dict={0:'airplane',1:'automobile',2:'bird',3:'cat',4:'deer',5:'dog',6:'frog',7:'horse',8:'ship',9:'truck'}
plt.imshow(x)
plt.title("{} {}".format(index,label_dict[np.argmax(y)]))
plt.show()

# Predict
x = x[None]
predict = model.predict(x)
# predict = np.argmax(predict)  # index of the highest probability

print('index', index)
print('original:', label_dict[np.argmax(y)])
print('predicted:', label_dict[np.argmax(predict)])

# Top-5 confidence
print('\nTOP5:')
for i in np.argsort(predict[0])[::-1][:5]:
    print('{}:{:.2f}%'.format(label_dict[i],predict[0][i]*100))

[Figure: the test image displayed by plt.show(), titled with its index and label]

index: 2693
original: truck
predicted: truck

TOP5:
truck:93.84%
automobile:5.30%
ship:0.73%
airplane:0.13%
cat:0.00%
# Test-set loss and accuracy
test_loss, test_acc = model.evaluate(x_test,y_test)
print('test_loss:{:.2} test_acc:{}%'.format(test_loss,test_acc*100))
10000/10000 [==============================] - 11s 1ms/step
test_loss:0.9 test_acc:73.08%