VGG16 Cat-vs-Dog Classification with an ImageNet-Pretrained Model


This exercise uses the VGG16 architecture with weights pretrained on ImageNet.
A densely connected classifier is added on top of the convolutional base, and the model is trained end to end with the convolutional base frozen.
To guard against overfitting, the network uses L2 regularization, dropout of 0.5, and data augmentation on the training set.
The VGG16 convolutional base is frozen during training; this improves generalization and prevents large updates from destroying the representations the base has already learned.
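
A quick way to confirm that the freezing actually takes effect is to count the model's trainable weight tensors before and after setting conv_base.trainable = False. A minimal sketch, assuming the same conv_base + Flatten + Dropout + Dense stack built in the code that follows:

# Sketch: count trainable weight tensors before and after freezing.
# Assumes `conv_base` and `model` are built exactly as in the code below.
print('trainable weight tensors before freezing:', len(model.trainable_weights))  # 30 = 26 from VGG16 + 4 from the two Dense layers
conv_base.trainable = False
print('trainable weight tensors after freezing:', len(model.trainable_weights))   # 4: only the Dense weights and biases remain trainable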

import os
import numpy as np

base_dir = '/home/u/notebook_workspase/datas/dogs-cats-small-dataset'  # locally kept small cats-vs-dogs dataset

train_dir = os.path.join(base_dir,'train')

validation_dir = os.path.join(base_dir,'validation')

test_dir = os.path.join(base_dir,'test')


from keras.applications import VGG16

conv_base = VGG16(weights='imagenet',  # initialize from the ImageNet weight checkpoint
                  include_top=False,   # exclude the fully connected classifier on top
                  input_shape=(224, 224, 3))
from keras import models
from keras import layers
from keras import regularizers

model = models.Sequential()
model.add(conv_base)
model.add(layers.Flatten())
model.add(layers.Dropout(0.5))  # dropout
model.add(layers.Dense(256, kernel_regularizer=regularizers.l2(0.001),  # L2 regularization
                       activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
conv_base.trainable = False  # freeze the convolutional base so its pretrained weights are not updated
# inspect both networks
conv_base.summary()
model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_6 (InputLayer)         (None, 224, 224, 3)       0         
_________________________________________________________________
block1_conv1 (Conv2D)        (None, 224, 224, 64)      1792      
_________________________________________________________________
block1_conv2 (Conv2D)        (None, 224, 224, 64)      36928     
_________________________________________________________________
block1_pool (MaxPooling2D)   (None, 112, 112, 64)      0         
_________________________________________________________________
block2_conv1 (Conv2D)        (None, 112, 112, 128)     73856     
_________________________________________________________________
block2_conv2 (Conv2D)        (None, 112, 112, 128)     147584    
_________________________________________________________________
block2_pool (MaxPooling2D)   (None, 56, 56, 128)       0         
_________________________________________________________________
block3_conv1 (Conv2D)        (None, 56, 56, 256)       295168    
_________________________________________________________________
block3_conv2 (Conv2D)        (None, 56, 56, 256)       590080    
_________________________________________________________________
block3_conv3 (Conv2D)        (None, 56, 56, 256)       590080    
_________________________________________________________________
block3_pool (MaxPooling2D)   (None, 28, 28, 256)       0         
_________________________________________________________________
block4_conv1 (Conv2D)        (None, 28, 28, 512)       1180160   
_________________________________________________________________
block4_conv2 (Conv2D)        (None, 28, 28, 512)       2359808   
_________________________________________________________________
block4_conv3 (Conv2D)        (None, 28, 28, 512)       2359808   
_________________________________________________________________
block4_pool (MaxPooling2D)   (None, 14, 14, 512)       0         
_________________________________________________________________
block5_conv1 (Conv2D)        (None, 14, 14, 512)       2359808   
_________________________________________________________________
block5_conv2 (Conv2D)        (None, 14, 14, 512)       2359808   
_________________________________________________________________
block5_conv3 (Conv2D)        (None, 14, 14, 512)       2359808   
_________________________________________________________________
block5_pool (MaxPooling2D)   (None, 7, 7, 512)         0         
=================================================================
Total params: 14,714,688
Trainable params: 0
Non-trainable params: 14,714,688
_________________________________________________________________
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
vgg16 (Model)                (None, 7, 7, 512)         14714688  
_________________________________________________________________
flatten_1 (Flatten)          (None, 25088)             0         
_________________________________________________________________
dropout_1 (Dropout)          (None, 25088)             0         
_________________________________________________________________
dense_1 (Dense)              (None, 256)               6422784   
_________________________________________________________________
dense_2 (Dense)              (None, 1)                 257       
=================================================================
Total params: 21,137,729
Trainable params: 6,423,041
Non-trainable params: 14,714,688
_________________________________________________________________
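As a sanity check, the parameter counts reported for the classifier head follow directly from the layer shapes:

# Parameter-count check for the classifier head shown above.
# Flatten turns the (7, 7, 512) VGG16 output into a 7*7*512 = 25088-dimensional vector.
print(25088 * 256 + 256)  # 6422784 -> dense_1 (weights + biases)
print(256 * 1 + 1)        # 257     -> dense_2 (weights + biases)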
from keras.preprocessing.image import ImageDataGenerator
from keras import optimizers

train_datagen = ImageDataGenerator(
    rescale=1./255,        # rescale pixel values to [0, 1]
    rotation_range=40,     # range (in degrees) for random rotations
    width_shift_range=0.2, # horizontal shift range, as a fraction of total width
    height_shift_range=0.2,# vertical shift range, as a fraction of total height
    shear_range=0.2,       # shear angle for random shearing transformations
    zoom_range=0.2,        # range for random zooming
    horizontal_flip=True,  # randomly flip half of the images horizontally
    fill_mode='nearest'    # strategy for filling in newly created pixels
)
test_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
                train_dir,               # target directory
                target_size=(224, 224),  # resize all images to 224x224
                batch_size=20,
                class_mode='binary')     # binary labels, to match binary_crossentropy
validation_generator = test_datagen.flow_from_directory(
                validation_dir,
                target_size=(224, 224),
                batch_size=20,
                class_mode='binary')
Found 2000 images belonging to 2 classes.
Found 1000 images belonging to 2 classes.
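To sanity-check the augmentation settings, it can help to look at a few augmented variants of a single training image. A minimal sketch; the 'cats' class subdirectory name is an assumption about the dataset layout:

import os
import matplotlib.pyplot as plt
from keras.preprocessing import image

# Sketch: display four augmented versions of one training image.
cat_dir = os.path.join(train_dir, 'cats')              # assumed class subdirectory name
fname = os.path.join(cat_dir, os.listdir(cat_dir)[0])

img = image.load_img(fname, target_size=(224, 224))    # load and resize
x = image.img_to_array(img).reshape((1, 224, 224, 3))  # add a batch dimension

for i, batch in enumerate(train_datagen.flow(x, batch_size=1)):
    plt.figure(i)
    plt.imshow(image.array_to_img(batch[0]))
    if i >= 3:
        break  # the generator loops forever, so stop after 4 images
plt.show()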
# compile the model
model.compile(loss='binary_crossentropy',
              optimizer=optimizers.RMSprop(lr=2e-5),
              metrics=['acc'])
history = model.fit_generator(
    train_generator,
    steps_per_epoch=100,
    epochs=50,
    validation_data=validation_generator,
    validation_steps=50
)
Epoch 1/50
100/100 [==============================] - 239s 2s/step - loss: 1.1099 - acc: 0.6470 - val_loss: 0.9014 - val_acc: 0.8390
Epoch 2/50
100/100 [==============================] - 238s 2s/step - loss: 0.9449 - acc: 0.7680 - val_loss: 0.7722 - val_acc: 0.8740
Epoch 3/50
100/100 [==============================] - 238s 2s/step - loss: 0.8734 - acc: 0.7920 - val_loss: 0.7748 - val_acc: 0.8430
Epoch 4/50
100/100 [==============================] - 238s 2s/step - loss: 0.8174 - acc: 0.8105 - val_loss: 0.6856 - val_acc: 0.8880
Epoch 5/50
100/100 [==============================] - 239s 2s/step - loss: 0.7806 - acc: 0.8315 - val_loss: 0.6395 - val_acc: 0.8950
Epoch 6/50
100/100 [==============================] - 239s 2s/step - loss: 0.7345 - acc: 0.8525 - val_loss: 0.6212 - val_acc: 0.8960
Epoch 7/50
100/100 [==============================] - 239s 2s/step - loss: 0.7090 - acc: 0.8445 - val_loss: 0.6033 - val_acc: 0.9040
Epoch 8/50
100/100 [==============================] - 239s 2s/step - loss: 0.6945 - acc: 0.8625 - val_loss: 0.5763 - val_acc: 0.9050
Epoch 9/50
100/100 [==============================] - 239s 2s/step - loss: 0.6774 - acc: 0.8575 - val_loss: 0.5678 - val_acc: 0.9040
Epoch 10/50
100/100 [==============================] - 239s 2s/step - loss: 0.6672 - acc: 0.8610 - val_loss: 0.5968 - val_acc: 0.8900
Epoch 11/50
100/100 [==============================] - 239s 2s/step - loss: 0.6604 - acc: 0.8600 - val_loss: 0.5428 - val_acc: 0.9100
Epoch 12/50
100/100 [==============================] - 239s 2s/step - loss: 0.6430 - acc: 0.8635 - val_loss: 0.5716 - val_acc: 0.8970
Epoch 13/50
100/100 [==============================] - 239s 2s/step - loss: 0.6312 - acc: 0.8615 - val_loss: 0.5279 - val_acc: 0.9150
Epoch 14/50
100/100 [==============================] - 239s 2s/step - loss: 0.6277 - acc: 0.8670 - val_loss: 0.5233 - val_acc: 0.9140
Epoch 15/50
100/100 [==============================] - 239s 2s/step - loss: 0.6132 - acc: 0.8615 - val_loss: 0.5162 - val_acc: 0.9050
Epoch 16/50
100/100 [==============================] - 239s 2s/step - loss: 0.6047 - acc: 0.8675 - val_loss: 0.5100 - val_acc: 0.9150
Epoch 17/50
100/100 [==============================] - 239s 2s/step - loss: 0.5889 - acc: 0.8765 - val_loss: 0.5014 - val_acc: 0.9180
Epoch 18/50
100/100 [==============================] - 239s 2s/step - loss: 0.5854 - acc: 0.8750 - val_loss: 0.5042 - val_acc: 0.9170
Epoch 19/50
100/100 [==============================] - 239s 2s/step - loss: 0.5871 - acc: 0.8745 - val_loss: 0.4926 - val_acc: 0.9190
Epoch 20/50
100/100 [==============================] - 239s 2s/step - loss: 0.5588 - acc: 0.8865 - val_loss: 0.4884 - val_acc: 0.9180
Epoch 21/50
100/100 [==============================] - 239s 2s/step - loss: 0.5718 - acc: 0.8745 - val_loss: 0.4860 - val_acc: 0.9150
Epoch 22/50
100/100 [==============================] - 239s 2s/step - loss: 0.5784 - acc: 0.8765 - val_loss: 0.4788 - val_acc: 0.9190
Epoch 23/50
100/100 [==============================] - 239s 2s/step - loss: 0.5521 - acc: 0.8770 - val_loss: 0.4775 - val_acc: 0.9180
Epoch 24/50
100/100 [==============================] - 239s 2s/step - loss: 0.5554 - acc: 0.8760 - val_loss: 0.4706 - val_acc: 0.9220
Epoch 25/50
100/100 [==============================] - 239s 2s/step - loss: 0.5683 - acc: 0.8750 - val_loss: 0.4706 - val_acc: 0.9220
Epoch 26/50
100/100 [==============================] - 239s 2s/step - loss: 0.5460 - acc: 0.8790 - val_loss: 0.4859 - val_acc: 0.9040
Epoch 27/50
100/100 [==============================] - 239s 2s/step - loss: 0.5401 - acc: 0.8850 - val_loss: 0.4723 - val_acc: 0.9190
Epoch 28/50
100/100 [==============================] - 239s 2s/step - loss: 0.5377 - acc: 0.8850 - val_loss: 0.4642 - val_acc: 0.9160
Epoch 29/50
100/100 [==============================] - 239s 2s/step - loss: 0.5363 - acc: 0.8765 - val_loss: 0.4973 - val_acc: 0.9060
Epoch 30/50
100/100 [==============================] - 239s 2s/step - loss: 0.5366 - acc: 0.8775 - val_loss: 0.4565 - val_acc: 0.9200
Epoch 31/50
100/100 [==============================] - 239s 2s/step - loss: 0.5351 - acc: 0.8850 - val_loss: 0.4502 - val_acc: 0.9190
Epoch 32/50
100/100 [==============================] - 239s 2s/step - loss: 0.5052 - acc: 0.8880 - val_loss: 0.4466 - val_acc: 0.9210
Epoch 33/50
100/100 [==============================] - 239s 2s/step - loss: 0.5289 - acc: 0.8730 - val_loss: 0.4618 - val_acc: 0.9130
Epoch 34/50
100/100 [==============================] - 239s 2s/step - loss: 0.5102 - acc: 0.8830 - val_loss: 0.4464 - val_acc: 0.9210
Epoch 35/50
100/100 [==============================] - 239s 2s/step - loss: 0.5243 - acc: 0.8760 - val_loss: 0.4431 - val_acc: 0.9170
Epoch 36/50
100/100 [==============================] - 239s 2s/step - loss: 0.5252 - acc: 0.8850 - val_loss: 0.4518 - val_acc: 0.9180
Epoch 37/50
100/100 [==============================] - 239s 2s/step - loss: 0.5110 - acc: 0.8810 - val_loss: 0.4433 - val_acc: 0.9160
Epoch 38/50
100/100 [==============================] - 239s 2s/step - loss: 0.5018 - acc: 0.8885 - val_loss: 0.4265 - val_acc: 0.9230
Epoch 39/50
100/100 [==============================] - 239s 2s/step - loss: 0.4930 - acc: 0.8890 - val_loss: 0.4247 - val_acc: 0.9240
Epoch 40/50
100/100 [==============================] - 239s 2s/step - loss: 0.5022 - acc: 0.8860 - val_loss: 0.4207 - val_acc: 0.9220
Epoch 41/50
100/100 [==============================] - 239s 2s/step - loss: 0.4845 - acc: 0.8840 - val_loss: 0.4237 - val_acc: 0.9220
Epoch 42/50
100/100 [==============================] - 239s 2s/step - loss: 0.4968 - acc: 0.8895 - val_loss: 0.4243 - val_acc: 0.9210
Epoch 43/50
100/100 [==============================] - 239s 2s/step - loss: 0.4886 - acc: 0.8860 - val_loss: 0.4157 - val_acc: 0.9240
Epoch 44/50
100/100 [==============================] - 239s 2s/step - loss: 0.4744 - acc: 0.8880 - val_loss: 0.4385 - val_acc: 0.9120
Epoch 45/50
100/100 [==============================] - 239s 2s/step - loss: 0.4832 - acc: 0.8920 - val_loss: 0.4116 - val_acc: 0.9260
Epoch 46/50
100/100 [==============================] - 239s 2s/step - loss: 0.4846 - acc: 0.8860 - val_loss: 0.4106 - val_acc: 0.9190
Epoch 47/50
100/100 [==============================] - 239s 2s/step - loss: 0.4938 - acc: 0.8830 - val_loss: 0.4505 - val_acc: 0.9090
Epoch 48/50
100/100 [==============================] - 239s 2s/step - loss: 0.4679 - acc: 0.8915 - val_loss: 0.4101 - val_acc: 0.9180
Epoch 49/50
100/100 [==============================] - 239s 2s/step - loss: 0.4786 - acc: 0.8865 - val_loss: 0.4090 - val_acc: 0.9230
Epoch 50/50
100/100 [==============================] - 239s 2s/step - loss: 0.4677 - acc: 0.8965 - val_loss: 0.4016 - val_acc: 0.9250
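
With each epoch taking roughly four minutes, the full run is several hours, so it is worth persisting the model before analyzing the curves. A one-line save; the filename is an arbitrary choice:

model.save('cats_and_dogs_small_vgg16_frozen.h5')  # filename is an arbitrary choice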

Results

import matplotlib.pyplot as plt
%matplotlib inline

loss = history.history['loss']
val_loss = history.history['val_loss']
acc = history.history['acc']
val_acc = history.history['val_acc']

epochs = range(1,len(acc)+1)

plt.plot(epochs,acc,'bo',label = 'Training acc')
plt.plot(epochs,val_acc,'b',label = 'Validation acc')
plt.title('Training and validation accuracy')
plt.legend()  # show the legend

plt.figure()

plt.plot(epochs,loss,'bo',label = 'Training loss')
plt.plot(epochs,val_loss,'b',label = 'Validation loss')
plt.title("Training and validation loss")
plt.legend()

plt.show()
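
Because the per-epoch curves fluctuate, an exponentially smoothed replot can make the trend easier to read. A minimal sketch reusing the lists computed above:

# Sketch: exponential moving average to smooth the noisy per-epoch curves.
def smooth_curve(points, factor=0.8):
    smoothed = []
    for point in points:
        if smoothed:
            smoothed.append(smoothed[-1] * factor + point * (1 - factor))
        else:
            smoothed.append(point)
    return smoothed

plt.plot(epochs, smooth_curve(acc), 'bo', label='Smoothed training acc')
plt.plot(epochs, smooth_curve(val_acc), 'b', label='Smoothed validation acc')
plt.title('Training and validation accuracy (smoothed)')
plt.legend()
plt.show()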

[Figure: Training and validation accuracy]

[Figure: Training and validation loss]

Because measures against overfitting (L2 regularization, dropout, and data augmentation) were in place, the model did not overfit within the 50 epochs trained; the number of epochs could likely be increased further to improve generalization.
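
The test directory is prepared at the top of the notebook but never used. A minimal sketch of evaluating the trained model on that held-out set, with the same rescaling and no augmentation:

# Sketch: evaluate on the held-out test set defined at the top of the notebook.
test_generator = test_datagen.flow_from_directory(
                test_dir,
                target_size=(224, 224),
                batch_size=20,
                class_mode='binary')

# steps=50 assumes roughly 1000 test images at batch_size=20
test_loss, test_acc = model.evaluate_generator(test_generator, steps=50)
print('test acc:', test_acc)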
