depthwise separable convolutions-cifar10

Depthwise separable convolutions

A depthwise separable convolution (layers.SeparableConv2D in Keras) performs a spatial convolution on each input channel independently and then mixes the channels with a 1x1 pointwise convolution, which needs far fewer parameters than a standard convolution covering the same receptive field.

Data preprocessing

from keras.datasets import cifar10

(train_images, train_labels), (test_images, test_labels) = cifar10.load_data()

train_images = train_images.astype('float32') / 255  # scale pixel values to [0, 1]
test_images = test_images.astype('float32') / 255    # same normalization for the test set
from keras.utils import to_categorical

train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)
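
As a quick optional sanity check (just a sketch reusing the variables above): CIFAR-10 ships 50,000 training and 10,000 test images of shape 32x32x3, and the labels become 10-way one-hot vectors.

print(train_images.shape)  # (50000, 32, 32, 3)
print(test_images.shape)   # (10000, 32, 32, 3)
print(train_labels.shape)  # (50000, 10) after one-hot encoding
print(train_images.min(), train_images.max())  # 0.0 1.0 after scaling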

Network model

from keras.models import Sequential
from keras import layers
from keras import regularizers

height = 32
width = 32
channels = 3
num_classes = 10

model = Sequential()
model.add(layers.SeparableConv2D(32, 3,
                                 activation='relu',
                                 input_shape=(height, width, channels)))
model.add(layers.SeparableConv2D(64, 3, activation='relu'))
model.add(layers.MaxPooling2D(2))

model.add(layers.SeparableConv2D(64, 3, activation='relu'))
model.add(layers.SeparableConv2D(128, 3, activation='relu'))
model.add(layers.MaxPooling2D(2))

model.add(layers.SeparableConv2D(64, 3, activation='relu'))
model.add(layers.SeparableConv2D(128, 3, activation='relu'))
model.add(layers.GlobalAveragePooling2D())

model.add(layers.Dense(64, kernel_regularizer=regularizers.l2(0.001), activation='relu'))
model.add(layers.Dense(num_classes, activation='softmax'))
model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
separable_conv2d_19 (Separab (None, 30, 30, 32)        155       
_________________________________________________________________
separable_conv2d_20 (Separab (None, 28, 28, 64)        2400      
_________________________________________________________________
max_pooling2d_7 (MaxPooling2 (None, 14, 14, 64)        0         
_________________________________________________________________
separable_conv2d_21 (Separab (None, 12, 12, 64)        4736      
_________________________________________________________________
separable_conv2d_22 (Separab (None, 10, 10, 128)       8896      
_________________________________________________________________
max_pooling2d_8 (MaxPooling2 (None, 5, 5, 128)         0         
_________________________________________________________________
separable_conv2d_23 (Separab (None, 3, 3, 64)          9408      
_________________________________________________________________
separable_conv2d_24 (Separab (None, 1, 1, 128)         8896      
_________________________________________________________________
global_average_pooling2d_4 ( (None, 128)               0         
_________________________________________________________________
dense_5 (Dense)              (None, 64)                8256      
_________________________________________________________________
dense_6 (Dense)              (None, 10)                650       
=================================================================
Total params: 43,397
Trainable params: 43,397
Non-trainable params: 0
_________________________________________________________________
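
The parameter counts above are easy to verify by hand: a SeparableConv2D layer uses k*k*C_in weights for the depthwise step, plus C_in*C_out for the 1x1 pointwise step, plus C_out biases, whereas a standard Conv2D of the same shape would need k*k*C_in*C_out + C_out. A small check for the second layer (3x3 kernel, 32 -> 64 channels):

k, c_in, c_out = 3, 32, 64
separable = k * k * c_in + c_in * c_out + c_out  # 288 + 2048 + 64 = 2400, matches the summary
standard = k * k * c_in * c_out + c_out          # 18496 for an equivalent Conv2D
print(separable, standard)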

Compile and train

model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
history = model.fit(train_images, train_labels,
                    batch_size=64,
                    epochs=30,
                    validation_data=(test_images, test_labels))
Train on 50000 samples, validate on 10000 samples
Epoch 1/30
50000/50000 [==============================] - 55s 1ms/step - loss: 2.1080 - acc: 0.2068 - val_loss: 2.0347 - val_acc: 0.2527
Epoch 2/30
50000/50000 [==============================] - 54s 1ms/step - loss: 1.8210 - acc: 0.3416 - val_loss: 1.8258 - val_acc: 0.3528
Epoch 3/30
50000/50000 [==============================] - 55s 1ms/step - loss: 1.6695 - acc: 0.3963 - val_loss: 1.6073 - val_acc: 0.4165
Epoch 4/30
50000/50000 [==============================] - 54s 1ms/step - loss: 1.5518 - acc: 0.4392 - val_loss: 1.7985 - val_acc: 0.3648
Epoch 5/30
50000/50000 [==============================] - 55s 1ms/step - loss: 1.4558 - acc: 0.4778 - val_loss: 1.5189 - val_acc: 0.4659
Epoch 6/30
50000/50000 [==============================] - 55s 1ms/step - loss: 1.3682 - acc: 0.5099 - val_loss: 1.3968 - val_acc: 0.5019
Epoch 7/30
50000/50000 [==============================] - 54s 1ms/step - loss: 1.2880 - acc: 0.5439 - val_loss: 1.2768 - val_acc: 0.5473
Epoch 8/30
50000/50000 [==============================] - 55s 1ms/step - loss: 1.2158 - acc: 0.5712 - val_loss: 1.3117 - val_acc: 0.5317
Epoch 9/30
50000/50000 [==============================] - 55s 1ms/step - loss: 1.1546 - acc: 0.5962 - val_loss: 1.1642 - val_acc: 0.5890
Epoch 10/30
50000/50000 [==============================] - 55s 1ms/step - loss: 1.1006 - acc: 0.6162 - val_loss: 1.1359 - val_acc: 0.6010
Epoch 11/30
50000/50000 [==============================] - 55s 1ms/step - loss: 1.0569 - acc: 0.6306 - val_loss: 1.1469 - val_acc: 0.6008
Epoch 12/30
50000/50000 [==============================] - 56s 1ms/step - loss: 1.0155 - acc: 0.6472 - val_loss: 1.1828 - val_acc: 0.5906
Epoch 13/30
50000/50000 [==============================] - 55s 1ms/step - loss: 0.9785 - acc: 0.6602 - val_loss: 1.0786 - val_acc: 0.6274
Epoch 14/30
50000/50000 [==============================] - 55s 1ms/step - loss: 0.9451 - acc: 0.6738 - val_loss: 1.1295 - val_acc: 0.6134
Epoch 15/30
50000/50000 [==============================] - 55s 1ms/step - loss: 0.9146 - acc: 0.6826 - val_loss: 1.0857 - val_acc: 0.6313
Epoch 16/30
50000/50000 [==============================] - 55s 1ms/step - loss: 0.8863 - acc: 0.6943 - val_loss: 1.1071 - val_acc: 0.6329
Epoch 17/30
50000/50000 [==============================] - 55s 1ms/step - loss: 0.8613 - acc: 0.7044 - val_loss: 1.3088 - val_acc: 0.5782
Epoch 18/30
50000/50000 [==============================] - 55s 1ms/step - loss: 0.8367 - acc: 0.7130 - val_loss: 1.0693 - val_acc: 0.6475
Epoch 19/30
50000/50000 [==============================] - 55s 1ms/step - loss: 0.8156 - acc: 0.7185 - val_loss: 0.9846 - val_acc: 0.6686
Epoch 20/30
50000/50000 [==============================] - 55s 1ms/step - loss: 0.7955 - acc: 0.7279 - val_loss: 0.9773 - val_acc: 0.6718
Epoch 21/30
50000/50000 [==============================] - 55s 1ms/step - loss: 0.7727 - acc: 0.7353 - val_loss: 0.9748 - val_acc: 0.6790
Epoch 22/30
50000/50000 [==============================] - 55s 1ms/step - loss: 0.7552 - acc: 0.7411 - val_loss: 0.9738 - val_acc: 0.6698
Epoch 23/30
50000/50000 [==============================] - 55s 1ms/step - loss: 0.7388 - acc: 0.7458 - val_loss: 1.0716 - val_acc: 0.6505
Epoch 24/30
50000/50000 [==============================] - 55s 1ms/step - loss: 0.7227 - acc: 0.7530 - val_loss: 1.0271 - val_acc: 0.6609
Epoch 25/30
50000/50000 [==============================] - 55s 1ms/step - loss: 0.7042 - acc: 0.7593 - val_loss: 0.9979 - val_acc: 0.6742
Epoch 26/30
50000/50000 [==============================] - 56s 1ms/step - loss: 0.6893 - acc: 0.7642 - val_loss: 0.9553 - val_acc: 0.6859
Epoch 27/30
50000/50000 [==============================] - 55s 1ms/step - loss: 0.6724 - acc: 0.7711 - val_loss: 1.0292 - val_acc: 0.6791
Epoch 28/30
50000/50000 [==============================] - 55s 1ms/step - loss: 0.6616 - acc: 0.7747 - val_loss: 1.0015 - val_acc: 0.6760
Epoch 29/30
50000/50000 [==============================] - 55s 1ms/step - loss: 0.6461 - acc: 0.7795 - val_loss: 1.2812 - val_acc: 0.6179
Epoch 30/30
50000/50000 [==============================] - 56s 1ms/step - loss: 0.6362 - acc: 0.7835 - val_loss: 1.0326 - val_acc: 0.6789
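
The validation loss bottoms out around epochs 20-26 while the training loss keeps falling, which suggests the model starts to overfit. One possible refinement, not part of the original run, is to checkpoint the best weights and stop early; the checkpoint filename below is made up for illustration.

from keras.callbacks import ModelCheckpoint, EarlyStopping

# Hypothetical refinement: keep the weights with the lowest validation loss
# and stop once it has not improved for 5 epochs.
callbacks = [
    ModelCheckpoint('cifar10-separable-best.h5', monitor='val_loss', save_best_only=True),
    EarlyStopping(monitor='val_loss', patience=5),
]
history = model.fit(train_images, train_labels,
                    batch_size=64,
                    epochs=30,
                    validation_data=(test_images, test_labels),
                    callbacks=callbacks)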

Visualizing the results

import matplotlib.pyplot as plt
%matplotlib inline

loss = history.history['loss']
val_loss = history.history['val_loss']
acc = history.history['acc']
val_acc = history.history['val_acc']

epochs = range(1,len(acc)+1)

plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()  # show the legend

plt.figure()

plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title("Training and validation loss")
plt.legend()

plt.show()

[Figure: training and validation accuracy curves]

[Figure: training and validation loss curves]
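
If the validation curves look too jittery to read, a simple exponential moving average helps. smooth_curve below is a hypothetical helper (not part of Keras) that reuses the epochs and val_acc variables defined above:

def smooth_curve(points, factor=0.8):
    # exponential moving average of a list of values
    smoothed = []
    for p in points:
        if smoothed:
            smoothed.append(smoothed[-1] * factor + p * (1 - factor))
        else:
            smoothed.append(p)
    return smoothed

plt.plot(epochs, smooth_curve(val_acc), 'b', label='Smoothed validation acc')
plt.title('Smoothed validation accuracy')
plt.legend()
plt.show()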

# Note: the test set was also used as validation data during training,
# so this score simply repeats the last epoch's val_loss / val_acc.
scores = model.evaluate(test_images, test_labels, verbose=1)
print('Test loss:', scores[0])
print('Test accuracy:', scores[1])
10000/10000 [==============================] - 6s 557us/step
Test loss: 1.0326023689270019
Test accuracy: 0.6789

Saving the model

model.save('cifar10-depthwise-separable-convolutions.h5')
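
To reuse the saved model later, a minimal sketch (assuming the file above is on disk):

from keras.models import load_model

restored = load_model('cifar10-depthwise-separable-convolutions.h5')
print(restored.evaluate(test_images, test_labels, verbose=0))  # [loss, accuracy]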