Inception Network Architecture Case Study: Flower Classification

1. Introduction to Inception:
Proposed by a team at Google, so it is also known as GoogLeNet;
the model has four versions in total, V1 through V4;
it improves training performance by increasing the width of the network (the number of channels).
2. Inception V1 Network Architecture
The Inception V1 network is formed by connecting multiple Inception base modules in series.
3. Data Preparation and Processing: the training set for this case study contains 952 images, and the validation set contains 408 images.
Read the data with ImageDataGenerator.

# Let TensorFlow grow GPU memory usage on demand instead of
# grabbing all of it up front
from tensorflow.compat.v1 import ConfigProto
from tensorflow.compat.v1 import InteractiveSession
config = ConfigProto()
config.gpu_options.allow_growth = True
session = InteractiveSession(config=config)
from keras.preprocessing.image import ImageDataGenerator

IMSIZE = 224
# Rescale pixel values to [0, 1] and resize every image to 224x224
train_generator = ImageDataGenerator(rescale=1./255).flow_from_directory(
    './data/data_inception/train/',
    target_size=(IMSIZE, IMSIZE),
    batch_size=50,
    class_mode='categorical')
validation_generator = ImageDataGenerator(rescale=1./255).flow_from_directory(
    './data/data_inception/test/',
    target_size=(IMSIZE, IMSIZE),
    batch_size=50,
    class_mode='categorical')
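
flow_from_directory infers the labels from the directory layout: one sub-folder per class under train/ and test/. The class and file names below are hypothetical, just to illustrate the expected structure:

```
data/data_inception/
├── train/
│   ├── daisy/        # hypothetical class name
│   │   ├── 001.jpg
│   │   └── ...
│   └── rose/         # hypothetical class name
│       └── ...
└── test/
    ├── daisy/
    └── rose/
```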

4. Displaying Images

from matplotlib import pyplot as plt

# Show the first ten images of a batch in a 2x5 grid
fig, ax = plt.subplots(2, 5)
fig.set_figheight(7)
fig.set_figwidth(15)
ax = ax.flatten()
X, Y = next(train_generator)
for i in range(10):
    ax[i].imshow(X[i, :, :, :])

5. Inception V1 Implementation in Code
Import the concatenate function, which joins the outputs of the different-sized convolution kernels that run in parallel within the same layer.

from keras.layers import Conv2D, BatchNormalization, MaxPooling2D
from keras.layers import Flatten, Dropout, Dense, Input, concatenate
from keras import Model
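
To see what concatenate(..., axis=3) does, here is a minimal NumPy sketch with hypothetical branch shapes matching the first Inception module below (batch size 1, 28x28 spatial grid):

```python
import numpy as np

# Four branches with 64, 128, 32 and 32 channels on the same 28x28 grid
branch1x1  = np.zeros((1, 28, 28, 64))
branch3x3  = np.zeros((1, 28, 28, 128))
branch5x5  = np.zeros((1, 28, 28, 32))
branchpool = np.zeros((1, 28, 28, 32))

# Concatenating along axis=3 stacks the channel dimensions: 64+128+32+32 = 256
merged = np.concatenate([branch1x1, branch3x3, branch5x5, branchpool], axis=3)
print(merged.shape)  # (1, 28, 28, 256)
```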

#Input: 224x224x3
input_layer = Input([IMSIZE,IMSIZE,3])
x = input_layer

#Layer 1: 7x7 convolution (stride 2, same padding), 64 channels; then 3x3 max pooling (stride 2)
x = Conv2D(64,(7,7),strides=(2,2),padding='same',activation='relu')(x) #para=(3*7*7+1)*64=9472
x = BatchNormalization(axis=3)(x) #para=4*64=256
x = MaxPooling2D(pool_size=(3,3),strides=(2,2),padding='same')(x)

#Layer 2: 3x3 convolution (stride 1, same padding), 192 channels; then 3x3 max pooling (stride 2)
x = Conv2D(192,(3,3),strides=(1,1),padding='same',activation='relu')(x) #para=(64*3*3+1)*192=110784
x = BatchNormalization(axis=3)(x) #para=4*192=768
x = MaxPooling2D(pool_size=(3,3),strides=(2,2),padding='same')(x)
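
The #para comments can be verified with the standard formulas: a Conv2D layer has (in_channels * k * k + 1) * filters parameters (the +1 is the per-filter bias), and BatchNormalization has 4 parameters per channel (gamma, beta, moving mean, moving variance). A quick arithmetic check:

```python
def conv2d_params(in_ch, k, out_ch):
    # k*k*in_ch weights per filter, plus one bias per filter
    return (in_ch * k * k + 1) * out_ch

def bn_params(channels):
    # gamma, beta, moving mean, moving variance
    return 4 * channels

print(conv2d_params(3, 7, 64))    # 9472   (layer 1 conv)
print(conv2d_params(64, 3, 192))  # 110784 (layer 2 conv)
print(bn_params(192))             # 768
```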

for i in range(9):
    #Layer 3: Inception module 3a. For simplicity, modules 3b, 4a-4e, 5a and 5b
    #are all approximated by repeating 3a. The parameter counts in the comments
    #hold for the first iteration, where x still has 192 channels.
    ##Branch 1: 64 1x1 convolution kernels
    branch1x1 = Conv2D(64,(1,1),strides=(1,1),padding='same',activation='relu')(x) #para=(192*1*1+1)*64=12352
    branch1x1 = BatchNormalization(axis=3)(branch1x1) #para=4*64=256
    ##Branch 2: 96 1x1 kernels, followed by 128 3x3 convolutions
    branch3x3 = Conv2D(96,(1,1),strides=(1,1),padding='same',activation='relu')(x) #para=(192*1*1+1)*96=18528
    branch3x3 = BatchNormalization(axis=3)(branch3x3) #para=4*96=384
    branch3x3 = Conv2D(128,(3,3),strides=(1,1),padding='same',activation='relu')(branch3x3) #para=(96*3*3+1)*128=110720
    branch3x3 = BatchNormalization(axis=3)(branch3x3) #para=4*128=512
    ##Branch 3: 16 1x1 kernels, followed by 32 5x5 convolutions
    branch5x5 = Conv2D(16,(1,1),strides=(1,1),padding='same',activation='relu')(x) #para=(192*1*1+1)*16=3088
    branch5x5 = BatchNormalization(axis=3)(branch5x5) #para=4*16=64
    branch5x5 = Conv2D(32,(5,5),strides=(1,1),padding='same',activation='relu')(branch5x5) #para=(16*5*5+1)*32=12832
    branch5x5 = BatchNormalization(axis=3)(branch5x5) #para=4*32=128
    ##Branch 4: 3x3 max pooling (stride 1, same padding, so the spatial size is unchanged), then 32 1x1 convolutions
    branchpool = MaxPooling2D(pool_size=(3,3),strides=(1,1),padding='same')(x)
    branchpool = Conv2D(32,(1,1),strides=(1,1),padding='same',activation='relu')(branchpool) #para=(192*1*1+1)*32=6176
    branchpool = BatchNormalization(axis=3)(branchpool) #para=4*32=128
    #Concatenate the four branches along the channel axis, then downsample
    x = concatenate([branch1x1,branch3x3,branch5x5,branchpool],axis=3)
    x = MaxPooling2D(pool_size=(3,3),strides=(2,2),padding='same')(x)


x = Dropout(0.4)(x)
x = Flatten()(x)
x = Dense(17,activation='softmax')(x) #17 flower classes
output_layer = x
model = Model(input_layer,output_layer)
model.summary()
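
Because every layer above uses 'same' padding, the output spatial size is simply ceil(input / stride). A quick arithmetic check of how the 224x224 input shrinks through this stack shows why the feature map reaching Flatten is only 1x1:

```python
import math

def same_pad_out(n, stride):
    # 'same' padding: output spatial size is ceil(n / stride)
    return math.ceil(n / stride)

size = 224
size = same_pad_out(size, 2)   # 7x7 conv, stride 2   -> 112
size = same_pad_out(size, 2)   # 3x3 max pool, stride 2 -> 56
size = same_pad_out(size, 1)   # 3x3 conv, stride 1   -> 56
size = same_pad_out(size, 2)   # 3x3 max pool, stride 2 -> 28

sizes = []
for _ in range(9):             # one stride-2 pool per repeated Inception block
    size = same_pad_out(size, 2)
    sizes.append(size)
print(sizes)  # [14, 7, 4, 2, 1, 1, 1, 1, 1]
```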

6. Compiling and Running Inception V1
The training accuracy reached 93.84%, while the validation accuracy improved to 48.53%; the large gap between the two indicates heavy overfitting.
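
The post does not show the compile-and-fit code. A minimal sketch of what it might look like, continuing from `model` and the generators above (the loss matches `class_mode='categorical'`, but the optimizer and epoch count are assumptions, not the author's settings):

```python
# Hypothetical training configuration -- the original post does not show it
model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
model.fit(train_generator,
          validation_data=validation_generator,
          epochs=20)
```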
