Implementing the Inception Module from GoogLeNet

Environment: TensorFlow 2.1 + Python 3.7

The code is as follows:

import tensorflow as tf
from tensorflow.keras.layers import Dense, Conv2D
from tensorflow.keras.layers import MaxPooling2D, Add, Concatenate
from tensorflow.keras import Input, Model

"""
#------------------------------------------------------------
            The Inception module from GoogLeNet
#------------------------------------------------------------
"""
def inception_1(inputs, filters):
    # filters: the output widths of the four branches.
    # The 1x1 reduction widths (96 and 16) are hard-coded here; together with
    # the filters used below they match inception(3a) in the GoogLeNet paper.
    filter1, filter2, filter3, filter4 = filters

    # Branch 1: 1x1 convolution
    x1 = Conv2D(filter1, (1, 1), padding='same', activation='relu')(inputs)

    # Branch 2: 1x1 reduction, then 3x3 convolution
    x2 = Conv2D(96, (1, 1), padding='same', activation='relu')(inputs)
    x2 = Conv2D(filter2, (3, 3), padding='same', activation='relu')(x2)

    # Branch 3: 1x1 reduction, then 5x5 convolution
    x3 = Conv2D(16, (1, 1), padding='same', activation='relu')(inputs)
    x3 = Conv2D(filter3, (5, 5), padding='same', activation='relu')(x3)

    # Branch 4: 3x3 max pooling (stride 1), then 1x1 projection
    x4 = MaxPooling2D((3, 3), strides=(1, 1), padding='same')(inputs)
    x4 = Conv2D(filter4, (1, 1), activation='relu')(x4)

    # Stack the four branch outputs along the channel axis
    x = Concatenate()([x1, x2, x3, x4])
    return x

inputs = Input([28, 28, 192])

outputs = inception_1(inputs, [64, 128, 32, 32])

model = Model(inputs, outputs)
model.summary()
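As a sanity check before reading the summary, the block's output shape and total parameter count can be predicted by hand. The sketch below is pure Python (no TensorFlow required) and uses the standard Conv2D parameter formula, params = k·k·in_ch·out_ch + out_ch:

```python
def conv_params(k, in_ch, out_ch):
    # Square-kernel Conv2D with bias: k*k weights per in/out channel pair,
    # plus one bias per output channel
    return k * k * in_ch * out_ch + out_ch

in_ch = 192
total = (
    conv_params(1, in_ch, 64)                              # 1x1 branch
    + conv_params(1, in_ch, 96) + conv_params(3, 96, 128)  # 3x3 branch
    + conv_params(1, in_ch, 16) + conv_params(5, 16, 32)   # 5x5 branch
    + conv_params(1, in_ch, 32)  # pool projection (pooling itself has no params)
)
print(total)               # 163696, matching model.summary()
print(64 + 128 + 32 + 32)  # 256 output channels after Concatenate
```

Both numbers should agree with the summary output shown next.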


"""
#add与concatenate的区别
input1=Input([18,])
x1=Dense(8,activation='relu')(input1)
input2=Input([25,])
x2=Dense(8,activation='relu')(input2)
#Add层是对channel数相同的层进行内容的加
#Concatenate层是对相同size,不同channel的层进行堆叠
outputs=Add()([x1,x2])
model=Model([input1,input2],outputs)
model.summary()

结果如下:

"""
Model: "model_1"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_2 (InputLayer)            [(None, 28, 28, 192) 0                                            
__________________________________________________________________________________________________
conv2d_7 (Conv2D)               (None, 28, 28, 96)   18528       input_2[0][0]                    
__________________________________________________________________________________________________
conv2d_9 (Conv2D)               (None, 28, 28, 16)   3088        input_2[0][0]                    
__________________________________________________________________________________________________
max_pooling2d_1 (MaxPooling2D)  (None, 28, 28, 192)  0           input_2[0][0]                    
__________________________________________________________________________________________________
conv2d_6 (Conv2D)               (None, 28, 28, 64)   12352       input_2[0][0]                    
__________________________________________________________________________________________________
conv2d_8 (Conv2D)               (None, 28, 28, 128)  110720      conv2d_7[0][0]                   
__________________________________________________________________________________________________
conv2d_10 (Conv2D)              (None, 28, 28, 32)   12832       conv2d_9[0][0]                   
__________________________________________________________________________________________________
conv2d_11 (Conv2D)              (None, 28, 28, 32)   6176        max_pooling2d_1[0][0]            
__________________________________________________________________________________________________
concatenate_1 (Concatenate)     (None, 28, 28, 256)  0           conv2d_6[0][0]                   
                                                                 conv2d_8[0][0]                   
                                                                 conv2d_10[0][0]                  
                                                                 conv2d_11[0][0]                  
==================================================================================================
Total params: 163,696
Trainable params: 163,696
Non-trainable params: 0
__________________________________________________________________________________________________
"""

From the structure above we can see that the 1×1 convolutions (which reduce the number of input channels) and the max-pooling layer all run directly on the input; each branch then applies its larger convolution; and finally the four branch outputs are concatenated along the channel axis.
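The saving from the 1×1 reduction can be verified with simple parameter arithmetic. The numbers below (pure Python) should match the conv2d_9 and conv2d_10 rows of the summary above:

```python
# 5x5 branch: 1x1 bottleneck (192 -> 16) followed by a 5x5 conv (16 -> 32)
bottleneck = 1 * 1 * 192 * 16 + 16  # 3088, matches conv2d_9
conv5 = 5 * 5 * 16 * 32 + 32        # 12832, matches conv2d_10

# The same 5x5 conv applied directly to all 192 input channels
direct = 5 * 5 * 192 * 32 + 32      # 153632

print(bottleneck + conv5, direct)   # the bottleneck uses ~10x fewer parameters
```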

