CNN Model Reproduction 1: AlexNet

1. AlexNet

1.1 Key Points

  1. AlexNet introduced the ReLU activation function; this reproduction uses BatchNormalization in place of the original Local Response Normalization (LRN). AlexNet has 5 convolutional layers and 3 fully connected layers.
  2. The input image size is [224, 224, 3]. The first layer extracts features with 96 kernels of size 11x11x3 at stride 4, then normalizes and max-pools with pool size 3 and stride 2. (The sketch after this list shows how the spatial sizes follow from these settings.)
  3. The second layer splits the 96 feature maps into two groups (trained on two GPUs in the original paper) and extracts features with 256 kernels of size 5x5x48, then normalizes and max-pools with pool size 3 and stride 2.
  4. The third layer extracts features with 384 kernels of size 3x3x256.
  5. The fourth layer extracts features with 384 kernels of size 3x3x192.
  6. The fifth layer extracts features with 256 kernels of size 3x3x192, then normalizes and max-pools with pool size 3 and stride 2.
  7. The feature maps are then flattened and passed through two fully connected layers of 4096 units each, one Dropout layer, and finally a fully connected layer of 1000 units.
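A minimal sketch (plain Python, separate from the model code below) of how these spatial sizes arise: a Keras 'same'-padded convolution outputs ceil(n / stride), while the default 'valid' MaxPool2D outputs floor((n - pool) / stride) + 1.

import math

def conv_same_out(n, stride):
    # 'same'-padded conv: output size is ceil(n / stride)
    return math.ceil(n / stride)

def pool_valid_out(n, pool, stride):
    # 'valid' (default) max pool: floor((n - pool) / stride) + 1
    return (n - pool) // stride + 1

n = conv_same_out(224, 4)                      # conv1 (stride 4) -> 56
n = pool_valid_out(n, 3, 2)                    # pool1 -> 27
n = pool_valid_out(conv_same_out(n, 1), 3, 2)  # conv2 + pool2 -> 13
n = pool_valid_out(conv_same_out(n, 1), 3, 2)  # conv5 + pool3 -> 6 (convs 3-5 keep 13)
print(n * n * 256)                             # 9216, the Flatten size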

1.2 Network Structure

Notation: C(filters, kernel, stride, padding, activation, [output shape]); BM = BatchNormalization followed by MaxPool(pool size, stride, [output shape]); F = Flatten([output shape]); D = Dense(units, activation, [output shape]) or Dropout(rate); 's' stands for 'same' padding. The parameter counts these layers produce can be checked with the sketch after this list.

  1. 224,224,3–>C(96,11*11,4,'s','relu',[56, 56, 96])BM(3,2,[27, 27, 96])–>
  2. 27,27,96–>C(256,5*5,1,'s','relu',[27, 27, 256])BM(3,2,[13, 13, 256])–>
  3. 13,13,256–>C(384,3*3,1,'s','relu',[13, 13, 384])–>
  4. 13,13,384–>C(384,3*3,1,'s','relu',[13, 13, 384])–>
  5. 13,13,384–>C(256,3*3,1,'s','relu',[13, 13, 256])BM(3,2,[6, 6, 256])–>
  6. 6,6,256–>F([9216])D(4096,'relu',[4096])D(4096,'relu',[4096])D(0.5)D(1000,'softmax',[1000])–>
  7. 1000
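A quick sanity check (plain Python, hypothetical helper names) of how the parameter counts in the model summary below arise: a conv layer has (kh*kw*cin + 1)*filters parameters and a dense layer has (n_in + 1)*units, each "+1" being the bias term.

def conv_params(kh, kw, cin, filters):
    # weights plus one bias per filter
    return (kh * kw * cin + 1) * filters

def dense_params(n_in, units):
    # weights plus one bias per unit
    return (n_in + 1) * units

print(conv_params(11, 11, 3, 96))   # 34944    (conv2d_1)
print(conv_params(5, 5, 96, 256))   # 614656   (conv2d_2)
print(dense_params(9216, 4096))     # 37752832 (dense_1)
print(dense_params(4096, 1000))     # 4097000  (dense_3)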


1.3 Code

(1) Code Flow

  1. Import the required functions
  2. Set up the input tensor
  3. Build the first layer
  4. Convolutional blocks for layers 2-5
  5. The two middle fully connected layers
  6. The final output fully connected layer
  7. Build the model (plot_model requires pydot and Graphviz: pip install pydot)

Conv(f=96,k=11,s=4,p='s')
Relu
Batchnorm
MaxPool(k=3,s=2) 
   ↓
Conv(f=256,k=5,s=1,p='s')
Relu
Batchnorm
MaxPool(k=3,s=2) 
   ↓
Conv(f=384,k=3,s=1,p='s')
Relu 
   ↓
Conv(f=384,k=3,s=1,p='s')
Relu 
   ↓
Conv(f=256,k=3,s=1,p='s')
Relu
Batchnorm
MaxPool(k=3,s=2) 
   ↓
Flatten
Fc(4096)
Relu 
Fc(4096)
Relu 
Dropout(0.5)
   ↓
Fc(1000)
SoftMax

(2) Python Code

'''This code uses BatchNormalization in place of LRN (Local Response Normalization)'''
# 1. Imports
from keras import Model
from keras.utils import plot_model
from keras.layers import Input,Conv2D,BatchNormalization,MaxPool2D,Flatten,Dense,Dropout

# 2. Tensors and layers
input = Input(shape=(224,224,3)) # Input() == tf.placeholder()

# 3. 1st convolutional layer kernel 96[11x11x3]
x = Conv2D(filters=96,kernel_size=11,strides=4,padding='same',activation='relu')(input)
x = BatchNormalization()(x)
x = MaxPool2D(pool_size=3,strides=2)(x)

# 4. 2nd convolutional layer  kernel 256[5x5x48] in the paper (two GPU groups); 5x5x96 here, ungrouped
x = Conv2D(filters=256,kernel_size=5,strides=1,padding='same',activation='relu')(x)
x = BatchNormalization()(x)
x = MaxPool2D(pool_size=3,strides=2)(x)

# 5. 3rd convolutional layer  kernel 384[3x3x256]
x = Conv2D(filters=384,kernel_size=3,strides=1,padding='same',activation='relu')(x)

# 6. 4th convolutional layer   kernel 384[3x3x192] in the paper; 3x3x384 here, ungrouped
x = Conv2D(filters=384,kernel_size=3,strides=1,padding='same',activation='relu')(x)

# 7. 5th convolutional layer    kernel 256[3x3x192] in the paper; 3x3x384 here, ungrouped
x = Conv2D(filters=256,kernel_size=3,strides=1,padding='same',activation='relu')(x)
x = BatchNormalization()(x)
x = MaxPool2D(pool_size=3,strides=2)(x)

# 8. Dense layers
x = Flatten()(x)
x = Dense(units=4096,activation='relu')(x)
x = Dense(units=4096,activation='relu')(x)
x = Dropout(rate=0.5)(x)

# 9. Output layer
output = Dense(units=1000,activation='softmax')(x)

# 10. Model
model = Model(inputs=input,outputs=output)
model.summary()
plot_model(model,show_shapes = True)

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         (None, 224, 224, 3)       0         
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 56, 56, 96)        34944     
_________________________________________________________________
batch_normalization_1 (Batch (None, 56, 56, 96)        384       
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 27, 27, 96)        0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 27, 27, 256)       614656    
_________________________________________________________________
batch_normalization_2 (Batch (None, 27, 27, 256)       1024      
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 13, 13, 256)       0         
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 13, 13, 384)       885120    
_________________________________________________________________
conv2d_4 (Conv2D)            (None, 13, 13, 384)       1327488   
_________________________________________________________________
conv2d_5 (Conv2D)            (None, 13, 13, 256)       884992    
_________________________________________________________________
batch_normalization_3 (Batch (None, 13, 13, 256)       1024      
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 6, 6, 256)         0         
_________________________________________________________________
flatten_1 (Flatten)          (None, 9216)              0         
_________________________________________________________________
dense_1 (Dense)              (None, 4096)              37752832  
_________________________________________________________________
dense_2 (Dense)              (None, 4096)              16781312  
_________________________________________________________________
dropout_1 (Dropout)          (None, 4096)              0         
_________________________________________________________________
dense_3 (Dense)              (None, 1000)              4097000   
=================================================================
Total params: 62,380,776
Trainable params: 62,379,560
Non-trainable params: 1,216
_________________________________________________________________
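A minimal usage sketch for compiling and training the model, assuming random placeholder data (the dataset, optimizer, and hyperparameters here are illustrative, not from the original paper):

import numpy as np

# Hypothetical placeholder data: 8 random images, one-hot labels over 1000 classes
x_train = np.random.rand(8, 224, 224, 3).astype('float32')
y_train = np.eye(1000)[np.random.randint(0, 1000, size=8)]

model.compile(optimizer='sgd',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=4, epochs=1)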
