Classic Convolutional Networks --- LeNet, AlexNet, VGGNet, InceptionNet, ResNet [notes from Prof. Cao Jian's AI course, Peking University]

LeNet: the pioneering work of convolutional neural networks


Proposed by Yann LeCun in 1998, it reduces the number of network parameters by sharing convolution kernels.
LeNet has 2 convolutional layers and 3 fully connected layers.
In the feature-extraction stage, everything other than the convolutions themselves (normalization, pooling, activation, etc.) is treated as an attachment to the convolutional layer and is not counted in the layer total.
Convolutional layers: 6 kernels of size 5×5, stride 1, no zero padding
Pooling layers: 2×2 max pooling, stride 2, no zero padding
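
With the 32×32×3 CIFAR-10 input used in the code below, the shapes work out as follows (a quick check, using the 6-then-16 kernel counts of LeNet-5):
32×32×3 → conv 5×5, 6 kernels, valid → 28×28×6 → max pool 2×2, stride 2 → 14×14×6 → conv 5×5, 16 kernels, valid → 10×10×16 → max pool 2×2, stride 2 → 5×5×16 → flatten (400) → Dense 120 → Dense 84 → Dense 10 (softmax)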

From network to network, only the subclass of Model shown here differs; refer to the complete code for the rest.

import tensorflow as tf
from tensorflow.keras import Model

class LeNet(Model):
    def __init__(self):
        super(LeNet,self).__init__()  
        self.c1=tf.keras.layers.Conv2D(filters=6,kernel_size=(5,5),padding="valid",input_shape=(32,32,3))
        # batch normalization had not been introduced yet in that era
        self.a1=tf.keras.layers.Activation("sigmoid")
        self.p1=tf.keras.layers.MaxPool2D(pool_size=(2,2),strides=2,padding="valid")
        # nor had dropout
        
        self.c2=tf.keras.layers.Conv2D(filters=16,kernel_size=(5,5),padding="valid")
        # again, no batch normalization in that era
        self.a2=tf.keras.layers.Activation("sigmoid")
        self.p2=tf.keras.layers.MaxPool2D(pool_size=(2,2),strides=2,padding="valid")
        
        self.flatten=tf.keras.layers.Flatten()
        self.f1=tf.keras.layers.Dense(120,activation="sigmoid")
        self.f2=tf.keras.layers.Dense(84,activation="sigmoid")
        self.f3=tf.keras.layers.Dense(10,activation="softmax")
        
    def call(self, x):  # override call to implement the forward pass
        x = self.c1(x)
        x = self.a1(x)
        x = self.p1(x)
        
        x = self.c2(x)
        x = self.a2(x)
        x = self.p2(x)
        
        x = self.flatten(x)
        x = self.f1(x)
        x = self.f2(x)
        y = self.f3(x)
        return y
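
The class above only defines the network. A minimal sketch of the shared compile-and-fit pipeline, reusing the imports above (the same CIFAR-10 setup as the full ResNet script at the end of this post; the batch size and epoch count here are only illustrative):

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0   # scale pixel values to [0, 1]

model = LeNet()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",   # labels are integer class ids
              metrics=["sparse_categorical_accuracy"])
model.fit(x_train, y_train, batch_size=32, epochs=5,
          validation_data=(x_test, y_test), validation_freq=1)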


AlexNet

Born in 2012, it is one of the representative works from Hinton's group. It uses the ReLU activation function, which speeds up training, and applies dropout in the classification (fully connected) stage to alleviate overfitting.

Two stacked 3×3 convolutions produce an output feature map of the same size (and the same 5×5 receptive field) as a single 5×5 convolution, and when the channel counts are large, the two 3×3 layers require less computation.
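
For example, with C input channels and C output channels, a single 5×5 convolution has 5×5×C×C = 25C² weights, while two stacked 3×3 convolutions have 2×3×3×C×C = 18C² weights and still cover a 5×5 receptive field.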

AlexNet has 5 convolutional layers and 3 fully connected layers, 8 layers in total.

class AlexNet(Model):
    def __init__(self):
        super(AlexNet,self).__init__()  
        self.c1=tf.keras.layers.Conv2D(filters=96,kernel_size=(3,3),padding="valid",input_shape=(32,32,3))
        self.b1=tf.keras.layers.BatchNormalization()   # the original paper used LRN; BN is used here as a more mainstream substitute
        self.a1=tf.keras.layers.Activation("relu")
        self.p1=tf.keras.layers.MaxPool2D(pool_size=(3,3),strides=2)
        # no dropout in the convolutional stages (it is applied after the dense layers below)
        
        self.c2=tf.keras.layers.Conv2D(filters=256,kernel_size=(3,3),padding="valid")
        self.b2=tf.keras.layers.BatchNormalization()   
        self.a2=tf.keras.layers.Activation("relu")
        self.p2=tf.keras.layers.MaxPool2D(pool_size=(3,3),strides=2)
       
        self.c3=tf.keras.layers.Conv2D(filters=384,kernel_size=(3,3),padding="same")
        self.b3=tf.keras.layers.BatchNormalization()   
        self.a3=tf.keras.layers.Activation("relu")
        
        self.c4=tf.keras.layers.Conv2D(filters=384,kernel_size=(3,3),padding="same")
        self.b4=tf.keras.layers.BatchNormalization()   
        self.a4=tf.keras.layers.Activation("relu")
        
        self.c5=tf.keras.layers.Conv2D(filters=256,kernel_size=(3,3),padding="same")
        self.b5=tf.keras.layers.BatchNormalization()   
        self.a5=tf.keras.layers.Activation("relu")
        self.p5=tf.keras.layers.MaxPool2D(pool_size=(3,3),strides=2)
        
        
        self.flatten=tf.keras.layers.Flatten()
        self.f1=tf.keras.layers.Dense(2048,activation="relu")
        self.d1=tf.keras.layers.Dropout(0.5)
        self.f2=tf.keras.layers.Dense(2048,activation="relu")
        self.d2=tf.keras.layers.Dropout(0.5)
        self.f3=tf.keras.layers.Dense(10,activation="softmax")
        
    def call(self, x):  # override call to implement the forward pass
        x = self.c1(x)
        x = self.b1(x)
        x = self.a1(x)
        x = self.p1(x)
        
        x = self.c2(x)
        x = self.b2(x)
        x = self.a2(x)
        x = self.p2(x)
        
        x = self.c3(x)
        x = self.b3(x)
        x = self.a3(x)
  
        x = self.c4(x)
        x = self.b4(x)
        x = self.a4(x)
        
        x = self.c5(x)
        x = self.b5(x)
        x = self.a5(x)
        x = self.p5(x)
        
        
        x = self.flatten(x)
        x = self.f1(x)
        x = self.d1(x)
        x = self.f2(x)
        x = self.d2(x)
        y = self.f3(x)
        return y



VGGNet

Born in 2014, VGGNet uses small convolution kernels; its regular network structure is well suited to hardware acceleration.

Here, the deeper the layer, the more convolution kernels it uses. Feature maps become smaller toward the back of the network, so increasing the number of kernels increases the depth of the feature maps and preserves their capacity to carry information.
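
In the CIFAR-10 version below, for example, the spatial size is halved at every pooling stage (32→16→8→4→2→1) while the kernel count grows 64→128→256→512→512.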

VGG16 has 16 weight layers: 13 convolutional layers and 3 fully connected layers.

class VGGNet(Model):
    def __init__(self):
        super(VGGNet,self).__init__()  
        self.c1=tf.keras.layers.Conv2D(filters=64,kernel_size=(3,3),padding="same",input_shape=(32,32,3))
        self.b1=tf.keras.layers.BatchNormalization()
        self.a1=tf.keras.layers.Activation("relu")
        
        self.c2=tf.keras.layers.Conv2D(filters=64,kernel_size=(3,3),padding="same")
        self.b2=tf.keras.layers.BatchNormalization()
        self.a2=tf.keras.layers.Activation("relu")
        self.p2=tf.keras.layers.MaxPool2D(pool_size=(2,2),strides=2,padding="same")
        self.d2=tf.keras.layers.Dropout(0.2)
        
        
        
        self.c3=tf.keras.layers.Conv2D(filters=128,kernel_size=(3,3),padding="same")
        self.b3=tf.keras.layers.BatchNormalization()
        self.a3=tf.keras.layers.Activation("relu")
        
        self.c4=tf.keras.layers.Conv2D(filters=128,kernel_size=(3,3),padding="same")
        self.b4=tf.keras.layers.BatchNormalization()
        self.a4=tf.keras.layers.Activation("relu")
        self.p4=tf.keras.layers.MaxPool2D(pool_size=(2,2),strides=2,padding="same")
        self.d4=tf.keras.layers.Dropout(0.2)
        
        
        
        self.c5=tf.keras.layers.Conv2D(filters=256,kernel_size=(3,3),padding="same")
        self.b5=tf.keras.layers.BatchNormalization()
        self.a5=tf.keras.layers.Activation("relu")
        
        self.c6=tf.keras.layers.Conv2D(filters=256,kernel_size=(3,3),padding="same")
        self.b6=tf.keras.layers.BatchNormalization()
        self.a6=tf.keras.layers.Activation("relu")
        
        self.c7=tf.keras.layers.Conv2D(filters=256,kernel_size=(3,3),padding="same")
        self.b7=tf.keras.layers.BatchNormalization()
        self.a7=tf.keras.layers.Activation("relu")
        self.p7=tf.keras.layers.MaxPool2D(pool_size=(2,2),strides=2,padding="same")
        self.d7=tf.keras.layers.Dropout(0.2)
        
        
        self.c8=tf.keras.layers.Conv2D(filters=512,kernel_size=(3,3),padding="same")
        self.b8=tf.keras.layers.BatchNormalization()
        self.a8=tf.keras.layers.Activation("relu")
        
        self.c9=tf.keras.layers.Conv2D(filters=512,kernel_size=(3,3),padding="same")
        self.b9=tf.keras.layers.BatchNormalization()
        self.a9=tf.keras.layers.Activation("relu")
        
        self.c10=tf.keras.layers.Conv2D(filters=512,kernel_size=(3,3),padding="same")
        self.b10=tf.keras.layers.BatchNormalization()
        self.a10=tf.keras.layers.Activation("relu")
        self.p10=tf.keras.layers.MaxPool2D(pool_size=(2,2),strides=2,padding="same")
        self.d10=tf.keras.layers.Dropout(0.2)
        
        
        self.c11=tf.keras.layers.Conv2D(filters=512,kernel_size=(3,3),padding="same")
        self.b11=tf.keras.layers.BatchNormalization()
        self.a11=tf.keras.layers.Activation("relu")
        
        self.c12=tf.keras.layers.Conv2D(filters=512,kernel_size=(3,3),padding="same")
        self.b12=tf.keras.layers.BatchNormalization()
        self.a12=tf.keras.layers.Activation("relu")
        
        self.c13=tf.keras.layers.Conv2D(filters=512,kernel_size=(3,3),padding="same")
        self.b13=tf.keras.layers.BatchNormalization()
        self.a13=tf.keras.layers.Activation("relu")
        self.p13=tf.keras.layers.MaxPool2D(pool_size=(2,2),strides=2,padding="same")
        self.d13=tf.keras.layers.Dropout(0.2)
        
        
        
        
        
        self.flatten=tf.keras.layers.Flatten()
        self.f1=tf.keras.layers.Dense(512,activation="relu")
        self.d14=tf.keras.layers.Dropout(0.2)
        self.f2=tf.keras.layers.Dense(512,activation="relu")
        self.d15=tf.keras.layers.Dropout(0.2)
        self.f3=tf.keras.layers.Dense(10,activation="softmax")
        
    def call(self, x):  # override call to implement the forward pass
        x = self.c1(x)
        x = self.b1(x)
        x = self.a1(x)
        
        x = self.c2(x)
        x = self.b2(x)
        x = self.a2(x)
        x = self.p2(x)
        x = self.d2(x)
        
        
        x = self.c3(x)
        x = self.b3(x)
        x = self.a3(x)
        
        x = self.c4(x)
        x = self.b4(x)
        x = self.a4(x)
        x = self.p4(x)
        x = self.d4(x)
        
        
        x = self.c5(x)
        x = self.b5(x)
        x = self.a5(x)
        
        x = self.c6(x)
        x = self.b6(x)
        x = self.a6(x)
        
        x = self.c7(x)
        x = self.b7(x)
        x = self.a7(x)
        x = self.p7(x)
        x = self.d7(x)
        
        
        x = self.c8(x)
        x = self.b8(x)
        x = self.a8(x)
        
        x = self.c9(x)
        x = self.b9(x)
        x = self.a9(x)
        
        x = self.c10(x)
        x = self.b10(x)
        x = self.a10(x)
        x = self.p10(x)
        x = self.d10(x)
        
        
        x = self.c11(x)
        x = self.b11(x)
        x = self.a11(x)
        
        x = self.c12(x)
        x = self.b12(x)
        x = self.a12(x)
        
        x = self.c13(x)
        x = self.b13(x)
        x = self.a13(x)
        x = self.p13(x)
        x = self.d13(x)
        
        
        x = self.flatten(x)
        x = self.f1(x)
        x = self.d14(x)
        x = self.f2(x)
        x = self.d15(x)
        y = self.f3(x)
        return y


InceptionNet

Born in 2014, it introduced the Inception structural block, which uses convolution kernels of different sizes within the same layer of the network to improve the model's perceptive ability, and uses batch normalization to alleviate the vanishing-gradient problem.

Core idea: the Inception structural block.
A 1×1 convolution kernel acts on every pixel position of the input feature map. Setting the number of 1×1 kernels smaller than the depth of the input feature map reduces the depth of the output feature map, acting as dimensionality reduction and cutting the number of parameters and the amount of computation.
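
A quick illustration (the numbers are made up for this example, not taken from the code below): with a 28×28×128 input feature map and a following 5×5 convolution with 256 output channels, applying the 5×5 convolution directly needs 5×5×128×256 = 819,200 weights; first reducing the depth to 32 with a 1×1 convolution needs 1×1×128×32 + 5×5×32×256 = 208,896 weights, roughly a 4× saving.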


The block contains four branches (every operation is a convolution unless noted otherwise):
1×1
1×1, then 3×3
1×1, then 5×5
3×3 (max pooling), then 1×1
The four branches output feature maps of the same size; they are fed into a concatenator, which stacks the received data along the depth dimension to form the output of the Inception structural block.

The first layer uses 16 kernels of size 3×3; it is followed by 4 Inception structural blocks (every 2 make up one block group), global average pooling, and the classification layer.

class ConBNRelu(Model):
    def __init__(self,ch,kernel_size=3,strides=1,padding="same"):
        super(ConBNRelu,self).__init__()
        self.model=tf.keras.models.Sequential([
            tf.keras.layers.Conv2D(ch,kernel_size,strides=strides,padding=padding),
            tf.keras.layers.BatchNormalization(),
            tf.keras.layers.Activation("relu")
        ])
        
    def call(self,x):
        x=self.model(x)
        return x     

class InceptionBLK(Model):
    def __init__(self,ch,strides=1):
        super(InceptionBLK,self).__init__()
        self.ch=ch
        self.strides=strides
        
        self.c1=ConBNRelu(ch,kernel_size=1,strides=strides)
        
        self.c2_1=ConBNRelu(ch,kernel_size=1,strides=strides)
        self.c2_2=ConBNRelu(ch,kernel_size=3,strides=1)
        
        self.c3_1=ConBNRelu(ch,kernel_size=1,strides=strides)
        self.c3_2=ConBNRelu(ch,kernel_size=5,strides=1)
        
        self.p4_1=tf.keras.layers.MaxPool2D(3,strides=1,padding="same")
        self.c4_2=ConBNRelu(ch,kernel_size=1,strides=strides)
        
    def call(self,x):
        x1=self.c1(x)
        
        x2_1=self.c2_1(x)
        x2_2=self.c2_2(x2_1)
        
        x3_1=self.c3_1(x)
        x3_2=self.c3_2(x3_1)
        
        x4_1=self.p4_1(x)
        x4_2=self.c4_2(x4_1)
        
        x=tf.concat([x1,x2_2,x3_2,x4_2],axis=3)
        return x     

class InceptionNet(Model):
    def __init__(self,num_block,num_classes,init_ch=16,**kwargs):
        super(InceptionNet,self).__init__(**kwargs)
        self.inchannel=init_ch
        self.outchannel=init_ch
        self.init_ch=init_ch
        
        self.c1=ConBNRelu(init_ch)
        self.blocks=tf.keras.models.Sequential()
        for block_id in range(num_block):
            for layer_id in range(2):
                if layer_id==0:
                    block=InceptionBLK(self.outchannel,strides=2)  # the first Inception block in each group uses stride 2 to halve the feature-map size
                else:
                    block=InceptionBLK(self.outchannel,strides=1)
                self.blocks.add(block)
            self.outchannel*=2
        self.p1=tf.keras.layers.GlobalAveragePooling2D()
        self.f1=tf.keras.layers.Dense(num_classes,activation="softmax")
    
    def call(self,x):
        x=self.c1(x)
        x=self.blocks(x)
        x=self.p1(x)
        y=self.f1(x)
        return y


model=InceptionNet(num_block=2,num_classes=10)        
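
A quick sanity check of the depth-wise concatenation described above (a small sketch that simply runs one of the blocks defined above on a random tensor; it assumes tensorflow is imported as tf, as in the rest of the code):

x = tf.random.normal([1, 32, 32, 3])   # one fake 32x32 RGB image
blk = InceptionBLK(ch=16, strides=1)
print(blk(x).shape)                    # (1, 32, 32, 64): four 16-channel branches concatenated along depth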


ResNet

Born in 2015, ResNet proposed residual skip connections between layers. They carry earlier information forward, alleviate vanishing gradients, and make much deeper networks feasible. ResNet's author Kaiming He found that simply stacking more layers degrades the model, to the point where later features lose the original appearance of the earlier ones.
He used skip connections to feed earlier feature maps directly to later layers. The '+' in a ResNet block is an element-wise addition of feature maps.

A ResNet block comes in two forms:
when the dimensions before and after the stacked convolutions are the same, the two can be added directly;
when the dimensions differ, a 1×1 convolution on the shortcut adjusts the dimension first.
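
Written as a formula: y = ReLU(F(x) + x) when the dimensions match, and y = ReLU(F(x) + W·x) when they do not, where W is realized by the 1×1 convolution on the shortcut (residual_path=True in the code below).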

The code I wrote while following the instructor had a problem here (training accuracy stayed around 0.1), so I copied a working version from elsewhere.

import tensorflow as tf
physical_devices = tf.config.list_physical_devices('GPU')
if physical_devices:  # enable GPU memory growth only when a GPU is present
    tf.config.experimental.set_memory_growth(physical_devices[0], True)
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras import Model
from tensorflow.keras.layers import Conv2D,BatchNormalization,Activation,GlobalAveragePooling2D,Dense
import pandas as pd

(x_train,y_train),(x_test,y_test)=tf.keras.datasets.cifar10.load_data()

x_train,x_test=x_train/255.0,x_test/255.0

class ResnetBlock(Model):
    def __init__(self, filters, strides=1, residual_path=False):
        super(ResnetBlock, self).__init__()
        self.filters = filters
        self.strides = strides
        self.residual_path = residual_path
        self.c1 = Conv2D(filters, (3, 3), strides=strides, padding='same', use_bias=False)
        self.b1 = BatchNormalization()
        self.a1 = Activation('relu')
        self.c2 = Conv2D(filters, (3, 3), strides=1, padding='same', use_bias=False)
        self.b2 = BatchNormalization()
        # when residual_path is True, downsample the input with a 1x1 convolution so that x matches the dimensions of F(x) and the two can be added
        if residual_path:
            self.down_c1 = Conv2D(filters, (1, 1), strides=strides, padding='same', use_bias=False)
            self.down_b1 = BatchNormalization()
        self.a2 = Activation('relu')
    def call(self, inputs):
        residual = inputs  # residual is the input itself, i.e. residual = x
        # pass the input through the conv, BN and activation layers to compute F(x)
        x = self.c1(inputs)
        x = self.b1(x)
        x = self.a1(x)
        x = self.c2(x)
        y = self.b2(x)
        if self.residual_path:
            residual = self.down_c1(inputs)
            residual = self.down_b1(residual)
        out = self.a2(y + residual)  # the final output is the sum of the two parts, F(x)+x or F(x)+Wx, passed through the activation
        return out

class ResNet18(Model):
    def __init__(self, block_list, initial_filters=64):  # block_list gives the number of residual blocks in each block group
        super(ResNet18, self).__init__()
        self.num_blocks = len(block_list)  # total number of block groups
        self.block_list = block_list
        self.out_filters = initial_filters
        self.c1 = Conv2D(self.out_filters, (3, 3), strides=1, padding='same', use_bias=False)
        self.b1 = BatchNormalization()
        self.a1 = Activation('relu')
        self.blocks = tf.keras.models.Sequential()
        # build the ResNet structure
        for block_id in range(len(block_list)):  # which resnet block group
            for layer_id in range(block_list[block_id]):  # which residual block within the group
                if block_id != 0 and layer_id == 0:  # downsample the input of every block group except the first
                    block = ResnetBlock(self.out_filters, strides=2, residual_path=True)
                else:
                    block = ResnetBlock(self.out_filters, residual_path=False)
                self.blocks.add(block)  # add the constructed block to the resnet
            self.out_filters *= 2  # the next block group uses twice as many kernels as the previous one
        self.p1 = tf.keras.layers.GlobalAveragePooling2D()
        self.f1 = tf.keras.layers.Dense(10, activation='softmax', kernel_regularizer=tf.keras.regularizers.l2())
    def call(self, inputs):
        x = self.c1(inputs)
        x = self.b1(x)
        x = self.a1(x)
        x = self.blocks(x)
        x = self.p1(x)
        y = self.f1(x)
        return y       

model=ResNet18([2,2,2,2])

model.compile(optimizer="adam",
             loss="sparse_categorical_crossentropy",
             metrics=["sparse_categorical_accuracy"])

checkpoint_save_path = "./ResNet18_cifar10.ckpt"
cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_save_path, save_weights_only=True, save_best_only=True)

history=model.fit(x_train,y_train,batch_size=400,epochs=5,validation_data=(x_test,y_test),validation_freq=1,callbacks=[cp_callback])        
pd.DataFrame(history.history).to_csv("cifar_training_log.csv",index=False)

graph=pd.read_csv("cifar_training_log.csv")
graph.plot(figsize=(5,4))
plt.xlim(0,4)
plt.ylim(0,2)
plt.grid(1)
  
plt.show()

num=np.random.randint(1,10000)
demo=tf.reshape(x_test[num],(1,32,32,3))
y_pred=np.argmax(model.predict(demo))
plt.imshow(x_test[num])
plt.show()
print("标签值:"+str(y_test[num,0])+"\n预测值:"+str(y_pred))
#飞机,汽车,鸟,猫,鹿,狗,青蛙,马,船,卡车
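
To print a human-readable class name instead of an integer id, a small optional sketch (the class_names list simply spells out the CIFAR-10 classes from the comment above):

class_names = ["airplane", "automobile", "bird", "cat", "deer",
               "dog", "frog", "horse", "ship", "truck"]
print("Label: " + class_names[int(y_test[num, 0])] + "\nPrediction: " + class_names[y_pred])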

Comparison of training results

(The original post shows the training and validation curves for LeNet, AlexNet, VGGNet, InceptionNet, and ResNet as images here.)
