Deep Learning --- The ResNet Network Architecture

What is ResNet, and why were residual networks introduced?

  1. ResNet is a residual network architecture. In general, as a plain network gets deeper, "degradation" and "vanishing gradients" become serious problems: accuracy saturates and then drops, even on the training set. Residual networks were proposed to address this, as the sketch below illustrates.
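The core idea is that each block learns a residual mapping F(x) and adds the input back through a shortcut connection, so the block outputs F(x) + x; the identity path lets gradients flow directly to earlier layers. A minimal Keras sketch of one such connection (assuming, for illustration, a 56×56 feature map with 64 channels):

    from tensorflow.keras.layers import Input, Conv2D, Activation, add

    x = Input(shape=(56, 56, 64))               # assumed 64-channel feature map
    fx = Conv2D(64, (3, 3), padding='same')(x)  # residual branch F(x)
    fx = Activation('relu')(fx)
    fx = Conv2D(64, (3, 3), padding='same')(fx)
    y = Activation('relu')(add([fx, x]))        # y = relu(F(x) + x)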

Structure of the residual block (figure omitted)

Note:

The input x and the convolutional branch output F(x) must have the same dimensions so they can be added element-wise.

In ResNet models with 50 or more layers, the residual block is reworked into a "bottleneck" to reduce computation: the two 3×3 layers are replaced by three layers, 1×1 -> 3×3 -> 1×1, where

the first 1×1 layer reduces the channel dimension,

the 3×3 layer extracts features, and

the final 1×1 layer restores the channel dimension (see the parameter-count sketch below).
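To see why the bottleneck saves computation, compare rough weight counts (ignoring biases and batch norm) for a 256-channel input; the numbers below are a back-of-the-envelope sketch, not part of the original post:

    # Basic block: two 3x3 convolutions at 256 channels.
    basic = 3 * 3 * 256 * 256 * 2                                        # 1,179,648 weights

    # Bottleneck: 1x1 reduces 256 -> 64, 3x3 runs at 64, 1x1 restores 64 -> 256.
    bottleneck = 1 * 1 * 256 * 64 + 3 * 3 * 64 * 64 + 1 * 1 * 64 * 256  # 69,632 weights

    print(basic / bottleneck)  # roughly 17x fewer weights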

 

The overall ResNet architecture (figure omitted)

Network structures of the different ResNet versions (figure omitted):

ResNet-50

The numbers 3, 4, 6, 3 denote how many residual blocks are stacked in each of the four stages (conv2_x through conv5_x); the snippet below lists the counts for the common variants.
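For reference, the per-stage block counts of the standard variants (taken from the original ResNet paper, He et al. 2015) can be written as a lookup:

    # Residual blocks per stage (conv2_x .. conv5_x):
    BLOCK_COUNTS = {
        'resnet18':  [2, 2, 2, 2],   # basic blocks
        'resnet34':  [3, 4, 6, 3],   # basic blocks
        'resnet50':  [3, 4, 6, 3],   # bottleneck blocks
        'resnet101': [3, 4, 23, 3],  # bottleneck blocks
        'resnet152': [3, 8, 36, 3],  # bottleneck blocks
    }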

Building ResNet-34 and ResNet-50 in TensorFlow (Keras)
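The methods below target the Keras functional API. A minimal set of imports they rely on (plus a wrapper class that defines self.type_num, the number of output classes) would look roughly like this:

    from tensorflow.keras.layers import (Input, Conv2D, BatchNormalization, Activation,
                                         add, ZeroPadding2D, MaxPooling2D,
                                         AveragePooling2D, Flatten, Dense, Dropout)
    from tensorflow.keras.models import Model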

The first type of residual block (two 3×3 convolutions, the basic block):

    def bn_block_1(self, in_put, bn_filter, strides=(1, 1), conv_shortcut=False):
        # Basic residual block: two 3x3 convolutions plus a shortcut.
        x = Conv2D(bn_filter, kernel_size=(3, 3), strides=strides, padding='same')(in_put)
        x = BatchNormalization(axis=3)(x)
        x = Activation('relu')(x)
        x = Conv2D(bn_filter, kernel_size=(3, 3), strides=(1, 1), padding='same')(x)
        x = BatchNormalization(axis=3)(x)
        # No ReLU here: the activation is applied after the shortcut addition.

        if conv_shortcut:
            # Projection shortcut for blocks that change the spatial size or
            # channel count (the original paper uses a 1x1 convolution here).
            shortcut = Conv2D(bn_filter, kernel_size=(3, 3), strides=strides, padding='same')(in_put)
            shortcut = BatchNormalization(axis=3)(shortcut)
        else:
            # Identity shortcut: input and output dimensions already match.
            shortcut = in_put

        x = add([x, shortcut])
        x = Activation('relu')(x)
        return x

 

The second type of residual block (1×1 -> 3×3 -> 1×1, the bottleneck block):

    def bn_block_2(self, in_put, bn_filter, strides=(1, 1), conv_shortcut=False):
        # Bottleneck residual block: 1x1 (reduce) -> 3x3 -> 1x1 (restore).
        k1, k2, k3 = bn_filter
        x = Conv2D(k1, kernel_size=1, strides=strides, padding='same')(in_put)
        x = BatchNormalization(axis=3)(x)
        x = Activation('relu')(x)
        x = Conv2D(k2, kernel_size=3, strides=(1, 1), padding='same')(x)
        x = BatchNormalization(axis=3)(x)
        x = Activation('relu')(x)
        x = Conv2D(k3, kernel_size=1, strides=(1, 1), padding='same')(x)
        x = BatchNormalization(axis=3)(x)
        # No ReLU here: the activation is applied after the shortcut addition.

        if conv_shortcut:
            # 1x1 projection shortcut to match the output channel count k3.
            shortcut = Conv2D(k3, kernel_size=1, strides=strides, padding='same')(in_put)
            shortcut = BatchNormalization(axis=3)(shortcut)
        else:
            # Identity shortcut: the input already has k3 channels.
            shortcut = in_put

        x = add([x, shortcut])
        x = Activation('relu')(x)
        return x

ResNet-34 structure:

    def resnet_34(self, input_shape):
        in_put = Input(input_shape)
        x = ZeroPadding2D((3, 3))(in_put)

        # conv1: 7x7/2 convolution followed by 3x3/2 max pooling (as in the paper)
        x = Conv2D(filters=64, kernel_size=(7, 7), strides=(2, 2), padding='valid')(x)
        x = BatchNormalization(axis=3)(x)
        x = Activation('relu')(x)
        x = MaxPooling2D(pool_size=(3, 3), strides=(2, 2), padding='same')(x)

        # conv2_x: 3 basic blocks
        x = self.bn_block_1(x, bn_filter=64)
        x = self.bn_block_1(x, bn_filter=64)
        x = self.bn_block_1(x, bn_filter=64)

        # conv3_x: 4 basic blocks; the first one downsamples
        x = self.bn_block_1(x, bn_filter=128, strides=(2, 2), conv_shortcut=True)
        x = self.bn_block_1(x, bn_filter=128)
        x = self.bn_block_1(x, bn_filter=128)
        x = self.bn_block_1(x, bn_filter=128)

        # conv4_x: 6 basic blocks; the first one downsamples
        x = self.bn_block_1(x, bn_filter=256, strides=(2, 2), conv_shortcut=True)
        x = self.bn_block_1(x, bn_filter=256)
        x = self.bn_block_1(x, bn_filter=256)
        x = self.bn_block_1(x, bn_filter=256)
        x = self.bn_block_1(x, bn_filter=256)
        x = self.bn_block_1(x, bn_filter=256)

        # conv5_x: 3 basic blocks; the first one downsamples
        x = self.bn_block_1(x, bn_filter=512, strides=(2, 2), conv_shortcut=True)
        x = self.bn_block_1(x, bn_filter=512)
        x = self.bn_block_1(x, bn_filter=512)

        # classifier head
        x = AveragePooling2D(pool_size=(7, 7))(x)  # assumes a 224x224 input
        x = Flatten()(x)
        x = Dense(self.type_num, activation='softmax')(x)

        model = Model(inputs=in_put, outputs=x)
        return model

ResNet-50 model structure:

    def resnet_50(self, input_shape):
        in_put = Input(input_shape)
        x = ZeroPadding2D((3, 3))(in_put)

        # conv1: 7x7/2 convolution followed by 3x3/2 max pooling
        x = Conv2D(filters=64, kernel_size=(7, 7), strides=(2, 2), padding='valid')(x)
        x = BatchNormalization(axis=3)(x)
        x = Activation('relu')(x)
        x = MaxPooling2D(pool_size=(3, 3), strides=(2, 2), padding='same')(x)

        # conv2_x: 3 bottleneck blocks; the first needs a projection shortcut
        # to widen 64 channels to 256, but no spatial downsampling
        x = self.bn_block_2(x, bn_filter=[64, 64, 256], strides=(1, 1), conv_shortcut=True)
        x = self.bn_block_2(x, bn_filter=[64, 64, 256])
        x = self.bn_block_2(x, bn_filter=[64, 64, 256])

        # conv3_x: 4 bottleneck blocks; the first one downsamples
        x = self.bn_block_2(x, bn_filter=[128, 128, 512], strides=(2, 2), conv_shortcut=True)
        x = self.bn_block_2(x, bn_filter=[128, 128, 512])
        x = self.bn_block_2(x, bn_filter=[128, 128, 512])
        x = self.bn_block_2(x, bn_filter=[128, 128, 512])

        # conv4_x: 6 bottleneck blocks; the first one downsamples
        x = self.bn_block_2(x, bn_filter=[256, 256, 1024], strides=(2, 2), conv_shortcut=True)
        x = self.bn_block_2(x, bn_filter=[256, 256, 1024])
        x = self.bn_block_2(x, bn_filter=[256, 256, 1024])
        x = self.bn_block_2(x, bn_filter=[256, 256, 1024])
        x = self.bn_block_2(x, bn_filter=[256, 256, 1024])
        x = self.bn_block_2(x, bn_filter=[256, 256, 1024])

        # conv5_x: 3 bottleneck blocks; the first one downsamples
        x = self.bn_block_2(x, bn_filter=[512, 512, 2048], strides=(2, 2), conv_shortcut=True)
        x = self.bn_block_2(x, bn_filter=[512, 512, 2048])
        x = self.bn_block_2(x, bn_filter=[512, 512, 2048])

        # classifier head; the dropout is an extra not present in the paper
        x = Dropout(0.3)(x)
        x = AveragePooling2D(pool_size=(7, 7))(x)  # assumes a 224x224 input
        x = Flatten()(x)
        x = Dense(self.type_num, activation='softmax')(x)

        model = Model(inputs=in_put, outputs=x)
        return model
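To sanity-check either model, instantiate it and print a summary. The ResNetBuilder wrapper below is a hypothetical stand-in for whatever class originally held these methods:

    class ResNetBuilder:
        def __init__(self, type_num):
            self.type_num = type_num  # number of output classes
        # bn_block_1, bn_block_2, resnet_34 and resnet_50 go here, as defined above.

    builder = ResNetBuilder(type_num=10)
    model = builder.resnet_50(input_shape=(224, 224, 3))
    model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
    model.summary()  # ResNet-50 should report on the order of 25M parameters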

 

 
