【MobileNetV1 MobileNetV2 MobileNetV3】

This article presents the design principles and implementations of the three MobileNet versions: V1, V2, and V3. MobileNet relies on depthwise separable convolutions to reduce computation and parameter count; V2 introduces the linear bottleneck structure and inverted residuals; V3 adds the h-swish activation function and SE modules to further improve performance. These models perform well on mobile devices and in resource-constrained settings.


Preface

Example code: https://github.com/2012Netsky/pytorch_cnn/blob/main/4_time_series_bikes.ipynb

Related reference: https://zhuanlan.zhihu.com/p/261110039

MobileNet is a lightweight convolutional neural network whose convolutions use far fewer parameters than standard convolutions.

(1) The basic unit of MobileNet is the depthwise separable convolution, which can be decomposed into two smaller operations: depthwise convolution and pointwise convolution.

Depthwise convolution differs from standard convolution: a standard convolution kernel is applied across all input channels, whereas depthwise convolution uses a separate kernel for each input channel, i.e. one kernel per channel. It generally consists of M kernels of size n×n×1, where M is the depth (number of channels) of the input, as shown in the figure below.

Pointwise convolution consists of N kernels of size 1×1×M, where N is the depth of the output, as shown below.

(figure: depthwise kernels and pointwise kernels)

(2) Depthwise convolution also differs from standard convolution in how the convolution operation is carried out, as shown below (left: depthwise convolution, right: standard convolution).

(figure: depthwise convolution vs. standard convolution)

The complete depthwise separable convolution is shown below.

(figure: depthwise convolution followed by pointwise convolution)
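To make the decomposition concrete, here is a minimal Keras sketch (my own illustration, not from the original post); the 32×32×16 input shape and the 64 output filters are arbitrary assumptions chosen for readability.

# A minimal sketch of a depthwise separable convolution in Keras
# (illustrative shapes only: a 32x32x16 input and 64 output filters).
import tensorflow as tf
from tensorflow.keras import layers

inputs = layers.Input(shape=(32, 32, 16))            # D x D x M feature map
x = layers.DepthwiseConv2D((3, 3), padding='same',
                           use_bias=False)(inputs)   # M kernels of size 3x3x1
x = layers.Conv2D(64, (1, 1), padding='same',
                  use_bias=False)(x)                 # N = 64 kernels of size 1x1xM
model = tf.keras.Model(inputs, x)
model.summary()
# depthwise: 3*3*16 = 144 weights; pointwise: 1*1*16*64 = 1024 weights;
# a standard 3x3 convolution with 64 filters would need 3*3*16*64 = 9216 weights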
(3) Comparing the computational cost with standard convolution

Comparison of the number of operations with a standard convolution kernel:

Assume an input feature map of size D×D×M and an output feature map of size D×D×N.

Cost of a standard K×K convolution: D × D × M × N × K × K

Cost of the depthwise convolution in the separable block: K × K × M × D × D

Cost of the pointwise convolution: M × N × D × D

The total cost of the separable convolution is therefore K × K × M × D × D + M × N × D × D. Dividing by the standard-convolution cost gives

(K × K × M × D × D + M × N × D × D) / (D × D × M × N × K × K) = 1/N + 1/K²

Clearly, this is far less computation (and far fewer parameters) than standard convolution.
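As a quick numeric check of the ratio (my own example; D = 14, M = N = 512, K = 3 are arbitrary values typical of a middle layer):

# Numeric check of the separable-vs-standard cost ratio (illustrative values only).
D, M, N, K = 14, 512, 512, 3

standard  = D * D * M * N * K * K               # standard KxK convolution
separable = K * K * M * D * D + M * N * D * D   # depthwise + pointwise

print(separable / standard)   # ~0.113, i.e. 1/N + 1/K^2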

1. MobileNetV1


#-------------------------------------------------------------#
#   MobileNet network
#-------------------------------------------------------------#
from tensorflow.keras import backend as K
from tensorflow.keras.initializers import RandomNormal
from tensorflow.keras.layers import (Activation, BatchNormalization, Conv2D,
                                     DepthwiseConv2D)
from tensorflow.keras.models import Model
from tensorflow.keras.regularizers import l2


def _depthwise_conv_block(inputs, pointwise_conv_filters, alpha,
                          depth_multiplier=1, strides=(1, 1), block_id=1):

    pointwise_conv_filters = int(pointwise_conv_filters * alpha)
    
    # 3x3 depthwise convolution
    x = DepthwiseConv2D((3, 3),
                        padding='same',
                        depth_multiplier=depth_multiplier,
                        depthwise_initializer=RandomNormal(stddev=0.02),
                        strides=strides,
                        use_bias=False,
                        name='conv_dw_%d' % block_id)(inputs)
    x = BatchNormalization(name='conv_dw_%d_bn' % block_id)(x)
    x = Activation(relu6, name='conv_dw_%d_relu' % block_id)(x)

    # 1x1 pointwise convolution
    x = Conv2D(pointwise_conv_filters, (1, 1),
               kernel_initializer=RandomNormal(stddev=0.02),
               padding='same',
               use_bias=False,
               strides=(1, 1),
               name='conv_pw_%d' % block_id)(x)
    x = BatchNormalization(name='conv_pw_%d_bn' % block_id)(x)
    return Activation(relu6, name='conv_pw_%d_relu' % block_id)(x)

def _conv_block(inputs, filters, alpha, kernel=(3, 3), strides=(1, 1)):
    filters = int(filters * alpha)
    x = Conv2D(filters, kernel, kernel_initializer=RandomNormal(stddev=0.02),
                      padding='same',
                      use_bias=False,
                      strides=strides,
                      name='conv1')(inputs)
    x = BatchNormalization(name='conv1_bn')(x)
    return Activation(relu6, name='conv1_relu')(x)

def relu6(x):
    # ReLU capped at a maximum value of 6
    return K.relu(x, max_value=6)


def MobileNetV1(inputs,alpha=1,depth_multiplier=1):
    if alpha not in [0.25, 0.5, 0.75, 1.0]:
        raise ValueError('Unsupported alpha - `{}` in MobilenetV1, Use 0.25, 0.5, 0.75, 1.0'.format(alpha))

    # 416,416,3 -> 208,208,32
    x = _conv_block(inputs, 32, alpha, strides=(2, 2))
    # 208,208,32 -> 208,208,64
    x = _depthwise_conv_block(x, 64, alpha, depth_multiplier, block_id=1)

    # 208,208,64 -> 104,104,128
    x = _depthwise_conv_block(x, 128, alpha, depth_multiplier,
                              strides=(2, 2), block_id=2)
    x = _depthwise_conv_block(x, 128, alpha, depth_multiplier, block_id=3)

    # 104,104,128 -> 52,52,256
    x = _depthwise_conv_block(x, 256, alpha, depth_multiplier,
                              strides=(2, 2), block_id=4)
    x = _depthwise_conv_block(x, 256, alpha, depth_multiplier, block_id=5)
    feat1 = x

    # 52,52,256 -> 26,26,512
    x = _depthwise_conv_block(x, 512, alpha, depth_multiplier,
                              strides=(2, 2), block_id=6)
    x = _depthwise_conv_block(x, 512, alpha, depth_multiplier, block_id=7)
    x = _depthwise_conv_block(x, 512, alpha, depth_multiplier, block_id=8)
    x = _depthwise_conv_block(x, 512, alpha, depth_multiplier, block_id=9)
    x = _depthwise_conv_block(x, 512, alpha, depth_multiplier, block_id=10)
    x = _depthwise_conv_block(x, 512, alpha, depth_multiplier, block_id=11)
    feat2 = x

    # 26,26,512 -> 13,13,1024
    x = _depthwise_conv_block(x, 1024, alpha, depth_multiplier,
                              strides=(2, 2), block_id=12)
    x = _depthwise_conv_block(x, 1024, alpha, depth_multiplier, block_id=13)
    feat3 = x

    return feat1,feat2,feat3

if __name__ == "__main__":
    from tensorflow.keras.layers import Input
    from tensorflow.keras.models import Model
    alpha   = 0.25
    inputs  = Input([None,None,3])
    outputs = MobileNetV1(inputs,alpha=alpha)
    model   = Model(inputs,outputs)
    model.summary()

2. MobileNetV2

#-------------------------------------------------------------#
#   MobileNetV2 network
#-------------------------------------------------------------#
from tensorflow.keras import backend
from tensorflow.keras.layers import (Activation, Add, BatchNormalization,
                                     Conv2D, DepthwiseConv2D,
                                     ZeroPadding2D)
from tensorflow.keras.initializers import RandomNormal


# relu6!
def relu6(x):
    return backend.relu(x, max_value=6)
    
# compute the zero-padding needed before a stride-2 convolution
def correct_pad(inputs, kernel_size):
    img_dim = 1
    input_size = backend.int_shape(inputs)[img_dim:(img_dim + 2)]

    if isinstance(kernel_size, int):
        kernel_size = (kernel_size, kernel_size)

    if input_size[0] is None:
        adjust = (1, 1)
    else:
        adjust = (1 - input_size[0] % 2, 1 - input_size[1] % 2)

    correct = (kernel_size[0] // 2, kernel_size[1] // 2)

    return ((correct[0] - adjust[0], correct[0]),
            (correct[1] - adjust[1], correct[1]))

# make the result divisible by 8, since the width multiplier alpha is applied
def _make_divisible(v, divisor, min_value=None):
    if min_value is None:
        min_value = divisor
    new_v = max(min_value, int(v + divisor / 2) // divisor * divisor)
    if new_v < 0.9 * v:
        new_v += divisor
    return new_v

def _inverted_res_block(inputs, expansion, stride, alpha, filters, block_id):
    in_channels = backend.int_shape(inputs)[-1]
    pointwise_conv_filters = int(filters * alpha)
    pointwise_filters = _make_divisible(pointwise_conv_filters, 8)

    x = inputs
    prefix = 'block_{}_'.format(block_id)
    # part 1: channel expansion (1x1 conv)
    if block_id:
        # Expand
        x = Conv2D(expansion * in_channels,
                    kernel_initializer=RandomNormal(stddev=0.02),
                    kernel_size=1,
                    padding='same',
                    use_bias=False,
                    activation=None,
                    name=prefix + 'expand')(x)
        x = BatchNormalization(epsilon=1e-3,
                                momentum=0.999,
                                name=prefix + 'expand_BN')(x)
        x = Activation(relu6, name=prefix + 'expand_relu')(x)
    else:
        prefix = 'expanded_conv_'

    if stride == 2:
        x = ZeroPadding2D(padding=correct_pad(x, 3),
                                 name=prefix + 'pad')(x)
    
    # part 2: depthwise convolution
    x = DepthwiseConv2D(kernel_size=3,
                        depthwise_initializer=RandomNormal(stddev=0.02),
                        strides=stride,
                        activation=None,
                        use_bias=False,
                        padding='same' if stride == 1 else 'valid',
                        name=prefix + 'depthwise')(x)
    x = BatchNormalization(epsilon=1e-3,
                            momentum=0.999,
                            name=prefix + 'depthwise_BN')(x)

    x = Activation(relu6, name=prefix + 'depthwise_relu')(x)

    # part 3: project (compress) the features with a 1x1 conv; no ReLU here (linear bottleneck), so the features are not destroyed
    x = Conv2D(pointwise_filters,
                kernel_initializer=RandomNormal(stddev=0.02),
                kernel_size=1,
                padding='same',
                use_bias=False,
                activation=None,
                name=prefix + 'project')(x)

    x = BatchNormalization(epsilon=1e-3, momentum=0.999, name=prefix + 'project_BN')(x)

    if in_channels == pointwise_filters and stride == 1:
        return Add(name=prefix + 'add')([inputs, x])
    return x

def MobileNetV2(inputs, alpha=1.0):
    if alpha not in [0.5, 0.75, 1.0, 1.3]:
        raise ValueError('Unsupported alpha - `{}` in MobilenetV2, Use 0.5, 0.75, 1.0, 1.3'.format(alpha))
    # stem
    first_block_filters = _make_divisible(32 * alpha, 8)
    x = ZeroPadding2D(padding=correct_pad(inputs, 3),
                             name='Conv1_pad')(inputs)
    # 416,416,3 -> 208,208,32
    x = Conv2D(first_block_filters,
                kernel_initializer=RandomNormal(stddev=0.02),
                kernel_size=3,
                strides=(2, 2),
                padding='valid',
                use_bias=False,
                name='Conv1')(x)
    x = BatchNormalization(epsilon=1e-3,
                            momentum=0.999,
                            name='bn_Conv1')(x)
    x = Activation(relu6, name='Conv1_relu')(x)

    # 208,208,32 -> 208,208,16
    x = _inverted_res_block(x, filters=16, alpha=alpha, stride=1,
                            expansion=1, block_id=0)
    # 208,208,16 -> 104,104,24
    x = _inverted_res_block(x, filters=24, alpha=alpha, stride=2,
                            expansion=6, block_id=1)
    x = _inverted_res_block(x, filters=24, alpha=alpha, stride=1,
                            expansion=6, block_id=2)

    # 104,104,24 -> 52,52,32
    x = _inverted_res_block(x, filters=32, alpha=alpha, stride=2,
                            expansion=6, block_id=3)
    x = _inverted_res_block(x, filters=32, alpha=alpha, stride=1,
                            expansion=6, block_id=4)
    x = _inverted_res_block(x, filters=32, alpha=alpha, stride=1,
                            expansion=6, block_id=5)
    feat1 = x

    # 52,52,32 -> 26,26,96
    x = _inverted_res_block(x, filters=64, alpha=alpha, stride=2,
                            expansion=6, block_id=6)
    x = _inverted_res_block(x, filters=64, alpha=alpha, stride=1,
                            expansion=6, block_id=7)
    x = _inverted_res_block(x, filters=64, alpha=alpha, stride=1,
                            expansion=6, block_id=8)
    x = _inverted_res_block(x, filters=64, alpha=alpha, stride=1,
                            expansion=6, block_id=9)
    x = _inverted_res_block(x, filters=96, alpha=alpha, stride=1,
                            expansion=6, block_id=10)
    x = _inverted_res_block(x, filters=96, alpha=alpha, stride=1,
                            expansion=6, block_id=11)
    x = _inverted_res_block(x, filters=96, alpha=alpha, stride=1,
                            expansion=6, block_id=12)
    feat2 = x

    # 26,26,96 -> 13,13,320
    x = _inverted_res_block(x, filters=160, alpha=alpha, stride=2,
                            expansion=6, block_id=13)
    x = _inverted_res_block(x, filters=160, alpha=alpha, stride=1,
                            expansion=6, block_id=14)
    x = _inverted_res_block(x, filters=160, alpha=alpha, stride=1,
                            expansion=6, block_id=15)
    x = _inverted_res_block(x, filters=320, alpha=alpha, stride=1,
                            expansion=6, block_id=16)
    feat3 = x

    return feat1,feat2,feat3
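
The original post gives no test block for V2; the following usage sketch (my own addition, mirroring the V1 example) builds the three feature maps on a 416×416 input:

if __name__ == "__main__":
    from tensorflow.keras.layers import Input
    from tensorflow.keras.models import Model

    inputs  = Input([416, 416, 3])
    outputs = MobileNetV2(inputs, alpha=1.0)   # feat1, feat2, feat3
    model   = Model(inputs, outputs)
    model.summary()   # expected feature shapes: 52x52x32, 26x26x96, 13x13x320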



3. MobileNetV3

from tensorflow.keras import backend
from tensorflow.keras.layers import (Activation, Add, BatchNormalization,
                                     Conv2D, DepthwiseConv2D,
                                     GlobalAveragePooling2D, Multiply, Reshape)
from tensorflow.keras.initializers import RandomNormal


def _activation(x, name='relu'):
    if name == 'relu':
        return Activation('relu')(x)
    elif name == 'hardswish':
        return hard_swish(x)

def hard_sigmoid(x):
    return backend.relu(x + 3.0, max_value=6.0) / 6.0

def hard_swish(x):
    return Multiply()([Activation(hard_sigmoid)(x), x])

def _make_divisible(v, divisor=8, min_value=None):
    if min_value is None:
        min_value = divisor
    new_v = max(min_value, int(v + divisor / 2) // divisor * divisor)
    # Make sure that round down does not go down by more than 10%.
    if new_v < 0.9 * v:
        new_v += divisor
    return new_v

def _bneck(inputs, expansion, out_ch, alpha, kernel_size, stride, se_ratio, activation,
           block_id):
    channel_axis = 1 if backend.image_data_format() == 'channels_first' else -1

    in_channels = backend.int_shape(inputs)[channel_axis]
    out_channels = _make_divisible(out_ch * alpha, 8)
    exp_size = _make_divisible(in_channels * expansion, 8)
    x = inputs
    prefix = 'expanded_conv/'
    if block_id:
        # Expand
        prefix = 'expanded_conv_{}/'.format(block_id)
        x = Conv2D(exp_size,
                    kernel_initializer=RandomNormal(stddev=0.02),
                    kernel_size=1,
                    padding='same',
                    use_bias=False,
                    name=prefix + 'expand')(x)
        x = BatchNormalization(axis=channel_axis,
                                      name=prefix + 'expand/BatchNorm')(x)
        x = _activation(x, activation)

    x = DepthwiseConv2D(kernel_size,
                        depthwise_initializer=RandomNormal(stddev=0.02),
                        strides=stride,
                        padding='same',
                        dilation_rate=1,
                        use_bias=False,
                        name=prefix + 'depthwise')(x)
    x = BatchNormalization(axis=channel_axis,
                                  name=prefix + 'depthwise/BatchNorm')(x)
    x = _activation(x, activation)

    if se_ratio:
        reduced_ch = _make_divisible(exp_size * se_ratio, 8)
        y = GlobalAveragePooling2D(name=prefix + 'squeeze_excite/AvgPool')(x)
        y = Reshape([1, 1, exp_size], name=prefix + 'reshape')(y)
        y = Conv2D(reduced_ch,
                    kernel_initializer=RandomNormal(stddev=0.02),
                    kernel_size=1,
                    padding='same',
                    use_bias=True,
                    name=prefix + 'squeeze_excite/Conv')(y)
        y = Activation("relu", name=prefix + 'squeeze_excite/Relu')(y)
        y = Conv2D(exp_size,
                    kernel_initializer=RandomNormal(stddev=0.02),
                    kernel_size=1,
                    padding='same',
                    use_bias=True,
                    name=prefix + 'squeeze_excite/Conv_1')(y)
        x = Multiply(name=prefix + 'squeeze_excite/Mul')([Activation(hard_sigmoid)(y), x])

    x = Conv2D(out_channels,
                kernel_initializer=RandomNormal(stddev=0.02),
                kernel_size=1,
                padding='same',
                use_bias=False,
                name=prefix + 'project')(x)
    x = BatchNormalization(axis=channel_axis,
                                  name=prefix + 'project/BatchNorm')(x)

    if in_channels == out_channels and stride == 1:
        x = Add(name=prefix + 'Add')([inputs, x])
    return x

def MobileNetV3(inputs, alpha=1.0, kernel=5, se_ratio=0.25):
    if alpha not in [0.75, 1.0]:
        raise ValueError('Unsupported alpha - `{}` in MobilenetV3, Use 0.75, 1.0.'.format(alpha))
    # 416,416,3 -> 208,208,16
    x = Conv2D(16,kernel_size=3,strides=(2, 2),padding='same',
                    kernel_initializer=RandomNormal(stddev=0.02),
                    use_bias=False,
                    name='Conv')(inputs)
    x = BatchNormalization(axis=-1,
                            epsilon=1e-3,
                            momentum=0.999,
                            name='Conv/BatchNorm')(x)
    x = Activation(hard_swish)(x)

    # 208,208,16 -> 208,208,16
    x = _bneck(x, 1, 16, alpha, 3, 1, None, 'relu', 0)

    # 208,208,16 -> 104,104,24
    x = _bneck(x, 4, 24, alpha, 3, 2, None, 'relu', 1)
    x = _bneck(x, 3, 24, alpha, 3, 1, None, 'relu', 2)
    
    # 104,104,24 -> 52,52,40
    x = _bneck(x, 3, 40, alpha, kernel, 2, se_ratio, 'relu', 3)
    x = _bneck(x, 3, 40, alpha, kernel, 1, se_ratio, 'relu', 4)
    x = _bneck(x, 3, 40, alpha, kernel, 1, se_ratio, 'relu', 5)
    feat1 = x

    # 52,52,40 -> 26,26,112
    x = _bneck(x, 6, 80, alpha, 3, 2, None, 'hardswish', 6)
    x = _bneck(x, 2.5, 80, alpha, 3, 1, None, 'hardswish', 7)
    x = _bneck(x, 2.3, 80, alpha, 3, 1, None, 'hardswish', 8)
    x = _bneck(x, 2.3, 80, alpha, 3, 1, None, 'hardswish', 9)
    x = _bneck(x, 6, 112, alpha, 3, 1, se_ratio, 'hardswish', 10)
    x = _bneck(x, 6, 112, alpha, 3, 1, se_ratio, 'hardswish', 11)
    feat2 = x

    # 26,26,112 -> 13,13,160
    x = _bneck(x, 6, 160, alpha, kernel, 2, se_ratio, 'hardswish', 12)
    x = _bneck(x, 6, 160, alpha, kernel, 1, se_ratio, 'hardswish', 13)
    x = _bneck(x, 6, 160, alpha, kernel, 1, se_ratio, 'hardswish', 14)
    feat3 = x

    return feat1,feat2,feat3
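
Likewise, a usage sketch for V3 (my own addition, not in the original post):

if __name__ == "__main__":
    from tensorflow.keras.layers import Input
    from tensorflow.keras.models import Model

    inputs  = Input([416, 416, 3])
    outputs = MobileNetV3(inputs, alpha=1.0)   # feat1, feat2, feat3
    model   = Model(inputs, outputs)
    model.summary()   # expected feature shapes: 52x52x40, 26x26x112, 13x13x160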


Summary

V1

  • Depthwise separable convolution: a DW + PW combination replaces standard convolution to reduce the number of parameters
  • Hyperparameters (width/resolution multipliers): change the number of input/output channels and the feature-map size

V2

  • Linear bottleneck structure
  • Inverted residual structure

V3

  • h-swish activation function
  • SENet (squeeze-and-excitation) structure