MobileNet V1 (2017) / V2 (2018): Notes and Code

MobileNet V1

1. Depthwise separable convolution


  • Depthwise convolution: split the kernel into single-channel filters and convolve each channel of the feature map with its own filter

  • Pointwise convolution: use a 1x1 convolution to raise or lower the channel dimension of the feature map

  • Compared with a standard convolution, the computation drops to $\frac{1}{N}+\frac{1}{D_k^2}$ of the original; for 3x3 kernels that is roughly 8 to 9 times less computation (verified in the sketch after the table):

    Here $D_k$ is the kernel size, $M$ the number of input channels, $N$ the number of output channels, and $D_w \times D_h$ the output feature map size.

    |                                 | Parameters                          | Multiply-adds                                             |
    |---------------------------------|-------------------------------------|-----------------------------------------------------------|
    | Standard convolution            | $D_k \cdot D_k \cdot M \cdot N$     | $D_k \cdot D_k \cdot M \cdot N \cdot D_w \cdot D_h$       |
    | Depthwise separable convolution | $D_k \cdot D_k \cdot M + M \cdot N$ | $(D_k \cdot D_k \cdot M + M \cdot N) \cdot D_w \cdot D_h$ |
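
These counts can be checked directly in PyTorch; a minimal sketch, with layer sizes chosen arbitrarily for illustration:

import torch.nn as nn

M, N, Dk = 32, 64, 3  # input channels, output channels, kernel size

# standard convolution: Dk * Dk * M * N weights
std = nn.Conv2d(M, N, Dk, padding=1, bias=False)

# depthwise (Dk * Dk * M weights) + pointwise (M * N weights)
dw = nn.Conv2d(M, M, Dk, padding=1, groups=M, bias=False)
pw = nn.Conv2d(M, N, 1, bias=False)

print(sum(p.numel() for p in std.parameters()))   # 18432 = 3*3*32*64
print(sum(p.numel() for p in dw.parameters())
      + sum(p.numel() for p in pw.parameters()))  # 2336 = 3*3*32 + 32*64
# 2336 / 18432 ≈ 0.127 ≈ 1/N + 1/Dk^2 = 1/64 + 1/9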

2. Convolution block structure

  • DW + BN + ReLU6 + PW + BN + ReLU6

3. Network structure


  1. A 3x3 standard convolution with stride 2
  2. A stack of depthwise separable convolutions, some with stride 2 for downsampling
  3. Average pooling + fully connected layer + softmax

4. Model evaluation

  • The model's computation is concentrated mainly in the 1x1 convolutions
  • Model size and computation drop sharply while accuracy falls by only about 1%

5. PyTorch code

import torch.nn as nn


class MobileNet(nn.Module):
    def __init__(self):
        super(MobileNet, self).__init__()

        # standard 3x3 convolution + BN + ReLU6
        def conv_bn(inp, oup, stride):
            return nn.Sequential(
                nn.Conv2d(inp, oup, 3, stride, 1, bias=False),
                nn.BatchNorm2d(oup),
                nn.ReLU6(inplace=True)
            )

        # depthwise separable block: DW + BN + ReLU6 + PW + BN + ReLU6
        def conv_dw(inp, oup, stride):
            return nn.Sequential(
                # depthwise: one 3x3 filter per input channel (groups=inp)
                nn.Conv2d(inp, inp, 3, stride, 1, groups=inp, bias=False),
                nn.BatchNorm2d(inp),
                nn.ReLU6(inplace=True),

                # pointwise: 1x1 convolution to change the channel count
                nn.Conv2d(inp, oup, 1, 1, 0, bias=False),
                nn.BatchNorm2d(oup),
                nn.ReLU6(inplace=True),
            )

        self.model = nn.Sequential(
            conv_bn(  3,  32, 2), 
            conv_dw( 32,  64, 1),
            conv_dw( 64, 128, 2),
            conv_dw(128, 128, 1),
            conv_dw(128, 256, 2),
            conv_dw(256, 256, 1),
            conv_dw(256, 512, 2),
            conv_dw(512, 512, 1),
            conv_dw(512, 512, 1),
            conv_dw(512, 512, 1),
            conv_dw(512, 512, 1),
            conv_dw(512, 512, 1),
            conv_dw(512, 1024, 2),
            conv_dw(1024, 1024, 1),
            nn.AvgPool2d(7),
        )
        self.fc = nn.Linear(1024, 1000)

    def forward(self, x):
        x = self.model(x)
        x = x.view(-1, 1024)  # flatten the 1024x1x1 feature map
        x = self.fc(x)
        return x
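
A quick shape check for the network above; this assumes the standard 224x224 ImageNet input, which the fixed AvgPool2d(7) and view(-1, 1024) depend on:

import torch

net = MobileNet()
x = torch.randn(1, 3, 224, 224)   # batch of one 224x224 RGB image
print(net(x).shape)               # torch.Size([1, 1000])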

MobileNet V2

1. Linear bottleneck

  • Adopt an expand-convolve-project structure: a pointwise convolution is added in front of the original depthwise separable convolution to raise the dimensionality (the channel count expands to six times the input), so the depthwise convolution can extract features in a higher-dimensional space
  • Replace the last ReLU6 with a linear layer, to reduce the damage ReLU does to features in low-dimensional spaces (the paper discusses this; see the sketch below)
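
The paper's point about ReLU can be reproduced with a small experiment: embed 2-D points into n dimensions with a random matrix, apply ReLU there, and project back with the pseudo-inverse. Reconstruction error shrinks as n grows, which is why the nonlinearity is kept in the expanded (high-dimensional) space and dropped after the projection back down. A minimal sketch of that experiment, not code from the paper:

import torch

torch.manual_seed(0)
x = torch.randn(1000, 2)                     # points in a low-dimensional space
for n in [3, 5, 15, 30]:
    T = torch.randn(n, 2)                    # random embedding into n dimensions
    y = torch.relu(x @ T.T)                  # ReLU applied in the n-dim space
    x_rec = y @ torch.linalg.pinv(T).T       # project back with the pseudo-inverse
    print(n, (x - x_rec).pow(2).mean().item())  # error drops as n grows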

2. Inverted residual structure

  • The channel progression is the opposite of ResNet's residual block:
    • ResNet reduces the dimensionality first, convolves, then expands it back
    • MobileNet V2 expands the dimensionality first, convolves, then projects it back down
  • A ResNet-style shortcut connection is used so features can be reused

3. Convolution block structure

  • A shortcut is added when the stride is 1 (and input and output channels match); no shortcut when the stride is 2

4. Model structure

The per-stage configuration (Table 2 in the paper) is reproduced as the inverted_residual_setting list in the code below; each row is read as:

  • t: expansion factor applied to the input channels
  • n: number of times the block is repeated
  • c: number of output channels
  • s: stride of the first block in each sequence (the remaining repeats use stride 1)
  • A 1x1 convolution replaces the final fully connected layer (equivalent to a linear layer after global average pooling, which is what the code below uses)

5. Model evaluation

On ImageNet the paper reports MobileNetV2 matching or beating MobileNetV1, ShuffleNet, and NASNet-A at a comparable or lower multiply-add budget (the comparison table is omitted here).

6. PyTorch code

import torch.nn as nn
import math


def conv_bn(inp, oup, stride):
    return nn.Sequential(
        nn.Conv2d(inp, oup, 3, stride, 1, bias=False),
        nn.BatchNorm2d(oup),
        nn.ReLU6(inplace=True)
    )


def conv_1x1_bn(inp, oup):
    return nn.Sequential(
        nn.Conv2d(inp, oup, 1, 1, 0, bias=False),
        nn.BatchNorm2d(oup),
        nn.ReLU6(inplace=True)
    )


class InvertedResidual(nn.Module):
    def __init__(self, inp, oup, stride, expand_ratio):
        super(InvertedResidual, self).__init__()
        self.stride = stride
        assert stride in [1, 2]

        hidden_dim = round(inp * expand_ratio)
        self.use_res_connect = self.stride == 1 and inp == oup

        if expand_ratio == 1:
            self.conv = nn.Sequential(
                # dw
                nn.Conv2d(hidden_dim, hidden_dim, 3, stride, 1, groups=hidden_dim, bias=False),
                nn.BatchNorm2d(hidden_dim),
                nn.ReLU6(inplace=True),
                # pw-linear
                nn.Conv2d(hidden_dim, oup, 1, 1, 0, bias=False),
                nn.BatchNorm2d(oup),
            )
        else:
            self.conv = nn.Sequential(
                # pw
                nn.Conv2d(inp, hidden_dim, 1, 1, 0, bias=False),
                nn.BatchNorm2d(hidden_dim),
                nn.ReLU6(inplace=True),
                # dw
                nn.Conv2d(hidden_dim, hidden_dim, 3, stride, 1, groups=hidden_dim, bias=False),
                nn.BatchNorm2d(hidden_dim),
                nn.ReLU6(inplace=True),
                # pw-linear
                nn.Conv2d(hidden_dim, oup, 1, 1, 0, bias=False),
                nn.BatchNorm2d(oup),
            )

    def forward(self, x):
        if self.use_res_connect:
            return x + self.conv(x)
        else:
            return self.conv(x)


class MobileNetV2(nn.Module):
    def __init__(self, n_class=1000, input_size=224, width_mult=1.):
        super(MobileNetV2, self).__init__()
        block = InvertedResidual
        input_channel = 32
        last_channel = 1280
        inverted_residual_setting = [
            # t, c, n, s
            [1, 16, 1, 1],
            [6, 24, 2, 2],
            [6, 32, 3, 2],
            [6, 64, 4, 2],
            [6, 96, 3, 1],
            [6, 160, 3, 2],
            [6, 320, 1, 1],
        ]

        # building first layer
        assert input_size % 32 == 0
        input_channel = int(input_channel * width_mult)
        self.last_channel = int(last_channel * width_mult) if width_mult > 1.0 else last_channel
        self.features = [conv_bn(3, input_channel, 2)]
        # building inverted residual blocks
        for t, c, n, s in inverted_residual_setting:
            output_channel = int(c * width_mult)
            for i in range(n):
                if i == 0:
                    self.features.append(block(input_channel, output_channel, s, expand_ratio=t))
                else:
                    self.features.append(block(input_channel, output_channel, 1, expand_ratio=t))
                input_channel = output_channel
        # building last several layers
        self.features.append(conv_1x1_bn(input_channel, self.last_channel))
        # make it nn.Sequential
        self.features = nn.Sequential(*self.features)

        # building classifier
        self.classifier = nn.Sequential(
            nn.Dropout(0.2),
            nn.Linear(self.last_channel, n_class),
        )

        self._initialize_weights()

    def forward(self, x):
        x = self.features(x)
        x = x.mean(3).mean(2)  # global average pooling over H and W
        x = self.classifier(x)
        return x

    def _initialize_weights(self):
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                # He (Kaiming) initialization, fan-out mode
                n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
                m.weight.data.normal_(0, math.sqrt(2. / n))
                if m.bias is not None:
                    m.bias.data.zero_()
            elif isinstance(m, nn.BatchNorm2d):
                m.weight.data.fill_(1)
                m.bias.data.zero_()
            elif isinstance(m, nn.Linear):
                n = m.weight.size(1)
                m.weight.data.normal_(0, 0.01)
                m.bias.data.zero_()
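
As with V1, a quick forward pass verifies the output shape (again assuming a 224x224 input):

import torch

net = MobileNetV2(n_class=1000, input_size=224, width_mult=1.)
x = torch.randn(1, 3, 224, 224)   # batch of one 224x224 RGB image
print(net(x).shape)               # torch.Size([1, 1000])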
