MobileNet Network Study Notes

MobileNet Network

Preface
MobileNet is a lightweight network proposed by a Google team in 2017 and designed for mobile and embedded devices. Its main advantage is that it greatly reduces the number of model parameters and the amount of computation, at the cost of only a slight drop in accuracy.

MobileNet-V1

Highlights:
1. DW (depthwise) convolution
2. Two additional, manually set hyperparameters: α (the width multiplier) and β (the resolution multiplier)
In a DW convolution, the number of input feature-map channels equals the number of output feature-map channels.
Depthwise separable convolution (Depthwise Separable Conv)
consists of a DW (Depthwise Conv) convolution followed by a PW (Pointwise Conv) convolution.

(Figure: comparison of the computational cost of depthwise separable convolution and standard convolution)
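
To make the comparison concrete, here is a small sketch (my own illustration with assumed layer sizes, following the cost formulas from the MobileNet-V1 paper): a standard convolution costs Dk*Dk*M*N*Df*Df multiply-accumulate operations, while a depthwise separable convolution costs Dk*Dk*M*Df*Df + M*N*Df*Df, a reduction of roughly 1/N + 1/Dk^2.

# Sketch: MAC counts of a standard conv vs. a depthwise separable conv
# (the layer sizes below are assumed purely for illustration)
def conv_macs(Dk, M, N, Df):
    # standard conv: Dk x Dk kernel, M input channels, N output channels, Df x Df output map
    return Dk * Dk * M * N * Df * Df

def dw_separable_macs(Dk, M, N, Df):
    # depthwise conv followed by a 1x1 pointwise conv
    return Dk * Dk * M * Df * Df + M * N * Df * Df

Dk, M, N, Df = 3, 32, 64, 112
print(conv_macs(Dk, M, N, Df))          # 231211008
print(dw_separable_macs(Dk, M, N, Df))  # 29302784
print(dw_separable_macs(Dk, M, N, Df) / conv_macs(Dk, M, N, Df))  # ~0.127 = 1/N + 1/Dk**2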

(Figure: MobileNet-V1 network architecture)

MobileNet-V2

Highlights:
1. Inverted Residuals (inverted residual block)
2. Linear Bottlenecks

The activation function used in the inverted residual block is ReLU6:
y = ReLU6(x) = min(max(x, 0), 6)
A plain ReLU causes a large loss of information for low-dimensional features, which is why the last 1x1 convolution in each block uses a linear activation (the "linear bottleneck") instead.
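
As a quick check (my own sketch, not from the original notes), ReLU6 is simply ReLU clipped at 6 and can be reproduced with torch.clamp:

import torch
from torch import nn

x = torch.tensor([-2.0, 0.5, 3.0, 8.0])
print(nn.ReLU6()(x))                     # tensor([0.0000, 0.5000, 3.0000, 6.0000])
print(torch.clamp(x, min=0.0, max=6.0))  # identical: min(max(x, 0), 6)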
(Figure: MobileNet-V2 architecture table)
In the table, "bottleneck" refers to the inverted residual block.
s is the stride of the first bottleneck in each block sequence only; all the remaining bottlenecks use stride 1.

from torch import nn
import torch

def _make_divisible(ch, divisor=8, min_ch=None):
    if min_ch is None:
        min_ch = divisor
    new_ch = max(min_ch, int(ch + divisor / 2) // divisor * divisor)
    # Make sure that round down does not go down by more than 10%.
    if new_ch < 0.9 * ch:
        new_ch += divisor
    return new_ch
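
# Worked example (my own check, not part of the original notes):
# _make_divisible(32 * 0.75, 8) -> 24, _make_divisible(30, 8) -> 32,
# _make_divisible(17, 8) -> 16 (16 is not below 0.9 * 17 = 15.3, so no extra divisor is added).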

# conv-bn-relu6
# groups=1 gives a standard convolution; groups=in_channel gives a DW (depthwise) convolution
class ConvBNReLU(nn.Sequential):
    def __init__(self, in_channel, out_channel, kernel_size=3, stride=1, groups=1):
        padding = (kernel_size - 1) // 2
        super(ConvBNReLU, self).__init__(
            nn.Conv2d(in_channel, out_channel, kernel_size, stride, padding, groups=groups, bias=False),
            nn.BatchNorm2d(out_channel),
            nn.ReLU6(inplace=True)
        )
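
# For example (my own note): ConvBNReLU(32, 32, groups=32) builds a 3x3 DW conv,
# while ConvBNReLU(32, 64, kernel_size=1) builds a 1x1 PW (pointwise) conv, both followed by BN and ReLU6.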
# Inverted residual block
# expand_ratio = t = expansion factor
# hidden_channel = number of kernels in the first layer = t * k (k = number of input channels)
class InvertedResidual(nn.Module):
    def __init__(self, in_channel, out_channel, stride, expand_ratio):
        super(InvertedResidual, self).__init__()
        hidden_channel = in_channel * expand_ratio
        # A shortcut connection is used only when stride == 1 and the input and output feature maps have the same shape
        self.use_shortcut = stride == 1 and in_channel == out_channel

        layers = []
        # if the expansion factor t equals 1, skip the leading 1x1 conv; otherwise include it
        if expand_ratio != 1:
            # 1x1 pointwise conv
            layers.append(ConvBNReLU(in_channel, hidden_channel, kernel_size=1))
        layers.extend([
            # 3x3 depthwise conv
            # in a DW conv the input channel count equals the output channel count
            # groups=hidden_channel
            ConvBNReLU(hidden_channel, hidden_channel, stride=stride, groups=hidden_channel),
            # 1x1 pointwise conv (linear bottleneck: no activation function)
            nn.Conv2d(hidden_channel, out_channel, kernel_size=1, bias=False),
            nn.BatchNorm2d(out_channel),
        ])

        self.conv = nn.Sequential(*layers)

    def forward(self, x):
        if self.use_shortcut:
            return x + self.conv(x)
        else:
            return self.conv(x)
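
# Shortcut illustration (assumed values, my own note): InvertedResidual(24, 24, stride=1, expand_ratio=6)
# satisfies use_shortcut (stride 1, equal in/out channels), so it returns x + conv(x);
# InvertedResidual(24, 32, stride=2, expand_ratio=6) does not, so it returns conv(x) only.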
# MobileNetV2: alpha is the width multiplier that scales the number of kernels in each layer
class MobileNetV2(nn.Module):
    def __init__(self, num_classes=5, alpha=1.0, round_nearest=8):
        super(MobileNetV2, self).__init__()
        block = InvertedResidual
        # _make_divisible rounds the number of kernels to a multiple of round_nearest, presumably for more efficient use of the hardware
        input_channel = _make_divisible(32 * alpha, round_nearest)
        last_channel = _make_divisible(1280 * alpha, round_nearest)

        inverted_residual_setting = [
            # t: expansion factor, c: output channels, n: number of repeats, s: stride of the first bottleneck
            [1, 16, 1, 1],
            [6, 24, 2, 2],
            [6, 32, 3, 2],
            [6, 64, 4, 2],
            [6, 96, 3, 1],
            [6, 160, 3, 2],
            [6, 320, 1, 1],
        ]

        features = []
        # conv1 layer
        features.append(ConvBNReLU(3, input_channel, stride=2))
        # building inverted residual blocks
        for t, c, n, s in inverted_residual_setting:
            output_channel = _make_divisible(c * alpha, round_nearest)
            for i in range(n):
                stride = s if i == 0 else 1
                features.append(block(input_channel, output_channel, stride, expand_ratio=t))
                input_channel = output_channel
        # building last several layers
        features.append(ConvBNReLU(input_channel, last_channel, 1))
        # combine feature layers
        self.features = nn.Sequential(*features)

        # building classifier
        self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
        self.classifier = nn.Sequential(
            nn.Dropout(0.2),
            nn.Linear(last_channel, num_classes)
        )

        # weight initialization
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight, mode='fan_out')
                if m.bias is not None:
                    nn.init.zeros_(m.bias)
            elif isinstance(m, nn.BatchNorm2d):
                nn.init.ones_(m.weight)
                nn.init.zeros_(m.bias)
            elif isinstance(m, nn.Linear):
                # initialize from a normal distribution with mean 0 and standard deviation 0.01
                nn.init.normal_(m.weight, 0, 0.01)
                nn.init.zeros_(m.bias)

    def forward(self, x):
        x = self.features(x)
        x = self.avgpool(x)
        x = torch.flatten(x, 1)
        x = self.classifier(x)
        return x
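
A minimal usage sketch (my own addition; the 224x224 input size and the 5-class head follow the defaults above):

# quick sanity check of the model defined above
model = MobileNetV2(num_classes=5)
dummy = torch.randn(1, 3, 224, 224)  # a batch with one 224x224 RGB image
out = model(dummy)
print(out.shape)                     # torch.Size([1, 5])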