SqueezeNet Explained, with a PyTorch Implementation

Original paper: SQUEEZENET: ALEXNET-LEVEL ACCURACY WITH 50X FEWER PARAMETERS AND <0.5MB MODEL SIZE

Abstract

Most research on CNNs has focused on improving accuracy; this paper instead focuses on smaller CNN architectures. A smaller architecture has the following advantages:

  • More efficient distributed training (less communication between servers)
  • Less overhead when shipping new models to clients (making frequent updates practical)
  • More feasible FPGA deployment (FPGAs have limited on-chip memory)

Design Strategies

  • Replace 3x3 filters with 1x1 filters, since a 1x1 filter has 9x fewer parameters than a 3x3 filter (see the sketch after this list).
  • Decrease the number of input channels to 3x3 filters; the total parameter count of a 3x3 conv layer is (number of input channels) x (number of filters) x 3 x 3.
  • Downsample late in the network, so that early layers have large activation maps; features learned on larger maps are richer, which can improve classification accuracy.
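
As a quick sketch of Strategy 1 in numbers (the layer sizes here are illustrative, not taken from the paper), two otherwise identical conv layers differ by exactly 9x in weight count:

import torch.nn as nn

# Same in/out channels, biases disabled so only kernel weights are counted.
conv3x3 = nn.Conv2d(64, 64, kernel_size=3, bias=False)
conv1x1 = nn.Conv2d(64, 64, kernel_size=1, bias=False)
print(sum(p.numel() for p in conv3x3.parameters()))  # 36864 = 64 * 64 * 3 * 3
print(sum(p.numel() for p in conv1x1.parameters()))  # 4096  = 64 * 64 * 1 * 1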

The Fire module is designed to satisfy these strategies

[Figure: the Fire module]

  • The liberal use of 1x1 filters in the Fire module is an application of Strategy 1.
  • Setting s1x1 (the number of squeeze filters) smaller than e1x1 + e3x3 (the number of expand filters) is an application of Strategy 2: the 3x3 expand filters see only the squeezed channels (a worked example follows this list).
  • Pooling layers are placed relatively late in the network, which is an application of Strategy 3.

[Figure: the SqueezeNet architecture]
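
To make Strategy 2 concrete, take the second Fire module from the implementation below, Fire(128, 16, 64, 64): the squeeze layer reduces 128 input channels to s1x1 = 16 before the expand filters see them. Counting only the 3x3 expand weights:

inplanes, s1x1, e3x3 = 128, 16, 64
print(s1x1 * e3x3 * 3 * 3)      # 9216 weights with the squeeze layer
print(inplanes * e3x3 * 3 * 3)  # 73728 weights if fed directly, 8x more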

PyTorch Implementation, with Feature-Map Shapes at Each Stage

import torch
import numpy as np
import torch.nn as nn
from torch.nn import init

# A batch of 96 random 224x224 RGB images, used to trace feature-map shapes.
x = np.random.rand(96, 3, 224, 224)
x = torch.tensor(x, dtype=torch.float32)



class Fire(nn.Module):

    def __init__(self, inplanes, squeeze_planes,
                 expand1x1_planes, expand3x3_planes):
        super(Fire, self).__init__()
        self.inplanes = inplanes
        # Squeeze layer: 1x1 convs shrink the channel count (Strategies 1 and 2).
        self.squeeze = nn.Conv2d(inplanes, squeeze_planes, kernel_size=1)
        self.squeeze_activation = nn.ReLU(inplace=True)
        # Expand layer: parallel 1x1 and 3x3 convs whose outputs are concatenated.
        self.expand1x1 = nn.Conv2d(squeeze_planes, expand1x1_planes,
                                   kernel_size=1)
        self.expand1x1_activation = nn.ReLU(inplace=True)
        self.expand3x3 = nn.Conv2d(squeeze_planes, expand3x3_planes,
                                   kernel_size=3, padding=1)
        self.expand3x3_activation = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.squeeze_activation(self.squeeze(x))
        # Concatenate the two expand branches along the channel dimension.
        return torch.cat([
            self.expand1x1_activation(self.expand1x1(x)),
            self.expand3x3_activation(self.expand3x3(x))
        ], 1)
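
# Quick sanity check of the Fire module (an illustrative addition, not part
# of the original listing): the first Fire module of SqueezeNet 1.0 maps 96
# input channels to 64 + 64 = 128 output channels at the same spatial size.
fire = Fire(96, 16, 64, 64)
print(fire(torch.randn(1, 96, 54, 54)).shape)  # torch.Size([1, 128, 54, 54])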


class SqueezeNet(nn.Module):

    def __init__(self, version=1.0, num_classes=1000):
        super(SqueezeNet, self).__init__()
        if version not in [1.0, 1.1]:
            raise ValueError("Unsupported SqueezeNet version {version}: "
                             "1.0 or 1.1 expected".format(version=version))
        self.num_classes = num_classes
        if version == 1.0:
            self.features = nn.Sequential(
                nn.Conv2d(3, 96, kernel_size=7, stride=2),  # (96, 96, 109, 109)
                nn.ReLU(inplace=True),
                nn.MaxPool2d(kernel_size=3, stride=2, ceil_mode=True),  # (96, 96, 54, 54)
                Fire(96, 16, 64, 64),  # (96, 128, 54, 54)
                Fire(128, 16, 64, 64),  # (96, 128, 54, 54)
                Fire(128, 32, 128, 128),  # (96, 256, 54, 54)
                nn.MaxPool2d(kernel_size=3, stride=2, ceil_mode=True),  # (96, 256, 27, 27)
                Fire(256, 32, 128, 128),  # (96, 256, 27, 27)
                Fire(256, 48, 192, 192),  # (96, 384, 27, 27)
                Fire(384, 48, 192, 192),  # (96, 384, 27, 27)
                Fire(384, 64, 256, 256),  # (96, 512, 27, 27)
                nn.MaxPool2d(kernel_size=3, stride=2, ceil_mode=True),  # (96, 512, 13, 13)
                Fire(512, 64, 256, 256),  # (96, 512, 13, 13)
            )
        else:
            self.features = nn.Sequential(
                nn.Conv2d(3, 64, kernel_size=3, stride=2),
                nn.ReLU(inplace=True),
                nn.MaxPool2d(kernel_size=3, stride=2, ceil_mode=True),
                Fire(64, 16, 64, 64),
                Fire(128, 16, 64, 64),
                nn.MaxPool2d(kernel_size=3, stride=2, ceil_mode=True),
                Fire(128, 32, 128, 128),
                Fire(256, 32, 128, 128),
                nn.MaxPool2d(kernel_size=3, stride=2, ceil_mode=True),
                Fire(256, 48, 192, 192),
                Fire(384, 48, 192, 192),
                Fire(384, 64, 256, 256),
                Fire(512, 64, 256, 256),
            )
        # Final convolution is initialized differently from the rest
        final_conv = nn.Conv2d(512, self.num_classes, kernel_size=1)
        self.classifier = nn.Sequential(
            nn.Dropout(p=0.5),
            final_conv,
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d((1, 1))
        )

        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                if m is final_conv:
                    init.normal_(m.weight, mean=0.0, std=0.01)
                else:
                    init.kaiming_uniform_(m.weight)
                if m.bias is not None:
                    init.constant_(m.bias, 0)

    def forward(self, x):
        x = self.features(x)
        x = self.classifier(x)
        # Flatten (N, num_classes, 1, 1) to (N, num_classes); view() is not
        # in-place, so its result must be returned (the original dropped it).
        return x.view(x.size(0), self.num_classes)


# Run a forward pass and confirm the output shape.
model = SqueezeNet()
x = model(x)
print(x.shape)  # torch.Size([96, 1000])
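
The paper's title promises AlexNet-level accuracy with 50x fewer parameters; a rough check of the count (AlexNet has about 61M parameters, so roughly 61M / 1.25M ≈ 49x):

n_params = sum(p.numel() for p in model.parameters())
print(n_params)  # about 1.25 million for version 1.0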