Paper link: https://arxiv.org/abs/1801.04381
The MobileNetV2 network was published in 2018. It keeps the depthwise separable convolutions of V1; its main innovations are Inverted Residuals and Linear Bottlenecks.
Linear Bottlenecks
This refers to using a linear transformation, rather than ReLU, at the end of the bottleneck module. The authors' analysis is that ReLU destroys information in the channels of a feature map; however, when there are many channels, the activation information may still be preserved in the other channels.
The authors argue that, under the condition that the manifold of interest lies in a low-dimensional subspace of the higher-dimensional activation space:
- If the manifold of interest remains non-zero after ReLU, then the ReLU acts as a linear transformation on it.
- ReLU is able to preserve the complete information of the input manifold.
The authors therefore propose the Linear Bottleneck, which effectively prevents too much information from being destroyed. Experiments indeed confirm this hypothesis: adding a non-linearity inside the bottleneck hurts performance.
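This intuition can be checked with a tiny experiment in the spirit of the paper's Figure 1 (a minimal sketch; the point count, the random projection matrices, and the use of a pseudo-inverse for the inverse mapping are my own illustrative assumptions): embed 2-D points into an n-dimensional space, apply ReLU there, and map back; with few channels much of the structure is destroyed, with many channels most of it survives.
import torch

torch.manual_seed(0)
x = torch.randn(2, 1000)          # 2-D "manifold of interest": 1000 points in 2 dimensions
for n in (3, 30):                 # embed into a low- vs. a high-dimensional activation space
    T = torch.randn(n, 2)         # random expansion matrix
    # apply ReLU in n dimensions, then project back with the pseudo-inverse of T
    x_rec = torch.linalg.pinv(T) @ torch.relu(T @ x)
    err = (x - x_rec).norm() / x.norm()
    print(f"n={n:2d}  relative reconstruction error: {err:.3f}")
# Typically the error is much larger for n=3 than for n=30: with many channels,
# information zeroed out by ReLU in some channels is still preserved in others.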
Inverted Residuals
(a) is the ordinary residual bottleneck module: the input feature map is first compressed by 1x1 and 3x3 convolutions and then expanded back to its original depth by a 1x1 convolution, and every convolution is followed by a ReLU non-linearity.
(b) is the inverted residual bottleneck module: the input feature map is first expanded by the 1x1 and 3x3 convolutions (i.e. the expansion factor t > 1; if t < 1 it degenerates into a conventional residual module) and then compressed back to its original depth by a 1x1 convolution, and this last 1x1 convolution is linear, which ensures that information is not lost.
For the inverted residual bottleneck module, the purpose of the shortcut is the same as in a conventional residual module: to improve gradient propagation. At the same time, the inverted design is considerably more memory-efficient and also slightly improves accuracy.
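As a rough shape-level illustration (a minimal sketch; the channel count 24 and the expansion factor t = 6 are my own example values, not a specific row of the paper's table), an inverted residual block first widens the channels with a 1x1 convolution, runs a 3x3 depthwise convolution on the wide representation, and then linearly projects back down:
import torch
import torch.nn as nn

in_ch, out_ch, t = 24, 24, 6            # example block: 24 -> 24 channels, expansion factor 6
mid_ch = in_ch * t                      # expanded ("inverted") width: 144 channels

expand = nn.Sequential(                 # 1x1 expansion + ReLU6
    nn.Conv2d(in_ch, mid_ch, 1, bias=False), nn.BatchNorm2d(mid_ch), nn.ReLU6())
depthwise = nn.Sequential(              # 3x3 depthwise conv on the expanded features + ReLU6
    nn.Conv2d(mid_ch, mid_ch, 3, padding=1, groups=mid_ch, bias=False),
    nn.BatchNorm2d(mid_ch), nn.ReLU6())
project = nn.Sequential(                # linear 1x1 projection back down (no ReLU afterwards)
    nn.Conv2d(mid_ch, out_ch, 1, bias=False), nn.BatchNorm2d(out_ch))

x = torch.randn(1, in_ch, 56, 56)
y = project(depthwise(expand(x)))
print(y.shape)                          # torch.Size([1, 24, 56, 56])
y = y + x                               # identity shortcut (stride 1 and in_ch == out_ch)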
Overall network structure
[Table: the overall MobileNetV2 architecture, from the paper]
where c denotes the number of output channels of the feature map, n the number of times the layer is repeated, and s the stride. ReLU6 is used as the non-linearity because it is more robust in low-precision computation. The output is produced with a fully convolutional layer rather than a softmax, and k is the number of target classes.
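ReLU6 simply clamps activations to the range [0, 6], i.e. min(max(x, 0), 6), which keeps activation magnitudes bounded and therefore easier to represent in low-precision arithmetic; a quick check:
import torch
import torch.nn.functional as F

x = torch.tensor([-3.0, 2.0, 8.0])
print(F.relu6(x))   # tensor([0., 2., 6.]) -- negative values are zeroed, large values capped at 6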
Implementation code:
import torch
import torch.nn as nn
import torch.nn.functional as F


class Block(nn.Module):
    '''expand + depthwise + pointwise'''
    def __init__(self, in_planes, out_planes, expansion, stride):
        super(Block, self).__init__()
        self.stride = stride

        planes = expansion * in_planes
        # 1x1 pointwise conv: expand the channels by the expansion factor
        self.conv1 = nn.Conv2d(in_planes, planes, kernel_size=1,
                               stride=1, padding=0, bias=False)
        self.bn1 = nn.BatchNorm2d(planes)
        # 3x3 depthwise conv (groups=planes) on the expanded feature map
        self.conv2 = nn.Conv2d(planes, planes, kernel_size=3,
                               stride=stride, padding=1, groups=planes,
                               bias=False)
        self.bn2 = nn.BatchNorm2d(planes)
        # 1x1 pointwise conv: linear projection back down (no ReLU afterwards)
        self.conv3 = nn.Conv2d(planes, out_planes, kernel_size=1,
                               stride=1, padding=0, bias=False)
        self.bn3 = nn.BatchNorm2d(out_planes)

        self.shortcut = nn.Sequential()
        if stride == 1 and in_planes != out_planes:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_planes, out_planes, kernel_size=1,
                          stride=1, padding=0, bias=False),
                nn.BatchNorm2d(out_planes),
            )

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = F.relu(self.bn2(self.conv2(out)))
        out = self.bn3(self.conv3(out))
        # residual shortcut only when the spatial size is unchanged (stride 1)
        out = out + self.shortcut(x) if self.stride == 1 else out
        return out


class MobileNetV2(nn.Module):
    # (expansion, out_planes, num_blocks, stride)
    cfg = [(1,  16, 1, 1),
           (6,  24, 2, 1),  # NOTE: change stride 2 -> 1 for CIFAR10
           (6,  32, 3, 2),
           (6,  64, 4, 2),
           (6,  96, 3, 1),
           (6, 160, 3, 2),
           (6, 320, 1, 1)]

    def __init__(self, num_classes=10):
        super(MobileNetV2, self).__init__()
        # NOTE: change conv1 stride 2 -> 1 for CIFAR10
        self.conv1 = nn.Conv2d(3, 32, kernel_size=3, stride=1,
                               padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(32)
        self.layers = self._make_layers(in_planes=32)
        self.conv2 = nn.Conv2d(320, 1280, kernel_size=1, stride=1,
                               padding=0, bias=False)
        self.bn2 = nn.BatchNorm2d(1280)
        self.linear = nn.Linear(1280, num_classes)

    def _make_layers(self, in_planes):
        layers = []
        for expansion, out_planes, num_blocks, stride in self.cfg:
            # the first block of each stage uses the configured stride, the rest use stride 1
            strides = [stride] + [1]*(num_blocks-1)
            for stride in strides:
                layers.append(
                    Block(in_planes, out_planes, expansion, stride))
                in_planes = out_planes
        return nn.Sequential(*layers)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.layers(out)
        out = F.relu(self.bn2(self.conv2(out)))
        # NOTE: change pooling kernel_size 7 -> 4 for CIFAR10
        out = F.avg_pool2d(out, 4)
        out = out.view(out.size(0), -1)
        out = self.linear(out)
        return out


def test():
    net = MobileNetV2()
    x = torch.randn(2, 3, 32, 32)
    y = net(x)
    print(y.size())

# test()
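A quick usage sketch (my own addition, assuming the classes above are defined in the same file): run a CIFAR10-sized batch through the network and count the parameters.
net = MobileNetV2()
y = net(torch.randn(2, 3, 32, 32))
print(y.size())                                   # torch.Size([2, 10])
print(sum(p.numel() for p in net.parameters()))   # total number of parameters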