PyTorch cats-vs-dogs series:
Cats vs. Dogs 1: Training and testing on your own dataset
Cats vs. Dogs 2: AlexNet
Cats vs. Dogs 3: MobileNet_V1 & V2
Cats vs. Dogs 3: MobileNet_V3
TensorFlow 2.0 cats-vs-dogs series:
Cats vs. Dogs 1: Creating and reading record data
Cats vs. Dogs 2: Training and saving the model
Table of Contents
1. MobileNet theory overview
For the theory, see my other blog post: MobileNet notes (link there).
2. Depthwise separable convolution and the MobileNet_v1 classification network
Depthwise separable convolution greatly reduces the number of convolution parameters. In the figure below, the left side is a standard convolution and the right side is a depthwise separable convolution; here is the PyTorch implementation of both:
import torch.nn as nn

def conv_bn(inp, oup, stride):
    # Standard 3x3 convolution + BN + ReLU
    return nn.Sequential(
        nn.Conv2d(inp, oup, 3, stride, 1, bias=False),
        nn.BatchNorm2d(oup),
        nn.ReLU(inplace=True)
    )

def conv_dw(inp, oup, stride):
    return nn.Sequential(
        # 3x3 depthwise conv: groups=inp gives one filter per input channel
        nn.Conv2d(inp, inp, 3, stride, 1, groups=inp, bias=False),
        nn.BatchNorm2d(inp),
        nn.ReLU(inplace=True),
        # 1x1 pointwise conv: mixes information across channels
        nn.Conv2d(inp, oup, 1, 1, 0, bias=False),
        nn.BatchNorm2d(oup),
        nn.ReLU(inplace=True),
    )
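To see the savings concretely, here is a quick parameter count for a single 32-to-64-channel layer (a standalone sketch, not from the original post):

```python
import torch.nn as nn

inp, oup = 32, 64

# Standard 3x3 convolution: inp * oup * 3 * 3 weights
std = nn.Conv2d(inp, oup, 3, 1, 1, bias=False)

# Depthwise separable: 3x3 depthwise (inp * 3 * 3) + 1x1 pointwise (inp * oup)
dw = nn.Conv2d(inp, inp, 3, 1, 1, groups=inp, bias=False)
pw = nn.Conv2d(inp, oup, 1, 1, 0, bias=False)

std_params = sum(p.numel() for p in std.parameters())
sep_params = sum(p.numel() for p in dw.parameters()) + sum(p.numel() for p in pw.parameters())

print(std_params)  # 18432
print(sep_params)  # 2336, roughly 8x fewer
```

The ratio is approximately 1/oup + 1/9, so with a 3x3 kernel the separable version is always close to 8-9x cheaper in parameters.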
Now look at the MobileNet_v1 network structure.
Here is the PyTorch implementation. MobileNet also defines two hyperparameters, the width multiplier and the resolution multiplier; both are left at their default value of 1 here, i.e., the network is not scaled down.
import torch.nn as nn

class MobileNetV1(nn.Module):
    def __init__(self, num_classes=2):
        super(MobileNetV1, self).__init__()

        def conv_bn(inp, oup, stride):
            # Standard 3x3 convolution + BN + ReLU
            return nn.Sequential(
                nn.Conv2d(inp, oup, 3, stride, 1, bias=False),
                nn.BatchNorm2d(oup),
                nn.ReLU(inplace=True)
            )

        def conv_dw(inp, oup, stride):
            # Depthwise separable convolution: 3x3 depthwise + 1x1 pointwise
            return nn.Sequential(
                nn.Conv2d(inp, inp, 3, stride, 1, groups=inp, bias=False),
                nn.BatchNorm2d(inp),
                nn.ReLU(inplace=True),
                nn.Conv2d(inp, oup, 1, 1, 0, bias=False),
                nn.BatchNorm2d(oup),
                nn.ReLU(inplace=True),
            )

        self.model = nn.Sequential(
            conv_bn(3, 32, 2),
            conv_dw(32, 64, 1),
            conv_dw(64, 128, 2),
            conv_dw(128, 128, 1),
            conv_dw(128, 256, 2),
            conv_dw(256, 256, 1),
            conv_dw(256, 512, 2),
            conv_dw(512, 512, 1),
            conv_dw(512, 512, 1),
            conv_dw(512, 512, 1),
            conv_dw(512, 512, 1),
            conv_dw(512, 512, 1),
            conv_dw(512, 1024, 2),
            conv_dw(1024, 1024, 1),
            nn.AvgPool2d(7),
        )
        self.fc = nn.Linear(1024, num_classes)

    def forward(self, x):
        x = self.model(x)
        x = x.view(-1, 1024)
        x = self.fc(x)
        return x
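Note that the fixed nn.AvgPool2d(7) at the end implicitly assumes 224x224 inputs: the one stride-2 stem conv plus the five stride-2 depthwise blocks shrink the feature map from 224 down to exactly 7. A quick arithmetic check (a standalone sketch, pure Python):

```python
def out_size(n, stride, kernel=3, padding=1):
    # PyTorch conv output size: floor((n + 2*padding - kernel) / stride) + 1
    return (n + 2 * padding - kernel) // stride + 1

# Strides of the 3x3 convs above, in order: conv_bn then the 13 conv_dw blocks
# (the 1x1 pointwise convs are always stride 1 and don't change spatial size)
strides = [2, 1, 2, 1, 2, 1, 2, 1, 1, 1, 1, 1, 2, 1]
size = 224
for s in strides:
    size = out_size(size, s)
print(size)  # 7 -> AvgPool2d(7) reduces this to 1x1, giving the 1024-dim vector
```

With any other input resolution the pooled feature map would not be 1x1 and the view(-1, 1024) reshape would fail, so inputs must be resized to 224x224 first.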
On the cats-vs-dogs dataset this model easily reaches an accuracy of 0.98.
3. MobileNet_V2
3.1 Differences from MobileNet_v1
In short, v2 replaces v1's plain stack of depthwise separable blocks with inverted residual blocks that use linear bottlenecks. Adapted from https://blog.csdn.net/mzpmzk/article/details/82976871
3.2 Differences from ResNet
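The key structural difference can be sketched in code. The block below follows the structure described in the MobileNetV2 paper, not code from this post: a ResNet bottleneck goes wide -> narrow -> wide in channels, while v2's inverted residual goes narrow -> wide -> narrow, and drops the activation on the final 1x1 projection (the "linear bottleneck"):

```python
import torch
import torch.nn as nn

class InvertedResidual(nn.Module):
    """Sketch of MobileNetV2's inverted residual block (per the paper)."""
    def __init__(self, inp, oup, stride, expand_ratio):
        super().__init__()
        hidden = inp * expand_ratio
        # Skip connection only when the block preserves shape, as in ResNet
        self.use_res = stride == 1 and inp == oup
        self.conv = nn.Sequential(
            # 1x1 expansion: narrow -> wide
            nn.Conv2d(inp, hidden, 1, bias=False),
            nn.BatchNorm2d(hidden),
            nn.ReLU6(inplace=True),
            # 3x3 depthwise conv in the expanded space
            nn.Conv2d(hidden, hidden, 3, stride, 1, groups=hidden, bias=False),
            nn.BatchNorm2d(hidden),
            nn.ReLU6(inplace=True),
            # 1x1 linear projection: wide -> narrow, deliberately no ReLU
            nn.Conv2d(hidden, oup, 1, bias=False),
            nn.BatchNorm2d(oup),
        )

    def forward(self, x):
        return x + self.conv(x) if self.use_res else self.conv(x)

blk = InvertedResidual(32, 32, stride=1, expand_ratio=6)
out = blk(torch.randn(1, 32, 56, 56))
print(out.shape)  # torch.Size([1, 32, 56, 56])
```

Note also that v2 uses ReLU6 instead of ReLU, which behaves better under low-precision inference.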
torchvision includes an implementation of v2:
net = models.MobileNetV2(num_classes=2)
But after 50 epochs the accuracy only reached 0.78, although the loss was still going down.