0x00 Reference
CNNDetection/train.py at master · PeterWang512/CNNDetection
0x01 Preface
This series collects PyTorch implementation templates for common neural networks, together with code walkthroughs. This post covers ResNet. I recently needed the network for a competition, so I wrote this up quickly; it is still a bit rough and will be polished over time.
0x02 Code Walkthrough
Note: I start from the smallest building blocks and work outward, layer by layer.
- Convolution helpers used in the residual blocks
The residual blocks need two basic convolution layers, each written as a standalone function so they can be reused later. The number in the name refers to the kernel size; both wrap nn.Conv2d and differ only in kernel_size (and padding).
- conv1x1
```python
def conv1x1(in_planes, out_planes, stride=1):
    """1x1 convolution"""
    return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, bias=False)
```
- conv3x3
```python
def conv3x3(in_planes, out_planes, stride=1):
    """3x3 convolution with padding"""
    return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,
                     padding=1, bias=False)
```
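If it helps to see the effect on tensor shapes, here is a minimal sanity check, assuming the two helpers above (and `import torch.nn as nn`) are already in scope:
```python
import torch

x = torch.randn(1, 64, 56, 56)                  # (batch, channels, height, width)
print(conv3x3(64, 64)(x).shape)                 # torch.Size([1, 64, 56, 56])  padding=1 keeps H and W
print(conv3x3(64, 128, stride=2)(x).shape)      # torch.Size([1, 128, 28, 28]) stride=2 halves H and W
print(conv1x1(64, 256)(x).shape)                # torch.Size([1, 256, 56, 56]) only the channels change
```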
- Residual blocks
ResNet uses two kinds of residual blocks: the 18- and 34-layer variants use BasicBlock, while the 50-layer and deeper variants use Bottleneck. The main motivation is to keep the parameter count manageable as the network gets deeper. (I'll add a diagram later.)
Next, let's analyze the code for these two blocks.
- BasicBlock
First, the overall picture: each BasicBlock contains two conv3x3 layers, and both output the same number of channels.
Let's start with the parameters it receives and initializes:
```python
def __init__(self, inplanes, planes, stride=1, downsample=None):
    super(BasicBlock, self).__init__()
    self.conv1 = conv3x3(inplanes, planes, stride)
    self.bn1 = nn.BatchNorm2d(planes)
    self.relu = nn.ReLU(inplace=True)
    self.conv2 = conv3x3(planes, planes)
    self.bn2 = nn.BatchNorm2d(planes)
    self.downsample = downsample
    self.stride = stride
```
As you can see, creating a residual block requires the following arguments:
- inplanes: number of input channels of the first convolution in the block.
- planes: number of output channels of both convolutions in the block (the same for both, as explained above).
- stride: stride of the first convolution. It is set to 2 in the first block of a stage to halve the spatial resolution when the channel count doubles (each of the four stages doubles the channels of the previous one), and left at 1 otherwise.
- downsample: if None, the input is used as the shortcut unchanged; otherwise it should be a conv1x1 (plus BatchNorm) branch that reshapes the input to match the block output f(x), whose shape was changed by the stride and/or the channel increase, so the two can be added in the shortcut connection.
Next, how the block is assembled in forward:
```python
def forward(self, x):
    identity = x
    out = self.conv1(x)
    out = self.bn1(out)
    out = self.relu(out)
    out = self.conv2(out)
    out = self.bn2(out)
    if self.downsample is not None:
        identity = self.downsample(x)
    out += identity
    out = self.relu(out)
    return out
```
Reading this against the residual-block diagram: the input x passes through two 3x3 convolutions (the BatchNorm and ReLU layers in between do not change the shape), then we check whether the input needs to be reshaped. If so, downsample adjusts it first; if not, x is added directly to the output f(x).
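As a quick sketch of when downsample is needed (assuming BasicBlock and conv1x1 from the resnet.py later in this post are in scope): a block that doubles the channels and halves the spatial size must project its shortcut the same way.
```python
import torch
import torch.nn as nn

# First block of a new stage: 64 -> 128 channels, stride 2, so the shortcut
# needs a matching 1x1 projection (this mirrors what _make_layer builds below).
downsample = nn.Sequential(
    conv1x1(64, 128, stride=2),
    nn.BatchNorm2d(128),
)
block = BasicBlock(64, 128, stride=2, downsample=downsample)

x = torch.randn(1, 64, 56, 56)
print(block(x).shape)   # torch.Size([1, 128, 28, 28])
```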
- Bottleneck
First, the overall picture: each Bottleneck contains one conv3x3 sandwiched between two conv1x1 layers. The first two convolutions output planes channels, while the final conv1x1 expands the output to planes * expansion channels (4x).
Many people are confused by expansion. It simply lets the constructor keep a single set of channel counts in the BasicBlock style while the Bottleneck's final output is 4 times larger, which avoids duplicating code. Let's look at the parameters it receives and initializes:
```python
def __init__(self, inplanes, planes, stride=1, downsample=None):
    super(Bottleneck, self).__init__()
    self.conv1 = conv1x1(inplanes, planes)
    self.bn1 = nn.BatchNorm2d(planes)
    self.conv2 = conv3x3(planes, planes, stride)
    self.bn2 = nn.BatchNorm2d(planes)
    self.conv3 = conv1x1(planes, planes * self.expansion)
    self.bn3 = nn.BatchNorm2d(planes * self.expansion)
    self.relu = nn.ReLU(inplace=True)
    self.downsample = downsample
    self.stride = stride
```
The parameters have the same meaning as in BasicBlock, so they are not repeated here.
Its forward is assembled as follows:
```python
def forward(self, x):
    identity = x
    out = self.conv1(x)
    out = self.bn1(out)
    out = self.relu(out)
    out = self.conv2(out)
    out = self.bn2(out)
    out = self.relu(out)
    out = self.conv3(out)
    out = self.bn3(out)
    if self.downsample is not None:
        identity = self.downsample(x)
    out += identity
    out = self.relu(out)
    return out
```
The structure mirrors BasicBlock (compare with the diagram): three convolutions instead of two, with the same optional downsample applied to the shortcut before the addition.
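A small sketch of the 4x expansion (again assuming the classes from the resnet.py below are in scope): even with stride 1, the very first Bottleneck of ResNet-50's layer1 needs a shortcut projection, because 64 input channels must meet 64 * 4 = 256 output channels.
```python
import torch
import torch.nn as nn

# First Bottleneck of layer1 in ResNet-50: planes=64, output = planes * expansion = 256.
downsample = nn.Sequential(
    conv1x1(64, 64 * Bottleneck.expansion),
    nn.BatchNorm2d(64 * Bottleneck.expansion),
)
block = Bottleneck(64, 64, downsample=downsample)

x = torch.randn(1, 64, 56, 56)
print(block(x).shape)   # torch.Size([1, 256, 56, 56])
```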
- Residual stages
We have covered the residual blocks; how are they combined into stages? Let's continue.
"Each stage uses several residual blocks with the same number of output channels. The number of channels in the first stage is the same as the number of input channels. Since a max-pooling layer with stride 2 has already been used, there is no need to reduce the height and width. Each subsequent stage doubles the channel count of the previous stage in its first residual block and halves the height and width."
This is quoted from the book Dive into Deep Learning. In other words, there are two kinds of stages: the first one, and the three that follow it. (Note that the four stages are connected in series.)
- Stage construction: _make_layer
A stage is built from several residual blocks that all share the same output channel count; only in the first stage does that count match the input channel count. Here is the code:
```python
def _make_layer(self, block, planes, blocks, stride=1):
    downsample = None
    if stride != 1 or self.inplanes != planes * block.expansion:
        downsample = nn.Sequential(
            conv1x1(self.inplanes, planes * block.expansion, stride),
            nn.BatchNorm2d(planes * block.expansion),
        )
    layers = []
    layers.append(block(self.inplanes, planes, stride, downsample))
    self.inplanes = planes * block.expansion
    for _ in range(1, blocks):
        layers.append(block(self.inplanes, planes))
    return nn.Sequential(*layers)
```
As usual, let's look at the parameters first:
- block: one of the two residual block types analyzed above.
- planes: the base number of output channels for the convolutions in this stage.
- blocks: how many residual blocks the stage contains.
- stride: stride of the first block in the stage. It is left at 1 for the first stage; the last three stages double the channel count, so their first block uses stride 2 to halve the spatial size accordingly.
Now the construction logic (see the sketch below):
- First decide whether the shortcut input needs a projection: the first stage does not (for BasicBlock), while the last three do; with Bottleneck blocks even the first stage needs a channel projection, since 64 != 64 * 4.
- Create the first block of the stage separately, because it is special: in the last three stages it doubles the channels and uses stride 2.
- The remaining blocks are identical, so they are created in a loop.
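To see these three steps reflected in an actual stage, here is a small sketch using the resnet18 constructor defined later in this post (layers = [2, 2, 2, 2]):
```python
model = resnet18()
stage = model.layer2                        # second stage: 64 -> 128 channels

print(len(stage))                           # 2, from layers[1] = 2
print(stage[0].downsample is not None)      # True  -- first block projects the shortcut (stride 2)
print(stage[1].downsample is None)          # True  -- remaining blocks are plain 128 -> 128 blocks
```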
- The full network: ResNet
As usual, let's look at the parameters first:
```python
def __init__(self, block, layers, num_classes=1000, zero_init_residual=False)
```
- block: the residual block type, one of the two above.
- layers: a list of four integers giving the number of residual blocks in each of the four stages; this is what distinguishes the ResNet-x variants.
- num_classes: dimension of the final output (number of classes).
- zero_init_residual: if True, the weight of the last BatchNorm in every residual branch is initialized to zero, so each block initially behaves like an identity mapping (see the initialization code in the full source below).
This class assembles the whole ResNet and can be divided into three parts:
- Input section (the stem):
```python
# first section
self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False)
self.bn1 = nn.BatchNorm2d(64)
self.relu = nn.ReLU(inplace=True)
self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
```
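For a 224x224 RGB input, this stem brings the resolution down by 4x before any residual block runs; a minimal check (assuming the resnet18 constructor from the full source below):
```python
import torch

model = resnet18()
x = torch.randn(1, 3, 224, 224)
x = model.conv1(x)
print(x.shape)                                   # torch.Size([1, 64, 112, 112])  7x7 conv, stride 2
x = model.maxpool(model.relu(model.bn1(x)))
print(x.shape)                                   # torch.Size([1, 64, 56, 56])    3x3 max pool, stride 2
```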
- Residual stages section:
This part calls the _make_layer method we analyzed above. The four stages have base output channel counts [64, 128, 256, 512]; for 50 layers and deeper, the actual output channels are scaled by the block's expansion factor. The number of blocks in each stage comes from the layers argument.
```python
# second section
self.layer1 = self._make_layer(block, 64, layers[0])
self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
self.layer3 = self._make_layer(block, 256, layers[2], stride=2)
self.layer4 = self._make_layer(block, 512, layers[3], stride=2)
self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
```
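Tracing a feature map through the four stages makes the "channels double, resolution halves" pattern concrete; a sketch with ResNet-18 (ResNet-50 would show 256/512/1024/2048 channels instead because of expansion = 4):
```python
import torch

model = resnet18()
x = torch.randn(1, 3, 224, 224)
x = model.maxpool(model.relu(model.bn1(model.conv1(x))))   # stem output: (1, 64, 56, 56)
for name in ['layer1', 'layer2', 'layer3', 'layer4']:
    x = getattr(model, name)(x)
    print(name, tuple(x.shape))
# layer1 (1, 64, 56, 56)
# layer2 (1, 128, 28, 28)
# layer3 (1, 256, 14, 14)
# layer4 (1, 512, 7, 7)
```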
- Output section:
A single fully connected layer after global average pooling.
```python
# third section
self.fc = nn.Linear(512 * block.expansion, num_classes)
```
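The adaptive average pooling is what lets a single fc layer handle any input resolution: each channel's feature map is collapsed to one number, so fc always sees 512 * block.expansion features. A minimal sketch of that tail end (2048 = 512 * 4, as in ResNet-50):
```python
import torch
import torch.nn as nn

x = torch.randn(1, 2048, 7, 7)                  # e.g. the output of layer4 in ResNet-50
x = nn.AdaptiveAvgPool2d((1, 1))(x)             # -> (1, 2048, 1, 1)
x = x.view(x.size(0), -1)                       # flatten -> (1, 2048)
print(nn.Linear(2048, 1000)(x).shape)           # torch.Size([1, 1000])
```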
Below are several common ways to instantiate a ResNet:
```python
def resnet18(pretrained=False, **kwargs):
    """Constructs a ResNet-18 model.
    Args:
        pretrained (bool): If True, returns a model pre-trained on ImageNet
    """
    model = ResNet(BasicBlock, [2, 2, 2, 2], **kwargs)
    if pretrained:
        model.load_state_dict(model_zoo.load_url(model_urls['resnet18']))
    return model


def resnet34(pretrained=False, **kwargs):
    """Constructs a ResNet-34 model.
    Args:
        pretrained (bool): If True, returns a model pre-trained on ImageNet
    """
    model = ResNet(BasicBlock, [3, 4, 6, 3], **kwargs)
    if pretrained:
        model.load_state_dict(model_zoo.load_url(model_urls['resnet34']))
    return model


def resnet50(pretrained=False, **kwargs):
    """Constructs a ResNet-50 model.
    Args:
        pretrained (bool): If True, returns a model pre-trained on ImageNet
    """
    model = ResNet(Bottleneck, [3, 4, 6, 3], **kwargs)
    if pretrained:
        model.load_state_dict(model_zoo.load_url(model_urls['resnet50']))
    return model


def resnet101(pretrained=False, **kwargs):
    """Constructs a ResNet-101 model.
    Args:
        pretrained (bool): If True, returns a model pre-trained on ImageNet
    """
    model = ResNet(Bottleneck, [3, 4, 23, 3], **kwargs)
    if pretrained:
        model.load_state_dict(model_zoo.load_url(model_urls['resnet101']))
    return model


def resnet152(pretrained=False, **kwargs):
    """Constructs a ResNet-152 model.
    Args:
        pretrained (bool): If True, returns a model pre-trained on ImageNet
    """
    model = ResNet(Bottleneck, [3, 8, 36, 3], **kwargs)
    if pretrained:
        model.load_state_dict(model_zoo.load_url(model_urls['resnet152']))
    return model
```
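Note that any extra keyword arguments are forwarded to ResNet through **kwargs, so changing the classification head is just a matter of passing num_classes; for example, a hypothetical binary classifier:
```python
model = resnet50(num_classes=2)   # **kwargs forwards num_classes to ResNet.__init__
print(model.fc)                   # Linear(in_features=2048, out_features=2, bias=True)
```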
Resnet.py full source
That completes the walkthrough of the ResNet code. The full resnet.py template is attached below:
```python
import torch.nn as nn
import torch.utils.model_zoo as model_zoo
__all__ = ['ResNet', 'resnet18', 'resnet34', 'resnet50', 'resnet101',
'resnet152']
model_urls = {
'resnet18': 'https://download.pytorch.org/models/resnet18-5c106cde.pth',
'resnet34': 'https://download.pytorch.org/models/resnet34-333f7ec4.pth',
'resnet50': 'https://download.pytorch.org/models/resnet50-19c8e357.pth',
'resnet101': 'https://download.pytorch.org/models/resnet101-5d3b4d8f.pth',
'resnet152': 'https://download.pytorch.org/models/resnet152-b121ed2d.pth',
}
def conv3x3(in_planes, out_planes, stride=1):
"""3x3 convolution with padding"""
return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,
padding=1, bias=False)
def conv1x1(in_planes, out_planes, stride=1):
"""1x1 convolution"""
return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, bias=False)
class BasicBlock(nn.Module):
expansion = 1
def __init__(self, inplanes, planes, stride=1, downsample=None):
super(BasicBlock, self).__init__()
self.conv1 = conv3x3(inplanes, planes, stride)
self.bn1 = nn.BatchNorm2d(planes)
self.relu = nn.ReLU(inplace=True)
self.conv2 = conv3x3(planes, planes)
self.bn2 = nn.BatchNorm2d(planes)
self.downsample = downsample
self.stride = stride
def forward(self, x):
identity = x
out = self.conv1(x)
out = self.bn1(out)
out = self.relu(out)
out = self.conv2(out)
out = self.bn2(out)
if self.downsample is not None:
identity = self.downsample(x)
out += identity
out = self.relu(out)
return out
class Bottleneck(nn.Module):
expansion = 4
def __init__(self, inplanes, planes, stride=1, downsample=None):
super(Bottleneck, self).__init__()
self.conv1 = conv1x1(inplanes, planes)
self.bn1 = nn.BatchNorm2d(planes)
self.conv2 = conv3x3(planes, planes, stride)
self.bn2 = nn.BatchNorm2d(planes)
self.conv3 = conv1x1(planes, planes * self.expansion)
self.bn3 = nn.BatchNorm2d(planes * self.expansion)
self.relu = nn.ReLU(inplace=True)
self.downsample = downsample
self.stride = stride
def forward(self, x):
identity = x
out = self.conv1(x)
out = self.bn1(out)
out = self.relu(out)
out = self.conv2(out)
out = self.bn2(out)
out = self.relu(out)
out = self.conv3(out)
out = self.bn3(out)
if self.downsample is not None:
identity = self.downsample(x)
out += identity
out = self.relu(out)
return out
class ResNet(nn.Module):
def __init__(self, block, layers, num_classes=1000, zero_init_residual=False):
super(ResNet, self).__init__()
self.inplanes = 64
# first section
self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3,
bias=False)
self.bn1 = nn.BatchNorm2d(64)
self.relu = nn.ReLU(inplace=True)
self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
# second section
self.layer1 = self._make_layer(block, 64, layers[0])
self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
self.layer3 = self._make_layer(block, 256, layers[2], stride=2)
self.layer4 = self._make_layer(block, 512, layers[3], stride=2)
self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
# third section
self.fc = nn.Linear(512 * block.expansion, num_classes)
for m in self.modules():
if isinstance(m, nn.Conv2d):
nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
elif isinstance(m, nn.BatchNorm2d):
nn.init.constant_(m.weight, 1)
nn.init.constant_(m.bias, 0)
# Zero-initialize the last BN in each residual branch,
# so that the residual branch starts with zeros, and each residual block behaves like an identity.
# This improves the model by 0.2~0.3% according to https://arxiv.org/abs/1706.02677
if zero_init_residual:
for m in self.modules():
if isinstance(m, Bottleneck):
nn.init.constant_(m.bn3.weight, 0)
elif isinstance(m, BasicBlock):
nn.init.constant_(m.bn2.weight, 0)
def _make_layer(self, block, planes, blocks, stride=1):
downsample = None
if stride != 1 or self.inplanes != planes * block.expansion:
downsample = nn.Sequential(
conv1x1(self.inplanes, planes * block.expansion, stride),
nn.BatchNorm2d(planes * block.expansion),
)
layers = []
layers.append(block(self.inplanes, planes, stride, downsample))
self.inplanes = planes * block.expansion
for _ in range(1, blocks):
layers.append(block(self.inplanes, planes))
return nn.Sequential(*layers)
def forward(self, x):
x = self.conv1(x)
x = self.bn1(x)
x = self.relu(x)
x = self.maxpool(x)
x = self.layer1(x)
x = self.layer2(x)
x = self.layer3(x)
x = self.layer4(x)
x = self.avgpool(x)
x = x.view(x.size(0), -1)
x = self.fc(x)
return x
def resnet18(pretrained=False, **kwargs):
"""Constructs a ResNet-18 model.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
"""
model = ResNet(BasicBlock, [2, 2, 2, 2], **kwargs)
if pretrained:
model.load_state_dict(model_zoo.load_url(model_urls['resnet18']))
return model
def resnet34(pretrained=False, **kwargs):
"""Constructs a ResNet-34 model.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
"""
model = ResNet(BasicBlock, [3, 4, 6, 3], **kwargs)
if pretrained:
model.load_state_dict(model_zoo.load_url(model_urls['resnet34']))
return model
def resnet50(pretrained=False, **kwargs):
"""Constructs a ResNet-50 model.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
"""
model = ResNet(Bottleneck, [3, 4, 6, 3], **kwargs)
if pretrained:
model.load_state_dict(model_zoo.load_url(model_urls['resnet50']))
return model
def resnet101(pretrained=False, **kwargs):
"""Constructs a ResNet-101 model.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
"""
model = ResNet(Bottleneck, [3, 4, 23, 3], **kwargs)
if pretrained:
model.load_state_dict(model_zoo.load_url(model_urls['resnet101']))
return model
def resnet152(pretrained=False, **kwargs):
"""Constructs a ResNet-152 model.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
"""
model = ResNet(Bottleneck, [3, 8, 36, 3], **kwargs)
if pretrained:
model.load_state_dict(model_zoo.load_url(model_urls['resnet152']))
return model
```
0x03 Training with the Code
We have walked through the ResNet code, but many readers will ask: how do I actually use it? Below is a step-by-step outline of how to use the code above to train a ResNet for image classification.
- Import the ResNet constructor we need
```python
# Assuming the resnet.py above is placed in the networks folder
from networks.resnet import resnet50
```
- Instantiate a model
```python
model = resnet50()
```
- Train
```python
# create the optimizer once, before the training loop
optimizer = torch.optim.Adam(model.parameters(), lr=opt.lr, betas=(opt.beta1, 0.999))
# BCEWithLogitsLoss is a module, so instantiate it first
criterion = nn.BCEWithLogitsLoss()

# inside the training loop, for each batch (x, label):
optimizer.zero_grad()              # clear old gradients
output = model(x)                  # forward pass
loss = criterion(output, label)    # compute the loss
loss.backward()                    # backpropagate
optimizer.step()                   # update the parameters
```
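Putting the pieces together, here is a minimal, hypothetical training-loop sketch for binary classification. The DataLoader, the number of epochs, and the learning-rate settings are placeholders to replace with your own:
```python
import torch
import torch.nn as nn
from networks.resnet import resnet50   # assumes resnet.py is under networks/

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = resnet50(num_classes=1).to(device)          # single logit for binary classification
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.999))

model.train()
for epoch in range(num_epochs):                      # num_epochs / train_loader are placeholders
    for x, label in train_loader:                    # label shape: (batch,), values 0.0 or 1.0
        x, label = x.to(device), label.to(device).float()
        optimizer.zero_grad()
        output = model(x).squeeze(1)                 # (batch, 1) -> (batch,)
        loss = criterion(output, label)
        loss.backward()
        optimizer.step()
```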
0x04 Closing Remarks
I run a public WeChat account where I share the small problems and new findings I run into during research. Feel free to follow it and leave me a message!
Let's keep pushing forward and crack hard problems together~