Classic Convolutional Models (1): ResNet Code Walkthrough

ResNet

The motivation: as more layers are stacked, networks become harder to optimize with gradient descent and accuracy can even degrade, so ResNet introduces a residual structure.

The principle of the residual network is "healthy gradient + vanished gradient = healthy gradient": as long as the gradient through the identity path is healthy, adding the (possibly vanished) gradient from the stacked layers still gives a healthy value, which keeps gradients propagating backward properly.
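Concretely, if a residual block computes y = x + F(x), the chain rule gives

$$\frac{\partial \mathcal{L}}{\partial x} = \frac{\partial \mathcal{L}}{\partial y}\left(1 + \frac{\partial F}{\partial x}\right)$$

Even if ∂F/∂x is close to zero, the constant 1 contributed by the identity shortcut keeps the overall gradient from vanishing.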

ResNet defines two kinds of residual block, BasicBlock and Bottleneck, used at different depths of ResNet (the full code appears at the end of this post).

BasicBlock

A two-layer residual block, used in resnet18/34.
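The block below relies on a conv3x3 helper (and _make_layer further down on conv1x1). In torchvision's resnet.py these are thin wrappers around nn.Conv2d; for a self-contained read:

import torch
import torch.nn as nn

def conv3x3(in_planes, out_planes, stride=1):
    # 3x3 convolution with padding=1; bias is folded into the following BN
    return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,
                     padding=1, bias=False)

def conv1x1(in_planes, out_planes, stride=1):
    # 1x1 convolution, used for channel projection in the downsample branch
    return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride,
                     bias=False)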
The code:

class BasicBlock(nn.Module):
    """
    inplanes is the number of input channels, planes the number of output
    channels, and expansion the multiplier applied to the output channel count.
    """
    expansion = 1

    def __init__(self, inplanes, planes, stride=1, downsample=None):
        super(BasicBlock, self).__init__()
        self.conv1 = conv3x3(inplanes, planes, stride)
        self.bn1 = nn.BatchNorm2d(planes)
        self.relu = nn.ReLU(inplace=True)
        self.conv2 = conv3x3(planes, planes)
        self.bn2 = nn.BatchNorm2d(planes)
        self.downsample = downsample
        self.stride = stride

    # First residual = x saves the input; x then passes through conv1+bn1+relu
    # and conv2+bn2 to produce out, and the shortcut is added with
    # out += residual before the final ReLU.
    def forward(self, x):
        residual = x
        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)
        out = self.conv2(out)
        out = self.bn2(out)
        if self.downsample is not None:
            residual = self.downsample(x)
        out += residual
        out = self.relu(out)
        return out
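A quick shape check (a sketch, assuming the imports and conv3x3 helper above): when inplanes == planes and stride == 1, no downsample is needed and the block preserves the input shape:

block = BasicBlock(64, 64)          # shapes already match, so downsample=None
x = torch.randn(1, 64, 56, 56)
print(block(x).shape)               # torch.Size([1, 64, 56, 56])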

Bottleneck

A three-layer residual block, used in resnet50/101/152.
The first 1x1 convolution reduces the channel dimension, the 3x3 convolution does the actual processing, and the last 1x1 convolution expands the channels back, which greatly reduces the amount of computation.

The first 1x1 convolution reduces the 256 channels to 64, and the final 1x1 convolution restores them, so the total parameter count is 1x1x256x64 + 3x3x64x64 + 1x1x64x256 = 69632. Without the bottleneck, two 3x3 convolutions at 256 channels would need 3x3x256x256x2 = 1179648 parameters, about 16.94x more.
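This arithmetic is easy to verify directly (a quick sketch; BN parameters are ignored):

bottleneck = 1*1*256*64 + 3*3*64*64 + 1*1*64*256  # 1x1 reduce + 3x3 + 1x1 restore
plain = 3*3*256*256 * 2                           # two 3x3 convs at 256 channels
print(bottleneck, plain, plain / bottleneck)      # 69632 1179648 16.94...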


class Bottleneck(nn.Module):
    """
    Difference from BasicBlock:
    three convolutions first compress the channel count and then expand it:
    conv1 kernel (1*1): [inplanes, planes]
    conv2 kernel (3*3): [planes, planes]
    conv3 kernel (1*1): [planes, planes * self.expansion]  # expansion = 4
    Note:
    every Conv2d sets bias=False because each convolution is followed by BN;
    the beta (shift) added in BN's affine transform plays the role of the bias.
    """
    expansion = 4

    def __init__(self, inplanes, planes, stride=1, downsample=None):
        super(Bottleneck, self).__init__()
        self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)
        self.bn1 = nn.BatchNorm2d(planes)
        self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(planes)
        self.conv3 = nn.Conv2d(planes, planes * self.expansion, kernel_size=1, bias=False)
        self.bn3 = nn.BatchNorm2d(planes * self.expansion)
        self.relu = nn.ReLU(inplace=True)
        self.downsample = downsample
        self.stride = stride

    def forward(self, x):
        residual = x
        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)
        out = self.conv2(out)
        out = self.bn2(out)
        out = self.relu(out)
        out = self.conv3(out)
        out = self.bn3(out)
        if self.downsample is not None:
            residual = self.downsample(x)
        out += residual
        out = self.relu(out)
        return out
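As a shape check (same assumptions as the BasicBlock example): with inplanes = 256 and planes = 64, the channels are squeezed to 64 inside the block and expanded back to 64 * 4 = 256 at the output, so the identity shortcut lines up without a downsample:

block = Bottleneck(256, 64)
x = torch.randn(1, 256, 56, 56)
print(block(x).shape)    # torch.Size([1, 256, 56, 56]); channels go 256 -> 64 -> 64 -> 256 inside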

The ResNet network

  1. conv1 + bn1 + relu + maxpool: (224,224,3) -> (112,112,64) -> (56,56,64)
    conv1 has a 7*7 kernel with stride=2 and padding=3, taking the channels from 3 to 64; maxpool then downsamples by another factor of 2.

      self.inplanes = 64
      self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3,bias=False) 
      self.bn1 = nn.BatchNorm2d(64)
      self.relu = nn.ReLU(inplace=True)
      self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)  # 2x downsampling
    
  2. Then build layer1-layer4

    self.layer1 = self._make_layer(block, 64, layers[0])
    self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
    self.layer3 = self._make_layer(block, 256, layers[2], stride=2)
    self.layer4 = self._make_layer(block, 512, layers[3], stride=2)
    

    layers is the number of blocks stacked in each stage, e.g. for resnet50: [3, 4, 6, 3]

    def resnet50(pretrained=False, **kwargs):
        model = ResNet(Bottleneck, [3, 4, 6, 3], **kwargs)
        return model
    

    _make_layer is built as follows:

    # block: the residual block type, BasicBlock or Bottleneck
    # planes: the base channel count of the current block (its output has planes * block.expansion channels)
    # blocks: the number of blocks to stack
    def _make_layer(self, block, planes, blocks, stride=1):
        """
        The role of downsample:
        assembling a layer starts with block(self.inplanes, planes, stride, downsample),
        and whether the block is a Bottleneck or a BasicBlock, its forward contains:
            if self.downsample is not None:
                residual = self.downsample(x)
        followed by:
            out += residual
        In Bottleneck the channels of out go inplanes -> planes * self.expansion;
        in BasicBlock they go inplanes -> planes.

        So downsample exists to make the out += residual step valid: it projects the
        input feature map to the same shape as the output of the convolutions in the
        residual branch, so that the two can be added. It maps
        self.inplanes -> planes * block.expansion, where BasicBlock.expansion = 1
        and Bottleneck.expansion = 4 (a worked trace follows this list).
        """
        downsample = None
        if stride != 1 or self.inplanes != planes * block.expansion:
            downsample = nn.Sequential(
                conv1x1(self.inplanes, planes * block.expansion, stride),
                nn.BatchNorm2d(planes * block.expansion),
            )
        # stack `blocks` blocks in sequence
        layers = []
        layers.append(block(self.inplanes, planes, stride, downsample))
        self.inplanes = planes * block.expansion
        for _ in range(1, blocks):
            layers.append(block(self.inplanes, planes))
        return nn.Sequential(*layers)
    
  3. Finally, average pooling + a fully connected layer.

    self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
    self.fc = nn.Linear(512 * block.expansion, num_classes)
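    As promised above, a worked trace of _make_layer for resnet50's layer2, i.e. _make_layer(Bottleneck, 128, 4, stride=2) with self.inplanes == 256 after layer1 (a sketch of what the loop produces):

    # stride=2 and 256 != 128 * 4, so a projection shortcut is created:
    # downsample = conv1x1(256, 512, stride=2) + BatchNorm2d(512)
    #
    # layer2 is then equivalent to:
    # Bottleneck(256, 128, stride=2, downsample)  # (56,56,256) -> (28,28,512)
    # Bottleneck(512, 128)                        # identity shortcut from here on
    # Bottleneck(512, 128)
    # Bottleneck(512, 128)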
    

The complete code. (I'm still not entirely clear on the purpose of the parameter initialization: the for m in self.modules() loop and the zero_init_residual part.)

class ResNet(nn.Module):
    # block selects the residual block (two-layer BasicBlock or three-layer Bottleneck);
    # layers gives the number of residual blocks in each stage; num_classes is the
    # number of output classes; zero_init_residual controls whether the last BN of
    # each residual branch is zero-initialized
    def __init__(self, block, layers, num_classes=1000, zero_init_residual=False):
        super(ResNet, self).__init__()
        self.inplanes = 64
        self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False)
        self.bn1 = nn.BatchNorm2d(64)
        self.relu = nn.ReLU(inplace=True)
        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)  # 2x downsampling
        self.layer1 = self._make_layer(block, 64, layers[0])
        self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
        self.layer3 = self._make_layer(block, 256, layers[2], stride=2)
        self.layer4 = self._make_layer(block, 512, layers[3], stride=2)
        self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
        self.fc = nn.Linear(512 * block.expansion, num_classes)
        # initialization of the model's conv and BN parameters
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                # Kaiming (He) normal initialization, chosen so the variance of the
                # signal propagated backward through each Conv2d stays at 1
                nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
            elif isinstance(m, nn.BatchNorm2d):
                # initialize m.weight (gamma) to 1 and m.bias (beta) to 0
                nn.init.constant_(m.weight, 1)
                nn.init.constant_(m.bias, 0)
        # zero-initialize the last BN in each residual branch, so the branch starts
        # at zero and every residual block initially behaves like an identity mapping
        if zero_init_residual:
            for m in self.modules():
                if isinstance(m, Bottleneck):  # the last BN in Bottleneck is m.bn3
                    nn.init.constant_(m.bn3.weight, 0)
                elif isinstance(m, BasicBlock):  # the last BN in BasicBlock is m.bn2
                    nn.init.constant_(m.bn2.weight, 0)

    def _make_layer(self, block, planes, blocks, stride=1):
        downsample = None
        if stride != 1 or self.inplanes != planes * block.expansion:
            downsample = nn.Sequential(
                conv1x1(self.inplanes, planes * block.expansion, stride),
                nn.BatchNorm2d(planes * block.expansion),
            )
        layers = []
        layers.append(block(self.inplanes, planes, stride, downsample))
        self.inplanes = planes * block.expansion
        for _ in range(1, blocks):
            layers.append(block(self.inplanes, planes))
        return nn.Sequential(*layers)

    def forward(self, x):
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.maxpool(x)
        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)
        x = self.avgpool(x)
        x = x.view(x.size(0), -1)
        x = self.fc(x)
        return x

Implementing the different depths

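The constructors below use model_zoo and model_urls, which this snippet never defines. In (older versions of) torchvision's resnet.py they look like this:

import torch.utils.model_zoo as model_zoo

model_urls = {
    'resnet18': 'https://download.pytorch.org/models/resnet18-5c106cde.pth',
    'resnet34': 'https://download.pytorch.org/models/resnet34-333f7ec4.pth',
    'resnet50': 'https://download.pytorch.org/models/resnet50-19c8e357.pth',
    'resnet101': 'https://download.pytorch.org/models/resnet101-5d3b4d8f.pth',
    'resnet152': 'https://download.pytorch.org/models/resnet152-b121ed2d.pth',
}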

# 18-layer resnet
def resnet18(pretrained=False, **kwargs):
    model = ResNet(BasicBlock, [2, 2, 2, 2], **kwargs)
    if pretrained:  # load already-trained weights and continue training from them
        model.load_state_dict(model_zoo.load_url(model_urls['resnet18']))
    return model

# 34-layer resnet
def resnet34(pretrained=False, **kwargs):
    model = ResNet(BasicBlock, [3, 4, 6, 3], **kwargs)
    if pretrained:
        model.load_state_dict(model_zoo.load_url(model_urls['resnet34']))
    return model

# 50-layer resnet
def resnet50(pretrained=False, **kwargs):
    model = ResNet(Bottleneck, [3, 4, 6, 3], **kwargs)
    if pretrained:
        model.load_state_dict(model_zoo.load_url(model_urls['resnet50']))
    return model

# 101-layer resnet
def resnet101(pretrained=False, **kwargs):
    model = ResNet(Bottleneck, [3, 4, 23, 3], **kwargs)
    if pretrained:
        model.load_state_dict(model_zoo.load_url(model_urls['resnet101']))
    return model

# 152-layer resnet
def resnet152(pretrained=False, **kwargs):
    model = ResNet(Bottleneck, [3, 8, 36, 3], **kwargs)
    if pretrained:
        model.load_state_dict(model_zoo.load_url(model_urls['resnet152']))
    return model
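Putting it all together, a minimal smoke test (pretrained left at False, so nothing is downloaded):

model = resnet50()
x = torch.randn(2, 3, 224, 224)
y = model(x)
print(y.shape)    # torch.Size([2, 1000])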

