Notes on Understanding the Residual Network (ResNet) Structure and Its PyTorch Implementation

Among deep learning architectures, I personally consider the residual network the most fundamental. What I am sharing today is not the theory behind residual networks; just remember one thing: the residual idea runs through many later architectures, so once you understand the residual structure, the structures of more advanced networks become easy to read as well.

Overall structure of the residual network

1. Residual block structure

The residual block used by the shallower ResNets (fewer than 50 layers, i.e., ResNet-50 not included) is implemented as follows:

class BasicBlock(nn.Module):
    expansion = 1

    def __init__(self, in_channel, out_channel, stride=1, downsample=None, **kwargs):  # downsample carries the 1x1 projection used by the dashed-line shortcut; None means a plain identity shortcut
        super(BasicBlock, self).__init__()
        self.conv1 = nn.Conv2d(in_channels=in_channel, out_channels=out_channel,
                               kernel_size=3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_channel)
        self.relu = nn.ReLU()
        self.conv2 = nn.Conv2d(in_channels=out_channel, out_channels=out_channel,
                               kernel_size=3, stride=1, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_channel)
        self.downsample = downsample

    def forward(self, x):
        identity = x
        if self.downsample is not None:
            identity = self.downsample(x)

        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)

        out += identity
        out = self.relu(out)

        return out
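As a quick sanity check of BasicBlock (a minimal sketch of my own, assuming import torch and import torch.nn as nn plus the class above), the snippet below pushes a dummy tensor through a block with stride 1 and no downsample; the spatial size and channel count are unchanged, so the identity can be added directly:

import torch
import torch.nn as nn

# stride=1 and no downsample: the output shape equals the input shape
x = torch.randn(1, 64, 56, 56)  # [batch, channels, height, width]
block = BasicBlock(in_channel=64, out_channel=64, stride=1, downsample=None)
print(block(x).shape)  # torch.Size([1, 64, 56, 56])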

The residual block used by the deeper ResNets (50 layers and above, ResNet-50 included) is implemented as follows:

class Bottleneck(nn.Module):
    """
    注意:原论文中,在虚线残差结构的主分支上,第一个1x1卷积层的步距是2,第二个3x3卷积层步距是1。
    但在pytorch官方实现过程中是第一个1x1卷积层的步距是1,第二个3x3卷积层步距是2,
    这么做的好处是能够在top1上提升大概0.5%的准确率。
    可参考Resnet v1.5 https://ngc.nvidia.com/catalog/model-scripts/nvidia:resnet_50_v1_5_for_pytorch
    """
    expansion = 4

    def __init__(self, in_channel, out_channel, stride=1, downsample=None,
                 groups=1, width_per_group=64):
        super(Bottleneck, self).__init__()

        width = int(out_channel * (width_per_group / 64.)) * groups

        self.conv1 = nn.Conv2d(in_channels=in_channel, out_channels=width,
                               kernel_size=1, stride=1, bias=False)  # squeeze channels
        self.bn1 = nn.BatchNorm2d(width)
        # -----------------------------------------
        self.conv2 = nn.Conv2d(in_channels=width, out_channels=width, groups=groups,
                               kernel_size=3, stride=stride, bias=False, padding=1)
        self.bn2 = nn.BatchNorm2d(width)
        # -----------------------------------------
        self.conv3 = nn.Conv2d(in_channels=width, out_channels=out_channel*self.expansion,
                               kernel_size=1, stride=1, bias=False)  # unsqueeze channels
        self.bn3 = nn.BatchNorm2d(out_channel*self.expansion)
        self.relu = nn.ReLU(inplace=True)
        self.downsample = downsample

    def forward(self, x):
        identity = x
        if self.downsample is not None:
            identity = self.downsample(x)

        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)
        out = self.relu(out)

        out = self.conv3(out)
        out = self.bn3(out)

        out += identity
        out = self.relu(out)

        return out
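For Bottleneck, here is a similar minimal sketch (my own test, not part of the original post) using the numbers of the first block in ResNet-50's layer2: the block outputs 128 * expansion = 512 channels at half the spatial size, so the dashed-line shortcut needs a matching 1x1 stride-2 projection:

import torch
import torch.nn as nn

# Projection (dashed-line) shortcut: 256 channels -> 128 * 4 = 512 channels, stride 2
downsample = nn.Sequential(
    nn.Conv2d(256, 128 * Bottleneck.expansion, kernel_size=1, stride=2, bias=False),
    nn.BatchNorm2d(128 * Bottleneck.expansion))
block = Bottleneck(in_channel=256, out_channel=128, stride=2, downsample=downsample)
x = torch.randn(1, 256, 56, 56)
print(block(x).shape)  # torch.Size([1, 512, 28, 28])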

Looking at the two residual blocks above (or at the code), a beginner may well wonder: why are there two kinds of residual blocks? They target networks of different depths. The first block (BasicBlock) is used by the shallower residual networks such as ResNet-18 and ResNet-34, while the second block (Bottleneck) is used by the deeper ones such as ResNet-50, ResNet-101, and ResNet-152.

The code implements both residual blocks separately precisely so that the network depth can be changed easily. Residual block classes are conventionally named something like Block (here BasicBlock and Bottleneck), so when reading the code, follow along with the structure diagram.

Also note that 3x3 convolutions are generally used to reduce the spatial size of the feature map (when given a stride of 2), while 1x1 convolutions are generally used to reduce or increase the number of channels, as the snippet below shows.
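The following toy snippet (my own illustration, not from the original code) makes both roles concrete: a 3x3 convolution with stride 2 halves the feature-map size, while a 1x1 convolution only changes the number of channels:

import torch
import torch.nn as nn

x = torch.randn(1, 64, 56, 56)

# 3x3 conv with stride 2: spatial size 56 -> 28, channels unchanged here
conv3x3 = nn.Conv2d(64, 64, kernel_size=3, stride=2, padding=1, bias=False)
print(conv3x3(x).shape)  # torch.Size([1, 64, 28, 28])

# 1x1 conv: channels 64 -> 256, spatial size unchanged
conv1x1 = nn.Conv2d(64, 256, kernel_size=1, stride=1, bias=False)
print(conv1x1(x).shape)  # torch.Size([1, 256, 56, 56])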

2. The difference between concat and add

Beginners are often puzzled by these two terms, or cannot quite picture what they do, so they are worth a brief note.

The concat operation: the feature maps generally need the same spatial size so that they can be concatenated along the corresponding channel dimension, as illustrated in the figure below:

The add operation: the feature maps need both the same spatial size and the same number of channels. For example, if two feature maps are both 2x2 with a single channel, they can be added element-wise at corresponding positions, as the snippet below makes concrete.
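The toy example below (my own illustration, assuming only torch) shows the difference in shapes: concat stacks the channel dimensions, while add keeps the shape and sums element-wise:

import torch

a = torch.randn(1, 64, 28, 28)
b = torch.randn(1, 64, 28, 28)

# concat: same spatial size required; channels are stacked, 64 + 64 = 128
print(torch.cat([a, b], dim=1).shape)  # torch.Size([1, 128, 28, 28])

# add: same spatial size AND same channel count required; shape unchanged
print((a + b).shape)  # torch.Size([1, 64, 28, 28])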

3. Why the shortcut branch needs downsampling

Consider the figure below. You may notice that the shortcut branch of the residual blocks above is not written out in the 1x1, 128 form shown in the figure; the author assumes you already have some deep learning background, so it is left implicit. Let's work through the figure carefully. An input of [56, 56, 64] first passes through a 3x3, 128 convolution with stride 2 and becomes [28, 28, 128]; a second 3x3, 128 convolution with stride 1 keeps it at [28, 28, 128]. This no longer matches the input [56, 56, 64] in either spatial size or channel count, so the shortcut branch applies a 1x1, 128 convolution with stride 2 to the input, which also yields [28, 28, 128]; the two [28, 28, 128] tensors can then be added.

The code is as follows:

import torch.nn as nn
import torch
class BasicBlock(nn.Module):
    expansion = 1

    def __init__(self, in_channel, out_channel, stride=1, downsample=None, **kwargs):  # downsample carries the 1x1 projection used by the dashed-line shortcut; None means a plain identity shortcut
        super(BasicBlock, self).__init__()
        self.conv1 = nn.Conv2d(in_channels=in_channel, out_channels=out_channel,
                               kernel_size=3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_channel)
        self.relu = nn.ReLU()
        self.conv2 = nn.Conv2d(in_channels=out_channel, out_channels=out_channel,
                               kernel_size=3, stride=1, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_channel)
        # For this demo the projection shortcut is hard-coded: a 1x1 conv with stride 2
        # maps [56, 56, 64] to [28, 28, 128] so it matches the main branch
        # (the downsample argument passed to __init__ is ignored here).
        self.downsample = nn.Conv2d(in_channels=in_channel, out_channels=out_channel, kernel_size=1, stride=2)
        self.bn3 = nn.BatchNorm2d(out_channel)

    def forward(self, x):
        identity = x
        if self.downsample is not None:
            identity =self.relu(self.bn3(self.downsample(x)))

        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)

        out += identity
        out = self.relu(out)

        return out

if __name__ == '__main__':
    a = torch.randn((1, 64, 56, 56))
    model = BasicBlock(in_channel=64, out_channel=128, stride=2, downsample=True)
    out = model(a)
    print(out.shape)  # torch.Size([1, 128, 28, 28])

The complete ResNet code is as follows:

import torch.nn as nn
import torch

# The class below is the 3x3 + 3x3 residual block (used by the shallower ResNets)
class BasicBlock(nn.Module):
    expansion = 1

    def __init__(self, in_channel, out_channel, stride=1, downsample=None, **kwargs):  # downsample carries the 1x1 projection used by the dashed-line shortcut; None means a plain identity shortcut
        super(BasicBlock, self).__init__()
        self.conv1 = nn.Conv2d(in_channels=in_channel, out_channels=out_channel,
                               kernel_size=3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_channel)
        self.relu = nn.ReLU()
        self.conv2 = nn.Conv2d(in_channels=out_channel, out_channels=out_channel,
                               kernel_size=3, stride=1, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_channel)
        self.downsample = downsample

    def forward(self, x):
        identity = x
        if self.downsample is not None:
            identity = self.downsample(x)

        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)

        out += identity
        out = self.relu(out)

        return out

# This is the bottleneck residual block used by ResNet-50 and deeper
class Bottleneck(nn.Module):
    """
    注意:原论文中,在虚线残差结构的主分支上,第一个1x1卷积层的步距是2,第二个3x3卷积层步距是1。
    但在pytorch官方实现过程中是第一个1x1卷积层的步距是1,第二个3x3卷积层步距是2,
    这么做的好处是能够在top1上提升大概0.5%的准确率。
    可参考Resnet v1.5 https://ngc.nvidia.com/catalog/model-scripts/nvidia:resnet_50_v1_5_for_pytorch
    """
    expansion = 4

    def __init__(self, in_channel, out_channel, stride=1, downsample=None,
                 groups=1, width_per_group=64):
        super(Bottleneck, self).__init__()

        width = int(out_channel * (width_per_group / 64.)) * groups

        self.conv1 = nn.Conv2d(in_channels=in_channel, out_channels=width,
                               kernel_size=1, stride=1, bias=False)  # squeeze channels
        self.bn1 = nn.BatchNorm2d(width)
        # -----------------------------------------
        self.conv2 = nn.Conv2d(in_channels=width, out_channels=width, groups=groups,
                               kernel_size=3, stride=stride, bias=False, padding=1)
        self.bn2 = nn.BatchNorm2d(width)
        # -----------------------------------------
        self.conv3 = nn.Conv2d(in_channels=width, out_channels=out_channel*self.expansion,
                               kernel_size=1, stride=1, bias=False)  # unsqueeze channels
        self.bn3 = nn.BatchNorm2d(out_channel*self.expansion)
        self.relu = nn.ReLU(inplace=True)
        self.downsample = downsample

    def forward(self, x):
        identity = x
        if self.downsample is not None:
            identity = self.downsample(x)

        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)
        out = self.relu(out)

        out = self.conv3(out)
        out = self.bn3(out)

        out += identity
        out = self.relu(out)

        return out


class ResNet(nn.Module):

    def __init__(self,
                 block,
                 blocks_num,  # number of residual blocks per stage, e.g. [3, 4, 6, 3] for ResNet-34
                 num_classes=1000,
                 include_top=True,  # keep the classification head; set False when building more complex networks on top of the backbone
                 groups=1,
                 width_per_group=64):
        super(ResNet, self).__init__()
        self.include_top = include_top
        self.in_channel = 64

        self.groups = groups
        self.width_per_group = width_per_group

        self.conv1 = nn.Conv2d(3, self.in_channel, kernel_size=7, stride=2,
                               padding=3, bias=False)
        self.bn1 = nn.BatchNorm2d(self.in_channel)
        self.relu = nn.ReLU(inplace=True)
        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        self.layer1 = self._make_layer(block, 64, blocks_num[0])
        self.layer2 = self._make_layer(block, 128, blocks_num[1], stride=2)
        self.layer3 = self._make_layer(block, 256, blocks_num[2], stride=2)
        self.layer4 = self._make_layer(block, 512, blocks_num[3], stride=2)
        if self.include_top:
            self.avgpool = nn.AdaptiveAvgPool2d((1, 1))  # output size = (1, 1)
            self.fc = nn.Linear(512 * block.expansion, num_classes)

        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')

    def _make_layer(self, block, channel, block_num, stride=1):
        downsample = None
        if stride != 1 or self.in_channel != channel * block.expansion:
            downsample = nn.Sequential(
                nn.Conv2d(self.in_channel, channel * block.expansion, kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(channel * block.expansion))

        layers = []
        layers.append(block(self.in_channel,
                            channel,
                            downsample=downsample,
                            stride=stride,
                            groups=self.groups,
                            width_per_group=self.width_per_group))
        self.in_channel = channel * block.expansion

        for _ in range(1, block_num):
            layers.append(block(self.in_channel,
                                channel,
                                groups=self.groups,
                                width_per_group=self.width_per_group))

        return nn.Sequential(*layers)

    def forward(self, x):
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.maxpool(x)

        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)

        if self.include_top:
            x = self.avgpool(x)
            x = torch.flatten(x, 1)
            x = self.fc(x)

        return x


def resnet34(num_classes=1000, include_top=True):
    # https://download.pytorch.org/models/resnet34-333f7ec4.pth
    return ResNet(BasicBlock, [3, 4, 6, 3], num_classes=num_classes, include_top=include_top)


def resnet50(num_classes=1000, include_top=True):
    # https://download.pytorch.org/models/resnet50-19c8e357.pth
    return ResNet(Bottleneck, [3, 4, 6, 3], num_classes=num_classes, include_top=include_top)


def resnet101(num_classes=1000, include_top=True):
    # https://download.pytorch.org/models/resnet101-5d3b4d8f.pth
    return ResNet(Bottleneck, [3, 4, 23, 3], num_classes=num_classes, include_top=include_top)


def resnext50_32x4d(num_classes=1000, include_top=True):
    # https://download.pytorch.org/models/resnext50_32x4d-7cdf4587.pth
    groups = 32
    width_per_group = 4
    return ResNet(Bottleneck, [3, 4, 6, 3],
                  num_classes=num_classes,
                  include_top=include_top,
                  groups=groups,
                  width_per_group=width_per_group)


def resnext101_32x8d(num_classes=1000, include_top=True):
    # https://download.pytorch.org/models/resnext101_32x8d-8ba56ff5.pth
    groups = 32
    width_per_group = 8
    return ResNet(Bottleneck, [3, 4, 23, 3],
                  num_classes=num_classes,
                  include_top=include_top,
                  groups=groups,
                  width_per_group=width_per_group)

if __name__ == '__main__':
    net=resnet34()
    print(net)
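To check the full network end to end, the short test below (a sketch of my own, assuming an ImageNet-sized 224x224 input) runs a dummy image through resnet34 and resnet50 and verifies that both return 1000-way logits:

x = torch.randn(1, 3, 224, 224)  # dummy ImageNet-sized input
for build in (resnet34, resnet50):
    net = build(num_classes=1000)
    print(build.__name__, net(x).shape)  # torch.Size([1, 1000])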

That completes the walkthrough of the network structure! I hope you found it useful; if anything is unclear, feel free to leave a comment!
