Deep Learning: ResNet and the Bottleneck Architecture

Contents

1. Introduction to ResNet

Features of ResNet

Recent achievements

2. ResNet-18

Structure of ResNet-18

Code implementation

3. Bottleneck

Structure of the bottleneck

Code implementation

4. ResNet-50

Structure of ResNet-50

Code implementation

5. Experimental Comparison and Analysis

ResNet-18

ResNet-50

6. Summary

1. Introduction to ResNet

Features of ResNet

        ResNet (Residual Neural Network) is a deep convolutional neural network proposed by Kaiming He et al. in 2015. Its core idea is the residual block, which mitigates the vanishing- and exploding-gradient problems common in deep networks and thus allows networks to go much deeper. The main features of ResNet include:

  • Residual learning: by introducing an identity mapping, the network can easily learn the identity function, which alleviates the vanishing-gradient problem.
  • Skip connections: a residual block passes its input directly to its output through a skip connection, improving the flow of information and gradients.
  • Deeper networks: thanks to residual blocks, ResNet can be built much deeper while keeping error low, e.g. ResNet-50 and ResNet-101.
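The core idea, learning a residual F(x) and adding the input back through a skip connection, can be illustrated with a minimal toy block. This is a hypothetical sketch for intuition only, not the full residual block implemented later:

```python
import torch
from torch import nn

class ToyResidual(nn.Module):
    """Minimal residual block: output = relu(F(x) + x)."""
    def __init__(self, channels):
        super().__init__()
        # F(x): a single 3x3 convolution that preserves the input shape
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x):
        return torch.relu(self.conv(x) + x)  # the skip connection adds the input back

block = ToyResidual(8)
x = torch.rand(1, 8, 16, 16)
print(block(x).shape)  # shape is preserved, so the addition is well defined
```

Because F(x) and x have the same shape here, no extra projection is needed; the real blocks below add a 1x1 convolution on the skip path when shapes differ.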
Recent achievements

        Since its introduction, ResNet has achieved remarkable results across computer vision tasks, including image classification, object detection, and semantic segmentation. Its success inspired many follow-up works such as Wide ResNet and ResNeXt, which further improve performance in different settings.

        This article starts from the basic ResNet-18 and then builds up to ResNet-50.

2. ResNet-18

        ResNet-18 is one of the shallower networks in the ResNet family, with 18 weight layers (convolutional plus fully connected). It uses the basic residual block, each containing two 3x3 convolutional layers.

Structure of ResNet-18
  • Conv1: 7x7 convolution, 64 output channels
  • MaxPool: 3x3 max pooling
  • Layer1: 2 residual blocks, 2 conv layers each, 4 conv layers in total
  • Layer2: 2 residual blocks, 2 conv layers each, 4 conv layers in total
  • Layer3: 2 residual blocks, 2 conv layers each, 4 conv layers in total
  • Layer4: 2 residual blocks, 2 conv layers each, 4 conv layers in total
  • AvgPool: global average pooling
  • FC: fully connected layer with 10 output classes
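As a quick sanity check, the "18" in the name can be recovered by counting the weight layers in the structure above:

```python
# stem conv + 4 stages * (2 blocks * 2 convs each) + final fully connected layer
total = 1 + 4 * (2 * 2) + 1
print(total)  # 18
```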
Code implementation
# Residual block implementation
import torch
from torch import nn
from torch.nn import functional as F

class Residual(nn.Module):  #@save
    def __init__(self, input_channels, num_channels, use_1x1conv=False, strides=1):
        super().__init__()
        self.conv1 = nn.Conv2d(input_channels, num_channels, kernel_size=3, padding=1, stride=strides)
        self.conv2 = nn.Conv2d(num_channels, num_channels, kernel_size=3, padding=1)
        if use_1x1conv:
            # 1x1 convolution to match channels and resolution on the skip path
            self.conv3 = nn.Conv2d(input_channels, num_channels, kernel_size=1, stride=strides)
        else:
            self.conv3 = None
        self.bn1 = nn.BatchNorm2d(num_channels)
        self.bn2 = nn.BatchNorm2d(num_channels)

    def forward(self, X):
        Y = F.relu(self.bn1(self.conv1(X)))
        Y = self.bn2(self.conv2(Y))
        if self.conv3:
            X = self.conv3(X)
        Y += X
        return F.relu(Y)

# The ResNet model
b1 = nn.Sequential(
    nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3),  # first conv: 1 input channel, 64 output channels, 7x7 kernel, stride 2, padding 3
    nn.BatchNorm2d(64),                                    # batch normalization
    nn.ReLU(),                                             # ReLU activation
    nn.MaxPool2d(kernel_size=3, stride=2, padding=1)       # max pooling: 3x3 kernel, stride 2, padding 1
)

# In its first residual block, each module doubles the channel count of the
# previous module and halves the height and width (except the first module)
def resnet_block(input_channels, num_channels, num_residuals, first_block=False):
    blk = []
    for i in range(num_residuals):
        if i == 0 and not first_block:
            blk.append(Residual(input_channels, num_channels, use_1x1conv=True, strides=2))
        else:
            blk.append(Residual(num_channels, num_channels))
    return blk

b2 = nn.Sequential(*resnet_block(64, 64, 2, first_block=True))
b3 = nn.Sequential(*resnet_block(64, 128, 2))
b4 = nn.Sequential(*resnet_block(128, 256, 2))
b5 = nn.Sequential(*resnet_block(256, 512, 2))

net = nn.Sequential(b1, b2, b3, b4, b5, nn.AdaptiveAvgPool2d((1,1)), nn.Flatten(), nn.Linear(512, 10))
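To verify the stem's downsampling, here is a standalone shape check that reproduces b1 from above on a 96x96 single-channel input (the resolution used in the experiments later):

```python
import torch
from torch import nn

# Stem as in b1 above: 7x7 conv with stride 2, then 3x3 max pooling with stride 2
b1 = nn.Sequential(
    nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3),
    nn.BatchNorm2d(64),
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2, padding=1))

X = torch.rand(1, 1, 96, 96)
print(b1(X).shape)  # 96 -> 48 (conv, stride 2) -> 24 (pool, stride 2)
```

Each of b3, b4, and b5 then halves the resolution again, so the global average pool at the end sees a small feature map.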

        In ResNet-18, each residual block has a very simple design: two 3x3 convolutional layers, with a 1x1 convolution to adjust channels and resolution when necessary. This makes ResNet-18 both easy to implement and computationally efficient.

3. Bottleneck

Structure of the bottleneck

        ResNet-50 and deeper ResNet variants replace the basic residual block with the bottleneck block, which contains three convolutional layers: 1x1, 3x3, and 1x1. The main goal of this design is to reduce computation and parameter count while preserving representational power.

  • First 1x1 convolution: reduces (or adjusts) the channel count, lowering the cost of the following 3x3 convolution.
  • 3x3 convolution: extracts spatial features; this is the core of the bottleneck block, capturing fine-grained image detail.
  • Second 1x1 convolution: restores (expands) the channel count so the result can be added to the input.

        With this design, the computation inside each block drops significantly while the network keeps its depth and expressiveness.
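To make the savings concrete, here is a small standalone comparison of the two block bodies at 256 channels with a 64-channel bottleneck. The numbers are illustrative and not tied to a specific layer of the model below:

```python
from torch import nn

def n_params(m):
    """Total number of learnable parameters in a module."""
    return sum(p.numel() for p in m.parameters())

# Basic-block body at 256 channels: two 3x3 convolutions
basic = nn.Sequential(
    nn.Conv2d(256, 256, kernel_size=3, padding=1, bias=False),
    nn.Conv2d(256, 256, kernel_size=3, padding=1, bias=False))

# Bottleneck body: 1x1 reduce to 64, 3x3 at 64, 1x1 expand back to 256
bottleneck = nn.Sequential(
    nn.Conv2d(256, 64, kernel_size=1, bias=False),
    nn.Conv2d(64, 64, kernel_size=3, padding=1, bias=False),
    nn.Conv2d(64, 256, kernel_size=1, bias=False))

print(n_params(basic), n_params(bottleneck))  # about 17x fewer parameters in the bottleneck
```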

Code implementation
class Bottleneck(nn.Module):
    # Output channel count of the block is expansion * out_channel
    expansion = 4
    
    def __init__(self, in_channel, out_channel, stride=1, downsample=None, groups=1, width_per_group=64):
        super(Bottleneck, self).__init__()
        
        width = int(out_channel * (width_per_group / 64.)) * groups
        
        # 1x1 convolution: reduce channels
        self.conv1 = nn.Conv2d(in_channels=in_channel, out_channels=width, kernel_size=1, stride=1, bias=False)
        self.bn1 = nn.BatchNorm2d(width)
        
        # 3x3 convolution: extract spatial features (stride may downsample)
        self.conv2 = nn.Conv2d(in_channels=width, out_channels=width, groups=groups, kernel_size=3, stride=stride, bias=False, padding=1)
        self.bn2 = nn.BatchNorm2d(width)
        
        # 1x1 convolution: expand channels back by a factor of `expansion`
        self.conv3 = nn.Conv2d(in_channels=width, out_channels=out_channel * self.expansion, kernel_size=1, stride=1, bias=False)
        self.bn3 = nn.BatchNorm2d(out_channel * self.expansion)
        
        self.relu = nn.ReLU(inplace=True)
        self.downsample = downsample
    
    def forward(self, x):
        identity = x
        if self.downsample is not None:
            identity = self.downsample(x)
        
        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)
        
        out = self.conv2(out)
        out = self.bn2(out)
        out = self.relu(out)
        
        out = self.conv3(out)
        out = self.bn3(out)
        
        out += identity
        out = self.relu(out)
        
        return out
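The `downsample` argument above carries a projection for the identity path whenever the main path changes shape. A standalone sketch of such a projection, with illustrative shapes (channel expansion by expansion = 4 at the first block of a stage):

```python
import torch
from torch import nn

# The identity path must match the main path's output shape before the addition.
# When the channel count expands (out_channel * 4) or stride > 1, a 1x1
# projection with batch norm is used on the skip path:
downsample = nn.Sequential(
    nn.Conv2d(64, 256, kernel_size=1, stride=1, bias=False),  # 64 -> 64 * 4 channels
    nn.BatchNorm2d(256))

x = torch.rand(1, 64, 24, 24)
print(downsample(x).shape)  # channels expanded to 256, spatial size unchanged
```

The `_make_layer` method in the next section builds exactly this kind of projection and passes it to the first block of each stage.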

4. ResNet-50

Structure of ResNet-50

        ResNet-50 uses the bottleneck block, each containing three convolutional layers (1x1, 3x3, 1x1). The network has 50 weight layers in total (49 convolutional plus 1 fully connected).

  • Conv1: 7x7 convolution, 64 output channels
  • MaxPool: 3x3 max pooling
  • Layer1: 3 bottleneck blocks, 3 conv layers each, 9 conv layers in total
  • Layer2: 4 bottleneck blocks, 3 conv layers each, 12 conv layers in total
  • Layer3: 6 bottleneck blocks, 3 conv layers each, 18 conv layers in total
  • Layer4: 3 bottleneck blocks, 3 conv layers each, 9 conv layers in total
  • AvgPool: global average pooling
  • FC: fully connected layer with 10 output classes
Code implementation
class ResNet(nn.Module):
    def __init__(self, block, blocks_num, num_classes=1000, include_top=True, groups=1, width_per_group=64):
        super(ResNet, self).__init__()
        self.include_top = include_top
        self.in_channel = 64
        
        self.groups = groups
        self.width_per_group = width_per_group
        
        self.conv1 = nn.Conv2d(1, self.in_channel, kernel_size=7, stride=2, padding=3, bias=False)  # 1 input channel (grayscale FashionMNIST)
        self.bn1 = nn.BatchNorm2d(self.in_channel)
        self.relu = nn.ReLU(inplace=True)
        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        
        self.layer1 = self._make_layer(block, 64, blocks_num[0])
        self.layer2 = self._make_layer(block, 128, blocks_num[1], stride=2)
        self.layer3 = self._make_layer(block, 256, blocks_num[2], stride=2)
        self.layer4 = self._make_layer(block, 512, blocks_num[3], stride=2)
        
        if self.include_top:
            self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
            self.fc = nn.Linear(512 * block.expansion, num_classes)
        
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
    
    def _make_layer(self, block, channel, block_num, stride=1):
        downsample = None
        if stride != 1 or self.in_channel != channel * block.expansion:
            downsample = nn.Sequential(
                nn.Conv2d(self.in_channel, channel * block.expansion, kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(channel * block.expansion))
        
        layers = []
        layers.append(block(self.in_channel, channel, downsample=downsample, stride=stride, groups=self.groups, width_per_group=self.width_per_group))
        self.in_channel = channel * block.expansion
        for _ in range(1, block_num):
            layers.append(block(self.in_channel, channel, groups=self.groups, width_per_group=self.width_per_group))
        return nn.Sequential(*layers)
    
    def forward(self, x):
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.maxpool(x)
        
        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)
        
        if self.include_top:
            x = self.avgpool(x)
            x = torch.flatten(x, 1)
            x = self.fc(x)
        
        return x


def resnet50(num_classes=10, include_top=True):
    return ResNet(Bottleneck, [3, 4, 6, 3], num_classes=num_classes, include_top=include_top)

net = resnet50(num_classes=10, include_top=True)
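The name "ResNet-50" follows directly from the [3, 4, 6, 3] configuration passed above; a quick count of the weight layers (ignoring, by convention, the 1x1 downsample convolutions on the skip paths):

```python
blocks = [3, 4, 6, 3]              # bottleneck blocks per stage
conv_layers = 1 + 3 * sum(blocks)  # stem conv + 3 convs per bottleneck = 49
total = conv_layers + 1            # plus the final fully connected layer
print(total)  # 50
```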

5. Experimental Comparison and Analysis

        We train ResNet-18 and ResNet-50 on the FashionMNIST dataset:

ResNet-18
lr, num_epochs, batch_size = 0.05, 10, 256
train_iter, test_iter = load_data_fashion_mnist(batch_size, resize=96)
d2l.train_ch6(net, train_iter, test_iter, num_epochs, lr, d2l.try_gpu())

ResNet-50
lr, num_epochs, batch_size = 0.05, 10, 128
train_iter, test_iter = load_data_fashion_mnist(batch_size, resize=96)
d2l.train_ch6(net, train_iter, test_iter, num_epochs, lr, d2l.try_gpu())

             Training loss   Training accuracy   Test accuracy   Training speed (examples/sec)
ResNet-18    0.017           0.995               0.916           2670.4
ResNet-50    0.075           0.973               0.914           903.8

        The results show that although ResNet-50 is much deeper than ResNet-18, the two perform similarly on FashionMNIST. This is likely because FashionMNIST is relatively simple, so a shallower network already extracts sufficiently informative features.

6. Summary

        In this study we covered the basic structure and principles of ResNet and explored the implementations of, and differences between, ResNet-18 and ResNet-50. ResNet addresses the vanishing-gradient problem in deep networks with residual blocks, allowing networks to go much deeper. The bottleneck design further improves computational efficiency, reducing computation and parameter count while maintaining performance.

        In practice, the right ResNet variant depends on the complexity of the task and dataset. For simple tasks, ResNet-18 may already be enough, while more complex tasks may benefit from ResNet-50 or even deeper networks.

        These notes record the learning process and experimental results, and will hopefully be useful for future study and applications.
