ResNeXt Learning Notes: An Improvement on ResNet (with Code)

Preface

Paper: https://arxiv.org/abs/1611.05431

Code: https://gitcode.net/mirrors/facebookresearch/ResNeXt?utm_source=csdn_github_accelerator

1. What Is It?

Another classic from Kaiming He and his collaborators: ResNeXt ("Aggregated Residual Transformations for Deep Neural Networks"). The network can be read as a combination of VGG, ResNet, and Inception: it is built by repeating blocks (as in VGG), each block aggregates a set of transformations (as in Inception), and cross-layer shortcut connections are used throughout (from ResNet).

ResNeXt is a typical hybrid model, combining the basic ideas of Inception and ResNet. Its essence is grouped convolution (groups): the core innovation is to replace ResNet's original three-layer convolutional block with a parallel stack of blocks sharing the same topology. This raises the model's accuracy without a noticeable increase in the order of magnitude of parameters, and because every branch has the same topology, there are fewer hyperparameters, which makes the model easy to port.
 

2. Why?

Designing architectures has become increasingly difficult as the number of hyperparameters (width, filter size, stride, etc.) grows, especially when there are many layers. VGG-Net [36] demonstrated a simple yet effective strategy for constructing very deep networks: stack building blocks of the same shape. This strategy is inherited by ResNet [14], which stacks modules of the same topology. This simple rule reduces the free choice of hyperparameters, and depth is exposed as an essential dimension of neural networks. Moreover, we argue that the simplicity of this rule may reduce the risk of over-adapting the hyperparameters to a specific dataset. The robustness of VGG-nets and ResNets has been proven by various visual recognition tasks [7,10,9,28,31,14] and by non-visual tasks involving speech [42,30] and language [4,41,20].

Unlike VGG-nets, the Inception family of models [38,17,39,37] has demonstrated that carefully designed topologies can achieve compelling accuracy with low theoretical complexity. The Inception models have evolved over time [38,39], but an important common property is the split-transform-merge strategy. In an Inception module, the input is split into a few lower-dimensional embeddings (by 1×1 convolutions), transformed by a set of specialized filters (3×3, 5×5, etc.), and merged by concatenation. It can be shown that the solution space of this architecture is a strict subspace of the solution space of a single large layer (e.g., 5×5) operating on a high-dimensional input. The split-transform-merge behavior of Inception modules is expected to approach the representational power of large, dense layers, but at a considerably lower computational complexity.

Despite good accuracy, the realization of Inception models has been accompanied by a series of complicating factors: the filter numbers and sizes are tailored for each individual transformation, and the modules are customized stage by stage. Although careful combinations of these components yield excellent neural network recipes, it is in general unclear how to adapt Inception architectures to new datasets or tasks, especially when there are many factors and hyperparameters to be designed.

This paper proposes a simple architecture that adopts the VGG/ResNet strategy of repeating identical layers while continuing the split-transform-merge strategy in a simple, extensible way: the high-dimensional feature maps of ResNet are split into multiple identical low-dimensional groups; after the convolution operations, the outputs of the groups are summed, which yields the ResNeXt model.

The Inception family adopts a multi-branch split-transform-merge structure (a toy code sketch follows the list below):

1) Split: the vector x is divided into low-dimensional embeddings (dimensionality reduced by 1×1 convolutions);

2) Transform: each low-dimensional feature undergoes a linear transform (features further extracted by 3×3 or 5×5 convolutions);

3) Merge: the branch outputs are combined into the final output (in Inception, by concatenating the features of each branch);

Shortcoming: the kernel number and size of every transform must be tailor-made, and the module changes at every stage. In particular, when applying an Inception model to new data or tasks, it is not clear how it should be modified.
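
To make the three stages concrete, here is a minimal toy sketch in PyTorch (my own illustration, not code from Inception or the paper); the branch widths (64 and 32) are arbitrary:

import torch
import torch.nn as nn

# Toy Split-Transform-Merge module: 1x1 convs split the input into low-dimensional
# embeddings, each branch applies its own transform, and the outputs are merged
# by concatenation (as in Inception).
class ToySplitTransformMerge(nn.Module):
    def __init__(self, in_channels=256):
        super().__init__()
        self.branch3x3 = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=1),    # split: reduce to 64 dims
            nn.Conv2d(64, 64, kernel_size=3, padding=1),  # transform with 3x3
        )
        self.branch5x5 = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=1),    # split: reduce to 32 dims
            nn.Conv2d(32, 32, kernel_size=5, padding=2),  # transform with 5x5
        )

    def forward(self, x):
        # merge: concatenate branch outputs along the channel dimension
        return torch.cat([self.branch3x3(x), self.branch5x5(x)], dim=1)

print(ToySplitTransformMerge()(torch.randn(1, 256, 56, 56)).shape)  # [1, 96, 56, 56]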

3. How?

3.1 The Model

We adopt a highly modularized design following VGG/ResNets: our network consists of a stack of residual blocks. These blocks share the same topology and are subject to two simple rules inspired by VGG/ResNets: (i) if blocks produce spatial maps of the same size, they share the same hyperparameters (width and filter sizes); and (ii) each time the spatial map is downsampled by a factor of 2, the width of the blocks is multiplied by 2. The second rule ensures that the computational complexity, in terms of FLOPs (floating-point operations, i.e., multiply-adds), is roughly the same for all blocks.
With these two rules we only need to design a template module, and all modules in a network can be determined accordingly.

The two design rules:

(1) If the output spatial size is the same, the blocks share the same hyperparameters (width and kernel size).

(2) Whenever the spatial resolution is halved (downsampling), the width of the kernels is doubled. This keeps the computational complexity per block constant.

Effect:

These two rules greatly narrow the design space and let us focus on a few key factors. A quick numeric check of rule (2) follows.
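
The check below is my own arithmetic, not code from the paper: halving the spatial size while doubling the width leaves the multiply-adds of a 3×3 convolution unchanged.

# multiply-adds of a k x k convolution on an H x W feature map
def conv_madds(h, w, c_in, c_out, k=3):
    return h * w * c_in * c_out * k * k

print(conv_madds(56, 56, 128, 128))  # 462422016
print(conv_madds(28, 28, 256, 256))  # 462422016 -- identical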

3.2 Revisiting a Single Neuron

The simplest neuron in an artificial neural network performs an inner product (weighted sum), the elementary transformation carried out by fully-connected and convolutional layers. The inner product can be viewed as a form of aggregated transformation:

$$\sum_{i=1}^{D} w_i x_i \tag{1}$$

where $x = [x_1, x_2, \ldots, x_D]$ is a D-channel input vector of the neuron and $w_i$ is the filter weight for the i-th channel. This operation (usually including some output nonlinearity) is referred to as a "neuron". See Figure 2 of the paper.


The operation above can be recast as a combination of splitting, transforming, and aggregating (the snippet after this list spells it out in code):
(i) Splitting: the vector x is sliced into low-dimensional embeddings, here the one-dimensional subspaces $x_i$;
(ii) Transforming: each low-dimensional representation is transformed, here simply scaled: $w_i x_i$;
(iii) Aggregating: the transformations of all embeddings are aggregated by Eq. (1).
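
The decomposition is trivial to verify in code; a minimal sketch of my own in PyTorch:

import torch

x = torch.randn(8)  # a D = 8 channel input vector
w = torch.randn(8)  # the neuron's weights

splits = x.unbind(0)                                      # (i) split into subspaces x_i
transformed = [w[i] * xi for i, xi in enumerate(splits)]  # (ii) scale each: w_i * x_i
aggregated = sum(transformed)                             # (iii) aggregate by summing

assert torch.allclose(aggregated, w @ x)  # identical to the plain inner product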

3.3 Aggregated Transformations

The output of the aggregated transformation in a ResNeXt block can be written as:

$$\mathcal{F}(x) = \sum_{i=1}^{C} \mathcal{T}_i(x) \tag{2}$$

where the parameter C is the number of transformation paths (the cardinality) and $\mathcal{T}_i$ is the corresponding transformation, which projects x into an (optionally low-dimensional) embedding and then transforms it.

This extends the VGG design principle: from repeating layers of the same size to repeating groups of filters with the same topology.

In this paper we consider a simple way of designing the transformation functions: all the $\mathcal{T}_i$ have the same topology.

In this case, the first 1×1 layer in each $\mathcal{T}_i$ produces the low-dimensional embedding, and the output of the corresponding residual block can then be written as:

$$y = x + \sum_{i=1}^{C} \mathcal{T}_i(x) \tag{3}$$
 
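A direct translation of Eq. (3) into PyTorch, in the multi-branch form (a sketch of my own; BN and ReLU are omitted for brevity):

import torch
import torch.nn as nn

class AggregatedBlock(nn.Module):
    def __init__(self, channels=256, cardinality=32, d=4):
        super().__init__()
        # C = 32 paths, all with the same topology: 1x1 reduce, 3x3, 1x1 restore
        self.paths = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels, d, kernel_size=1, bias=False),   # 256 -> 4
                nn.Conv2d(d, d, kernel_size=3, padding=1, bias=False),
                nn.Conv2d(d, channels, kernel_size=1, bias=False),   # 4 -> 256
            ) for _ in range(cardinality)
        ])

    def forward(self, x):
        return x + sum(path(x) for path in self.paths)  # y = x + sum_i T_i(x)

print(AggregatedBlock()(torch.randn(1, 256, 56, 56)).shape)  # [1, 256, 56, 56]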

Pictorial view (Figure 1 of the paper shows the block with its 32 paths):

Concrete operations:

Splitting: a low-dimensional embedding via 1×1 convolutions; the 256 channels become 4 channels in each of 32 branches (cardinality = 32)

Transforming: each branch applies its transformation (its layers operate on the data)

Aggregating: the transformed results (feature maps) of the 32 branches are aggregated

Three equivalent forms of the block (Figure 3 in the paper)
(a) split first, convolve each path and compute its output separately, then sum the outputs: the three-stage split-transform-merge form.

(b) split first and convolve each path separately, then concatenate before computing the output: the last 1×1 convolutions of all branches are aggregated into a single convolution.

(c) plain grouped convolution: the first 1×1 convolutions of all branches are fused into a single convolution, and the 3×3 convolution takes the grouped form, with the number of groups equal to the cardinality.

Analysis of form (c) (a code sketch follows the four steps):

(1) First a 1×1 convolutional layer reduces the dimensionality, bringing the channels from 256 down to 128.

(2) A grouped convolution then processes the result: the kernels are 3×3, groups = 32, and the output channels are likewise 128.

(3) Next, a 1×1 convolution raises the dimensionality back up (128 to 256).

(4) Finally, the output is added to the input to produce the final output.
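
Form (c) can be written out directly; this is a bare sketch of the four steps above (the full Bottleneck implementation in section 3.4 adds BN, ReLU and a configurable shortcut):

import torch
import torch.nn as nn

block_c = nn.Sequential(
    nn.Conv2d(256, 128, kernel_size=1, bias=False),           # (1) reduce 256 -> 128
    nn.Conv2d(128, 128, kernel_size=3, padding=1, groups=32,  # (2) 3x3 grouped conv,
              bias=False),                                     #     groups = 32
    nn.Conv2d(128, 256, kernel_size=1, bias=False),           # (3) restore 128 -> 256
)

x = torch.randn(1, 256, 56, 56)
out = block_c(x) + x  # (4) add the shortcut to get the final output
print(out.shape)      # torch.Size([1, 256, 56, 56])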


Why are the three forms equivalent?
(1) (b) and (c) are equivalent

First layer:

Process: going from (b) to (c): the first layer of (b) consists of 32 branches (paths), each a 1×1 convolution with 4 kernels. Every path has kernel size 1×1 and 256 input channels, and since there are 32 paths they can simply be merged together, becoming the first layer in figure (c).

Parameters:

(b) first layer: 256×1×1×4×32 = 32768

(c) first layer: 256×1×1×128 = 32768

Second layer:

Process: this is exactly a grouped convolution. Each path can be viewed as one group; each group's input and output channels are 1/groups of the original, and each group uses a 3×3 kernel; after convolution the feature matrices are concatenated. So the second layer of (b) is equivalent to the second layer of (c), a grouped convolution with 32 groups (a numerical check follows the parameter counts below).

Parameters:

(b) second layer: 4×3×3×4×32 = 4608

(c) second layer: (128/32)×3×3×128 = 4608
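
The equivalence can be checked numerically (my own sketch): 32 independent 3×3 convolutions on 4-channel slices produce exactly the same result as one grouped convolution with groups=32, once the weights are shared.

import torch
import torch.nn.functional as F
from torch import nn

g, d = 32, 4
grouped = nn.Conv2d(g * d, g * d, kernel_size=3, padding=1, groups=g, bias=False)

x = torch.randn(1, g * d, 14, 14)
outs = []
for i in range(g):
    xi = x[:, i * d:(i + 1) * d]            # group i's 4 input channels
    wi = grouped.weight[i * d:(i + 1) * d]  # group i's (4, 4, 3, 3) weight block
    outs.append(F.conv2d(xi, wi, padding=1))

# concatenating the per-path outputs matches the grouped convolution
assert torch.allclose(torch.cat(outs, dim=1), grouped(x), atol=1e-6)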

(2) (a) and (b) are equivalent

Process: in (a), each 4-dimensional feature map is mapped to 256 dimensions by a 1×1 convolution and the 32 resulting 256-dimensional outputs are summed; in (b), the 4-dimensional outputs are first concatenated into 128 dimensions and then passed through a single 1×1 convolution, which performs exactly the same summation.

Parameters:

(a) third layer: 4×1×1×256×32 = 32768

(b) third layer: 128×1×1×256 = 32768
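
All of the counts above ((b)/(c) first and second layers, (a)/(b) third layer) can be reproduced with nn.Conv2d (bias disabled, as in the block):

from torch import nn

b1 = nn.Conv2d(256, 4, kernel_size=1, bias=False)    # one of (b)'s 32 first-layer paths
c1 = nn.Conv2d(256, 128, kernel_size=1, bias=False)  # (c)'s merged first layer
print(b1.weight.numel() * 32, c1.weight.numel())     # 32768 32768

b2 = nn.Conv2d(4, 4, kernel_size=3, padding=1, bias=False)  # one of (b)'s 32 second-layer paths
c2 = nn.Conv2d(128, 128, kernel_size=3, padding=1, groups=32, bias=False)
print(b2.weight.numel() * 32, c2.weight.numel())            # 4608 4608

a3 = nn.Conv2d(4, 256, kernel_size=1, bias=False)    # one of (a)'s 32 third-layer paths
b3 = nn.Conv2d(128, 256, kernel_size=1, bias=False)  # (b)'s merged third layer
print(a3.weight.numel() * 32, b3.weight.numel())     # 32768 32768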
 


Grouped convolution

Grouped convolution was already used in AlexNet: constrained by the hardware of the time, the authors had to split the convolution across two GPUs, whose parameters were not shared. The two sets of filters learned two different kinds of features: one group learned texture, the other color.

Operation:

In a grouped convolutional layer, the input and output channels are divided into C groups, and convolution is performed separately within each group.

Advantages (a quick parameter check follows the list):

(1) Fewer parameters: split into G groups, the layer's parameter count drops to 1/G of the original.

(2) The network learns different features: each group of convolutions learns different features, yielding richer information.

(3) Grouped convolution can be viewed as a dropout applied to the feature maps, giving a regularization effect.
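
A quick check of advantage (1) (my own snippet): with G = 4 groups the parameter count is exactly 1/4 of the dense convolution.

from torch import nn

dense   = nn.Conv2d(128, 128, kernel_size=3, padding=1, bias=False)
grouped = nn.Conv2d(128, 128, kernel_size=3, padding=1, groups=4, bias=False)
print(dense.weight.numel(), grouped.weight.numel())  # 147456 36864 (ratio 1/4)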

3.4 Code Implementation

(1) The BasicBlock module

The basic residual block, i.e., the BasicBlock used by the 18- and 34-layer variants; it is implemented the same way as in ResNet.

'''------------- 1. The BasicBlock module -----------------------------'''
# Basic residual block used by ResNet-18 and ResNet-34
class BasicBlock(nn.Module):
    expansion = 1  # BasicBlock does not expand its output channels

    # groups / width_per_group are accepted (and ignored) so that _make_layer
    # can construct every block type with the same arguments
    def __init__(self, in_channel, out_channel, stride=1, downsample=None,
                 groups=1, width_per_group=64):
        super(BasicBlock, self).__init__()
        self.left = nn.Sequential(
            nn.Conv2d(in_channel, out_channel, kernel_size=3, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(out_channel),
            nn.ReLU(),
            nn.Conv2d(out_channel, out_channel, kernel_size=3, stride=1, padding=1, bias=False),
            nn.BatchNorm2d(out_channel)
        )
        self.downsample = downsample  # optional projection for the shortcut branch
 
    def forward(self, x):
        identity = x  # the residual block keeps the original input
        if self.downsample is not None:
            identity = self.downsample(x)
 
        out = self.left(x)
        out += identity  # the core of ResNet: add the input x onto the output
        out = F.relu(out)
        return out
(2) The Bottleneck module


As the table in the paper shows, the number of kernels in the first and second convolutional layers of each convx stage in ResNeXt is twice that of ResNet. In the implementation this means adding two parameters, groups and width_per_group (the number of groups, and the number of kernels per group in the conv2 grouped convolution), and computing the width of the first convolution from them (twice that of ResNet). For example, with channel = 64, groups = 32 and width_per_group = 4: width = int(64 × (4/64)) × 32 = 128, versus 64 in plain ResNet-50.
 

'''------------- 2. The Bottleneck module -----------------------------'''
class Bottleneck(nn.Module):
 
    expansion = 4
 
    # Compared with ResNet, two parameters are added: groups and width_per_group
    # (the number of groups, and the number of kernels per group in conv2).
    # With the default values this is exactly a plain ResNet.
    def __init__(self, in_channel, out_channel, stride=1, downsample=None,
                 groups=1, width_per_group=64):
        super(Bottleneck, self).__init__()
        # Width of the middle 3x3 convolution, computed automatically; with the
        # defaults it equals out_channel. With groups=32 and width_per_group=4,
        # the width is twice out_channel.
        width = int(out_channel * (width_per_group / 64.)) * groups
 
        self.conv1 = nn.Conv2d(in_channels=in_channel, out_channels=width,
                               kernel_size=1, stride=1, bias=False)
        self.bn1 = nn.BatchNorm2d(width)
        # -----------------------------------------
        # grouped convolution; the number of groups is passed in
        self.conv2 = nn.Conv2d(in_channels=width, out_channels=width, groups=groups,
                               kernel_size=3, stride=stride, bias=False, padding=1)
        self.bn2 = nn.BatchNorm2d(width)
        # -----------------------------------------
        self.conv3 = nn.Conv2d(in_channels=width, out_channels=out_channel * self.expansion,
                               kernel_size=1, stride=1, bias=False)
        self.bn3 = nn.BatchNorm2d(out_channel * self.expansion)
        # -----------------------------------------
        self.relu = nn.ReLU(inplace=True)
        self.downsample = downsample
 
    def forward(self, x):
        identity = x
        if self.downsample is not None:
            identity = self.downsample(x)
 
        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)
 
        out = self.conv2(out)
        out = self.bn2(out)
        out = self.relu(out)
 
        out = self.conv3(out)
        out = self.bn3(out)
 
        out += identity  # residual connection
        out = self.relu(out)
 
        return out
(3) Building the ResNeXt network

Overall network structure

Following module (c), a 1×1 convolutional layer first reduces the channels of the input feature matrix from 256 to 128; a 3×3 grouped convolution with 32 groups then processes it; a 1×1 convolutional layer raises the channels from 128 back to 256; finally the main branch and the shortcut are added to produce the final output.

'''------------- 3. Building the ResNeXt structure -----------------------------'''
class ResNeXt(nn.Module):
    def __init__(self,
                 block,  # block type (BasicBlock or Bottleneck)
                 blocks_num,  # number of residual blocks in each stage
                 num_classes=1000,  # number of output classes
                 include_top=True,  # whether to include the classification head (useful for transfer learning)
                 groups=1,  # number of groups for the grouped convolution
                 width_per_group=64):
        super(ResNeXt, self).__init__()
        self.include_top = include_top
        self.in_channel = 64
 
        self.groups = groups
        self.width_per_group = width_per_group
 
        self.conv1 = nn.Conv2d(3, self.in_channel, kernel_size=7, stride=2,
                               padding=3, bias=False)
        self.bn1 = nn.BatchNorm2d(self.in_channel)
        self.relu = nn.ReLU(inplace=True)
        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        self.layer1 = self._make_layer(block, 64, blocks_num[0])            # output: 64 * expansion channels
        self.layer2 = self._make_layer(block, 128, blocks_num[1], stride=2) # output: 128 * expansion channels
        self.layer3 = self._make_layer(block, 256, blocks_num[2], stride=2) # output: 256 * expansion channels
        self.layer4 = self._make_layer(block, 512, blocks_num[3], stride=2) # output: 512 * expansion channels
        if self.include_top:
            self.avgpool = nn.AdaptiveAvgPool2d((1, 1))  # output size = (1, 1)
            self.fc = nn.Linear(512 * block.expansion, num_classes)
 
 
 
    # Build the network structure of a single stage
    def _make_layer(self, block, channel, block_num, stride=1):
        downsample = None
        if stride != 1 or self.in_channel != channel * block.expansion:
            downsample = nn.Sequential(
                nn.Conv2d(self.in_channel, channel * block.expansion, kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(channel * block.expansion))
        # Store the first residual block of the stage (the one that may downsample) in layers.
        layers = []
        layers.append(block(self.in_channel,
                            channel,
                            downsample=downsample,
                            stride=stride,
                            groups=self.groups,
                            width_per_group=self.width_per_group))
        self.in_channel = channel * block.expansion  # input channels for the following blocks
 
        # Store the remaining residual blocks of the stage in layers, completing the stage.
        for _ in range(1, block_num):
            layers.append(block(self.in_channel,
                                channel,
                                groups=self.groups,
                                width_per_group=self.width_per_group))
 
        # Return the conv block and identity blocks together, forming one stage
        return nn.Sequential(*layers)
 
    def forward(self, x):
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.maxpool(x)
 
        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)
 
        if self.include_top:
            x = self.avgpool(x)
            x = torch.flatten(x, 1)
            x = self.fc(x)
 
        return x

Building the model variants

To use the code, pass the residual block corresponding to the desired depth directly as an argument. Besides the block type, the number of times each block is repeated also differs, so it is a parameter as well. Each model variant simply passes different parameters to the ResNeXt class.

def ResNet34(num_classes=1000, include_top=True):
 
    return ResNeXt(BasicBlock, [3, 4, 6, 3], num_classes=num_classes, include_top=include_top)
 
 
def ResNet50(num_classes=1000, include_top=True):
 
    return ResNeXt(Bottleneck, [3, 4, 6, 3], num_classes=num_classes, include_top=include_top)
 
 
def ResNet101(num_classes=1000, include_top=True):
 
    return ResNeXt(Bottleneck, [3, 4, 23, 3], num_classes=num_classes, include_top=include_top)
 
 
# ResNeXt50_32x4d from the paper
def ResNeXt50_32x4d(num_classes=1000, include_top=True):
 
    groups = 32
    width_per_group = 4
    return ResNeXt(Bottleneck, [3, 4, 6, 3],
                  num_classes=num_classes,
                  include_top=include_top,
                  groups=groups,
                  width_per_group=width_per_group)
 
 
def ResNeXt101_32x8d(num_classes=1000, include_top=True):
 
    groups = 32
    width_per_group = 8
    return ResNeXt(Bottleneck, [3, 4, 23, 3],
                  num_classes=num_classes,
                  include_top=include_top,
                  groups=groups,
                  width_per_group=width_per_group)
(4) Testing the network

Build the model and print the paper's ResNeXt50_32x4d:

if __name__ == '__main__':
    model = ResNeXt50_32x4d()
    print(model)
    input = torch.randn(1, 3, 224, 224)
    out = model(input)
    print(out.shape)
# test()

The printed model:

ResNeXt(
  (conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
  (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (relu): ReLU(inplace=True)
  (maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
  (layer1): Sequential(
    (0): Bottleneck(
      (conv1): Conv2d(64, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False)
      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(128, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (downsample): Sequential(
        (0): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): Bottleneck(
      (conv1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False)
      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(128, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (2): Bottleneck(
      (conv1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False)
      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(128, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
  )
  (layer2): Sequential(
    (0): Bottleneck(
      (conv1): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=32, bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(256, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (downsample): Sequential(
        (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): Bottleneck(
      (conv1): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(256, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (2): Bottleneck(
      (conv1): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(256, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (3): Bottleneck(
      (conv1): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(256, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
  )
  (layer3): Sequential(
    (0): Bottleneck(
      (conv1): Conv2d(512, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=32, bias=False)
      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(512, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (downsample): Sequential(
        (0): Conv2d(512, 1024, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): Bottleneck(
      (conv1): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False)
      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(512, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (2): Bottleneck(
      (conv1): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False)
      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(512, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (3): Bottleneck(
      (conv1): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False)
      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(512, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (4): Bottleneck(
      (conv1): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False)
      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(512, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (5): Bottleneck(
      (conv1): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False)
      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(512, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
  )
  (layer4): Sequential(
    (0): Bottleneck(
      (conv1): Conv2d(1024, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=32, bias=False)
      (bn2): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(1024, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (downsample): Sequential(
        (0): Conv2d(1024, 2048, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): Bottleneck(
      (conv1): Conv2d(2048, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False)
      (bn2): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(1024, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (2): Bottleneck(
      (conv1): Conv2d(2048, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False)
      (bn2): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(1024, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
  )
  (avgpool): AdaptiveAvgPool2d(output_size=(1, 1))
  (fc): Linear(in_features=2048, out_features=1000, bias=True)
)
torch.Size([1, 1000])
 
Process finished with exit code 0

Use torchsummary to print the detailed information of the network:

from torchsummary import summary
 
if __name__ == '__main__':
    net = ResNeXt50_32x4d().cuda()
    summary(net, (3, 224, 224))

The printed summary:

----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1         [-1, 64, 112, 112]           9,408
       BatchNorm2d-2         [-1, 64, 112, 112]             128
              ReLU-3         [-1, 64, 112, 112]               0
         MaxPool2d-4           [-1, 64, 56, 56]               0
            Conv2d-5          [-1, 256, 56, 56]          16,384
       BatchNorm2d-6          [-1, 256, 56, 56]             512
            Conv2d-7          [-1, 128, 56, 56]           8,192
       BatchNorm2d-8          [-1, 128, 56, 56]             256
              ReLU-9          [-1, 128, 56, 56]               0
           Conv2d-10          [-1, 128, 56, 56]           4,608
      BatchNorm2d-11          [-1, 128, 56, 56]             256
             ReLU-12          [-1, 128, 56, 56]               0
           Conv2d-13          [-1, 256, 56, 56]          32,768
      BatchNorm2d-14          [-1, 256, 56, 56]             512
             ReLU-15          [-1, 256, 56, 56]               0
       Bottleneck-16          [-1, 256, 56, 56]               0
           Conv2d-17          [-1, 128, 56, 56]          32,768
      BatchNorm2d-18          [-1, 128, 56, 56]             256
             ReLU-19          [-1, 128, 56, 56]               0
           Conv2d-20          [-1, 128, 56, 56]           4,608
      BatchNorm2d-21          [-1, 128, 56, 56]             256
             ReLU-22          [-1, 128, 56, 56]               0
           Conv2d-23          [-1, 256, 56, 56]          32,768
      BatchNorm2d-24          [-1, 256, 56, 56]             512
             ReLU-25          [-1, 256, 56, 56]               0
       Bottleneck-26          [-1, 256, 56, 56]               0
           Conv2d-27          [-1, 128, 56, 56]          32,768
      BatchNorm2d-28          [-1, 128, 56, 56]             256
             ReLU-29          [-1, 128, 56, 56]               0
           Conv2d-30          [-1, 128, 56, 56]           4,608
      BatchNorm2d-31          [-1, 128, 56, 56]             256
             ReLU-32          [-1, 128, 56, 56]               0
           Conv2d-33          [-1, 256, 56, 56]          32,768
      BatchNorm2d-34          [-1, 256, 56, 56]             512
             ReLU-35          [-1, 256, 56, 56]               0
       Bottleneck-36          [-1, 256, 56, 56]               0
           Conv2d-37          [-1, 512, 28, 28]         131,072
      BatchNorm2d-38          [-1, 512, 28, 28]           1,024
           Conv2d-39          [-1, 256, 56, 56]          65,536
      BatchNorm2d-40          [-1, 256, 56, 56]             512
             ReLU-41          [-1, 256, 56, 56]               0
           Conv2d-42          [-1, 256, 28, 28]          18,432
      BatchNorm2d-43          [-1, 256, 28, 28]             512
             ReLU-44          [-1, 256, 28, 28]               0
           Conv2d-45          [-1, 512, 28, 28]         131,072
      BatchNorm2d-46          [-1, 512, 28, 28]           1,024
             ReLU-47          [-1, 512, 28, 28]               0
       Bottleneck-48          [-1, 512, 28, 28]               0
           Conv2d-49          [-1, 256, 28, 28]         131,072
      BatchNorm2d-50          [-1, 256, 28, 28]             512
             ReLU-51          [-1, 256, 28, 28]               0
           Conv2d-52          [-1, 256, 28, 28]          18,432
      BatchNorm2d-53          [-1, 256, 28, 28]             512
             ReLU-54          [-1, 256, 28, 28]               0
           Conv2d-55          [-1, 512, 28, 28]         131,072
      BatchNorm2d-56          [-1, 512, 28, 28]           1,024
             ReLU-57          [-1, 512, 28, 28]               0
       Bottleneck-58          [-1, 512, 28, 28]               0
           Conv2d-59          [-1, 256, 28, 28]         131,072
      BatchNorm2d-60          [-1, 256, 28, 28]             512
             ReLU-61          [-1, 256, 28, 28]               0
           Conv2d-62          [-1, 256, 28, 28]          18,432
      BatchNorm2d-63          [-1, 256, 28, 28]             512
             ReLU-64          [-1, 256, 28, 28]               0
           Conv2d-65          [-1, 512, 28, 28]         131,072
      BatchNorm2d-66          [-1, 512, 28, 28]           1,024
             ReLU-67          [-1, 512, 28, 28]               0
       Bottleneck-68          [-1, 512, 28, 28]               0
           Conv2d-69          [-1, 256, 28, 28]         131,072
      BatchNorm2d-70          [-1, 256, 28, 28]             512
             ReLU-71          [-1, 256, 28, 28]               0
           Conv2d-72          [-1, 256, 28, 28]          18,432
      BatchNorm2d-73          [-1, 256, 28, 28]             512
             ReLU-74          [-1, 256, 28, 28]               0
           Conv2d-75          [-1, 512, 28, 28]         131,072
      BatchNorm2d-76          [-1, 512, 28, 28]           1,024
             ReLU-77          [-1, 512, 28, 28]               0
       Bottleneck-78          [-1, 512, 28, 28]               0
           Conv2d-79         [-1, 1024, 14, 14]         524,288
      BatchNorm2d-80         [-1, 1024, 14, 14]           2,048
           Conv2d-81          [-1, 512, 28, 28]         262,144
      BatchNorm2d-82          [-1, 512, 28, 28]           1,024
             ReLU-83          [-1, 512, 28, 28]               0
           Conv2d-84          [-1, 512, 14, 14]          73,728
      BatchNorm2d-85          [-1, 512, 14, 14]           1,024
             ReLU-86          [-1, 512, 14, 14]               0
           Conv2d-87         [-1, 1024, 14, 14]         524,288
      BatchNorm2d-88         [-1, 1024, 14, 14]           2,048
             ReLU-89         [-1, 1024, 14, 14]               0
       Bottleneck-90         [-1, 1024, 14, 14]               0
           Conv2d-91          [-1, 512, 14, 14]         524,288
      BatchNorm2d-92          [-1, 512, 14, 14]           1,024
             ReLU-93          [-1, 512, 14, 14]               0
           Conv2d-94          [-1, 512, 14, 14]          73,728
      BatchNorm2d-95          [-1, 512, 14, 14]           1,024
             ReLU-96          [-1, 512, 14, 14]               0
           Conv2d-97         [-1, 1024, 14, 14]         524,288
      BatchNorm2d-98         [-1, 1024, 14, 14]           2,048
             ReLU-99         [-1, 1024, 14, 14]               0
      Bottleneck-100         [-1, 1024, 14, 14]               0
          Conv2d-101          [-1, 512, 14, 14]         524,288
     BatchNorm2d-102          [-1, 512, 14, 14]           1,024
            ReLU-103          [-1, 512, 14, 14]               0
          Conv2d-104          [-1, 512, 14, 14]          73,728
     BatchNorm2d-105          [-1, 512, 14, 14]           1,024
            ReLU-106          [-1, 512, 14, 14]               0
          Conv2d-107         [-1, 1024, 14, 14]         524,288
     BatchNorm2d-108         [-1, 1024, 14, 14]           2,048
            ReLU-109         [-1, 1024, 14, 14]               0
      Bottleneck-110         [-1, 1024, 14, 14]               0
          Conv2d-111          [-1, 512, 14, 14]         524,288
     BatchNorm2d-112          [-1, 512, 14, 14]           1,024
            ReLU-113          [-1, 512, 14, 14]               0
          Conv2d-114          [-1, 512, 14, 14]          73,728
     BatchNorm2d-115          [-1, 512, 14, 14]           1,024
            ReLU-116          [-1, 512, 14, 14]               0
          Conv2d-117         [-1, 1024, 14, 14]         524,288
     BatchNorm2d-118         [-1, 1024, 14, 14]           2,048
            ReLU-119         [-1, 1024, 14, 14]               0
      Bottleneck-120         [-1, 1024, 14, 14]               0
          Conv2d-121          [-1, 512, 14, 14]         524,288
     BatchNorm2d-122          [-1, 512, 14, 14]           1,024
            ReLU-123          [-1, 512, 14, 14]               0
          Conv2d-124          [-1, 512, 14, 14]          73,728
     BatchNorm2d-125          [-1, 512, 14, 14]           1,024
            ReLU-126          [-1, 512, 14, 14]               0
          Conv2d-127         [-1, 1024, 14, 14]         524,288
     BatchNorm2d-128         [-1, 1024, 14, 14]           2,048
            ReLU-129         [-1, 1024, 14, 14]               0
      Bottleneck-130         [-1, 1024, 14, 14]               0
          Conv2d-131          [-1, 512, 14, 14]         524,288
     BatchNorm2d-132          [-1, 512, 14, 14]           1,024
            ReLU-133          [-1, 512, 14, 14]               0
          Conv2d-134          [-1, 512, 14, 14]          73,728
     BatchNorm2d-135          [-1, 512, 14, 14]           1,024
            ReLU-136          [-1, 512, 14, 14]               0
          Conv2d-137         [-1, 1024, 14, 14]         524,288
     BatchNorm2d-138         [-1, 1024, 14, 14]           2,048
            ReLU-139         [-1, 1024, 14, 14]               0
      Bottleneck-140         [-1, 1024, 14, 14]               0
          Conv2d-141           [-1, 2048, 7, 7]       2,097,152
     BatchNorm2d-142           [-1, 2048, 7, 7]           4,096
          Conv2d-143         [-1, 1024, 14, 14]       1,048,576
     BatchNorm2d-144         [-1, 1024, 14, 14]           2,048
            ReLU-145         [-1, 1024, 14, 14]               0
          Conv2d-146           [-1, 1024, 7, 7]         294,912
     BatchNorm2d-147           [-1, 1024, 7, 7]           2,048
            ReLU-148           [-1, 1024, 7, 7]               0
          Conv2d-149           [-1, 2048, 7, 7]       2,097,152
     BatchNorm2d-150           [-1, 2048, 7, 7]           4,096
            ReLU-151           [-1, 2048, 7, 7]               0
      Bottleneck-152           [-1, 2048, 7, 7]               0
          Conv2d-153           [-1, 1024, 7, 7]       2,097,152
     BatchNorm2d-154           [-1, 1024, 7, 7]           2,048
            ReLU-155           [-1, 1024, 7, 7]               0
          Conv2d-156           [-1, 1024, 7, 7]         294,912
     BatchNorm2d-157           [-1, 1024, 7, 7]           2,048
            ReLU-158           [-1, 1024, 7, 7]               0
          Conv2d-159           [-1, 2048, 7, 7]       2,097,152
     BatchNorm2d-160           [-1, 2048, 7, 7]           4,096
            ReLU-161           [-1, 2048, 7, 7]               0
      Bottleneck-162           [-1, 2048, 7, 7]               0
          Conv2d-163           [-1, 1024, 7, 7]       2,097,152
     BatchNorm2d-164           [-1, 1024, 7, 7]           2,048
            ReLU-165           [-1, 1024, 7, 7]               0
          Conv2d-166           [-1, 1024, 7, 7]         294,912
     BatchNorm2d-167           [-1, 1024, 7, 7]           2,048
            ReLU-168           [-1, 1024, 7, 7]               0
          Conv2d-169           [-1, 2048, 7, 7]       2,097,152
     BatchNorm2d-170           [-1, 2048, 7, 7]           4,096
            ReLU-171           [-1, 2048, 7, 7]               0
      Bottleneck-172           [-1, 2048, 7, 7]               0
AdaptiveAvgPool2d-173           [-1, 2048, 1, 1]               0
          Linear-174                 [-1, 1000]       2,049,000
================================================================
Total params: 25,028,904
Trainable params: 25,028,904
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.57
Forward/backward pass size (MB): 361.78
Params size (MB): 95.48
Estimated Total Size (MB): 457.83
----------------------------------------------------------------
 
Process finished with exit code 0
(5) Complete code
import torch
import torch.nn as nn
import torch.nn.functional as F
 
'''------------- 1. The BasicBlock module -----------------------------'''
# Basic residual block used by ResNet-18 and ResNet-34
class BasicBlock(nn.Module):
    expansion = 1  # BasicBlock does not expand its output channels

    # groups / width_per_group are accepted (and ignored) so that _make_layer
    # can construct every block type with the same arguments
    def __init__(self, in_channel, out_channel, stride=1, downsample=None,
                 groups=1, width_per_group=64):
        super(BasicBlock, self).__init__()
        self.left = nn.Sequential(
            nn.Conv2d(in_channel, out_channel, kernel_size=3, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(out_channel),
            nn.ReLU(),
            nn.Conv2d(out_channel, out_channel, kernel_size=3, stride=1, padding=1, bias=False),
            nn.BatchNorm2d(out_channel)
        )
        self.downsample = downsample  # optional projection for the shortcut branch
 
    def forward(self, x):
        identity = x  # the residual block keeps the original input
        if self.downsample is not None:
            identity = self.downsample(x)
 
        out = self.left(x)
        out += identity  # the core of ResNet: add the input x onto the output
        out = F.relu(out)
        return out
 
'''------------- 2. The Bottleneck module -----------------------------'''
class Bottleneck(nn.Module):
 
    expansion = 4
 
    # Compared with ResNet, two parameters are added: groups and width_per_group
    # (the number of groups, and the number of kernels per group in conv2).
    # With the default values this is exactly a plain ResNet.
    def __init__(self, in_channel, out_channel, stride=1, downsample=None,
                 groups=1, width_per_group=64):
        super(Bottleneck, self).__init__()
        # Width of the middle 3x3 convolution, computed automatically; with the
        # defaults it equals out_channel. With groups=32 and width_per_group=4,
        # the width is twice out_channel.
        width = int(out_channel * (width_per_group / 64.)) * groups
 
        self.conv1 = nn.Conv2d(in_channels=in_channel, out_channels=width,
                               kernel_size=1, stride=1, bias=False)
        self.bn1 = nn.BatchNorm2d(width)
        # -----------------------------------------
        # grouped convolution; the number of groups is passed in
        self.conv2 = nn.Conv2d(in_channels=width, out_channels=width, groups=groups,
                               kernel_size=3, stride=stride, bias=False, padding=1)
        self.bn2 = nn.BatchNorm2d(width)
        # -----------------------------------------
        self.conv3 = nn.Conv2d(in_channels=width, out_channels=out_channel * self.expansion,
                               kernel_size=1, stride=1, bias=False)
        self.bn3 = nn.BatchNorm2d(out_channel * self.expansion)
        # -----------------------------------------
        self.relu = nn.ReLU(inplace=True)
        self.downsample = downsample
 
    def forward(self, x):
        identity = x
        if self.downsample is not None:
            identity = self.downsample(x)
 
        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)
 
        out = self.conv2(out)
        out = self.bn2(out)
        out = self.relu(out)
 
        out = self.conv3(out)
        out = self.bn3(out)
 
        out += identity  # residual connection
        out = self.relu(out)
 
        return out
 
'''------------- 3. Building the ResNeXt structure -----------------------------'''
class ResNeXt(nn.Module):
    def __init__(self,
                 block,  # block type (BasicBlock or Bottleneck)
                 blocks_num,  # number of residual blocks in each stage
                 num_classes=1000,  # number of output classes
                 include_top=True,  # whether to include the classification head (useful for transfer learning)
                 groups=1,  # number of groups for the grouped convolution
                 width_per_group=64):
        super(ResNeXt, self).__init__()
        self.include_top = include_top
        self.in_channel = 64
 
        self.groups = groups
        self.width_per_group = width_per_group
 
        self.conv1 = nn.Conv2d(3, self.in_channel, kernel_size=7, stride=2,
                               padding=3, bias=False)
        self.bn1 = nn.BatchNorm2d(self.in_channel)
        self.relu = nn.ReLU(inplace=True)
        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        self.layer1 = self._make_layer(block, 64, blocks_num[0])            # output: 64 * expansion channels
        self.layer2 = self._make_layer(block, 128, blocks_num[1], stride=2) # output: 128 * expansion channels
        self.layer3 = self._make_layer(block, 256, blocks_num[2], stride=2) # output: 256 * expansion channels
        self.layer4 = self._make_layer(block, 512, blocks_num[3], stride=2) # output: 512 * expansion channels
        if self.include_top:
            self.avgpool = nn.AdaptiveAvgPool2d((1, 1))  # output size = (1, 1)
            self.fc = nn.Linear(512 * block.expansion, num_classes)
 
 
 
    # Build the network structure of a single stage
    def _make_layer(self, block, channel, block_num, stride=1):
        downsample = None
        if stride != 1 or self.in_channel != channel * block.expansion:
            downsample = nn.Sequential(
                nn.Conv2d(self.in_channel, channel * block.expansion, kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(channel * block.expansion))
        # Store the first residual block of the stage (the one that may downsample) in layers.
        layers = []
        layers.append(block(self.in_channel,
                            channel,
                            downsample=downsample,
                            stride=stride,
                            groups=self.groups,
                            width_per_group=self.width_per_group))
        self.in_channel = channel * block.expansion  # input channels for the following blocks
 
        # Store the remaining residual blocks of the stage in layers, completing the stage.
        for _ in range(1, block_num):
            layers.append(block(self.in_channel,
                                channel,
                                groups=self.groups,
                                width_per_group=self.width_per_group))
 
        # Return the conv block and identity blocks together, forming one stage
        return nn.Sequential(*layers)
 
    def forward(self, x):
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.maxpool(x)
 
        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)
 
        if self.include_top:
            x = self.avgpool(x)
            x = torch.flatten(x, 1)
            x = self.fc(x)
 
        return x
 
 
def ResNet34(num_classes=1000, include_top=True):
 
    return ResNeXt(BasicBlock, [3, 4, 6, 3], num_classes=num_classes, include_top=include_top)
 
 
def ResNet50(num_classes=1000, include_top=True):
 
    return ResNeXt(Bottleneck, [3, 4, 6, 3], num_classes=num_classes, include_top=include_top)
 
 
def ResNet101(num_classes=1000, include_top=True):
 
    return ResNeXt(Bottleneck, [3, 4, 23, 3], num_classes=num_classes, include_top=include_top)
 
 
# ResNeXt50_32x4d from the paper
def ResNeXt50_32x4d(num_classes=1000, include_top=True):
 
    groups = 32
    width_per_group = 4
    return ResNeXt(Bottleneck, [3, 4, 6, 3],
                  num_classes=num_classes,
                  include_top=include_top,
                  groups=groups,
                  width_per_group=width_per_group)
 
 
def ResNeXt101_32x8d(num_classes=1000, include_top=True):
 
    groups = 32
    width_per_group = 8
    return ResNeXt(Bottleneck, [3, 4, 23, 3],
                  num_classes=num_classes,
                  include_top=include_top,
                  groups=groups,
                  width_per_group=width_per_group)
 
'''
if __name__ == '__main__':
    model = ResNeXt50_32x4d()
    print(model)
    input = torch.randn(1, 3, 224, 224)
    out = model(input)
    print(out.shape)
# test()
'''
from torchsummary import summary
 
if __name__ == '__main__':
    net = ResNeXt50_32x4d().cuda()
    summary(net, (3, 224, 224))

