365-Day Deep Learning Training Camp - Week J9: Inception v3 in Practice and Analysis

I. Background and Development Environment

📌 Week J9: Inception v3 in Practice and Analysis 📌

  • Language: Python 3, PyTorch

📌 Tasks for this week: 📌

  1. Understand and study which aspects Inception v3 improves over Inception v1 (key point)
  2. Use Inception v3 to complete the weather recognition case study

II. Theoretical Background

Inception v3 paper
Rethinking the Inception Architecture for Computer Vision.pdf

Inception v3 was proposed by Google researcher Christian Szegedy and colleagues in the 2015 paper "Rethinking the Inception Architecture for Computer Vision". It is the third version of the Inception network family and achieved excellent results on ImageNet, performing particularly well on large-scale image recognition.

The main characteristics of Inception v3 are as follows:

  1. Deeper network: Inception v3 is deeper than the earlier Inception networks, at roughly 48 layers. The extra depth lets the network extract features at more levels and achieve better results on image-recognition tasks.
  2. Factorized convolutions: Inception v3 factorizes large convolution kernels into several smaller ones. This reduces the number of parameters and the computational cost while maintaining good accuracy.
  3. Batch Normalization: Inception v3 applies Batch Normalization (BN) after every convolution, which helps convergence and generalization. BN reduces internal covariate shift, speeds up training, and makes the model more robust.
  4. Auxiliary classifier: Inception v3 attaches an auxiliary classifier to an intermediate layer. During training its loss is added, with a weight, to the loss of the main classifier, providing an extra gradient signal that helps the network learn better features; at inference only the main classifier is used.
  5. RMSProp optimizer: Inception v3 is trained with RMSProp. Compared with plain stochastic gradient descent (SGD), RMSProp adapts the learning rate per parameter, which makes training more stable and convergence faster.

Inception v3 has achieved strong results on computer-vision tasks such as image classification, object detection, and image segmentation. However, because of its relatively large architecture and computational cost, it can demand fairly capable hardware in practice.
Inception module
Compared with the Inception module of Inception v1 (shown above), Inception v3 makes the following changes:

  • The 5×5 convolution is factorized into two 3×3 convolutions to speed up computation. Although this may seem counter-intuitive, a 5×5 convolution costs 25/9 ≈ 2.78 times as much as a 3×3 convolution, so replacing it with two stacked 3×3 convolutions (which cover the same receptive field) is actually a net gain; see the cost sketch after this list and the figure below:
    Inception modules

  • In addition, the authors factorize an n×n convolution into a 1×n convolution followed by an n×1 convolution. For example, a 3×3 convolution is replaced by a 1×3 convolution followed by a 3×1 convolution, which they found to be about 33% cheaper than a single 3×3 convolution. This structure is shown below:
    Inception modules after the factorization
    Here, with n = 3, this matches the previous figure: the left-most 5×5 convolution can be expressed as two 3×3 convolutions, each of which can in turn be expressed as a 1×3 followed by a 3×1 convolution.

  • The filter banks in the module are expanded (made wider rather than deeper) to remove the representational bottleneck. If the module were made deeper instead of wider, the dimensionality would shrink too aggressively and information would be lost, as shown below:
    Inception modules with expanded filter bank outputs

  • The overall Inception v3 network assembled from these modules is shown below:
    Inception V3
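
A quick back-of-the-envelope check of the cost figures quoted above (a minimal sketch: it only counts multiply-accumulates per output position for a single input/output channel pair, so only the ratios matter):

cost_5x5     = 5 * 5                # one 5x5 kernel
cost_3x3     = 3 * 3                # one 3x3 kernel
cost_two_3x3 = 2 * cost_3x3         # two stacked 3x3 kernels (same receptive field as one 5x5)
cost_1x3_3x1 = (1 * 3) + (3 * 1)    # asymmetric factorization of a 3x3 kernel

print(cost_5x5 / cost_3x3)          # ~2.78: one 5x5 conv costs 2.78x a 3x3 conv
print(cost_two_3x3 / cost_5x5)      # 0.72: two 3x3 convs are ~28% cheaper than one 5x5
print(1 - cost_1x3_3x1 / cost_3x3)  # ~0.33: 1x3 + 3x1 is ~33% cheaper than a single 3x3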

III. Building the Network Model

0.BasicConv2d

import torch
import torch.nn as nn
import torch.nn.functional as F


class BasicConv2d(nn.Module):
    def __init__(self, in_channel, out_channel, **kwargs):
        super(BasicConv2d, self).__init__()
        self.conv = nn.Conv2d(in_channel, out_channel, bias=False, **kwargs)
        self.norm = nn.BatchNorm2d(out_channel, eps=0.001)
        self.relu = nn.ReLU(inplace=True)
        
    def forward(self, x):
        x = self.conv(x)
        x = self.norm(x)
        x = self.relu(x)
        return x
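
A quick sanity check of the block (a minimal usage sketch; the configuration happens to match the first stem convolution of the network below):

conv = BasicConv2d(3, 32, kernel_size=3, stride=2)
x = torch.randn(1, 3, 299, 299)   # dummy batch of one 299x299 RGB image
print(conv(x).shape)              # torch.Size([1, 32, 149, 149])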

1.Inception-A

class InceptionA(nn.Module):
    def __init__(self, in_channel, pool_feature):
        super(InceptionA, self).__init__()
        # 1x1 conv
        self.conv1 = BasicConv2d(in_channel, 64, kernel_size=1)
        # 1x1 conv -> 5x5 conv
        self.conv2 = nn.Sequential(
            BasicConv2d(in_channel, 48, kernel_size=1),
            BasicConv2d(48, 64, kernel_size=5, padding=2)
        )
        # 1x1 conv -> [3x3 conv]x2
        self.conv3 = nn.Sequential(
            BasicConv2d(in_channel, 64, kernel_size=1),
            BasicConv2d(64, 96, kernel_size=3, padding=1),
            BasicConv2d(96, 96, kernel_size=3, padding=1)
        )
        # 3x3 avgpool -> 1x1 conv
        self.conv4 = nn.Sequential(
            nn.AvgPool2d(kernel_size=3, stride=1, padding=1),
            BasicConv2d(in_channel, pool_feature, kernel_size=1)
        )
        
    def forward(self, x):
        x1 = self.conv1(x)
        x2 = self.conv2(x)
        x3 = self.conv3(x)
        x4 = self.conv4(x)
        x = torch.cat([x1,x2,x3,x4], 1)
        return x
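
The four branches are concatenated along the channel dimension, so Inception-A outputs 64 + 64 + 96 + pool_feature channels at the same spatial resolution. A quick check with the shapes used at the 35×35 stage of the network below:

inc_a = InceptionA(192, pool_feature=32)
x = torch.randn(1, 192, 35, 35)
print(inc_a(x).shape)   # torch.Size([1, 256, 35, 35]) -> 64 + 64 + 96 + 32 channels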

2.Inception-B

class InceptionB(nn.Module):
    def __init__(self, in_channel, filter7x7):
        super(InceptionB, self).__init__()
        # 1x1 conv
        self.conv1 = BasicConv2d(in_channel, 192, kernel_size=1)
        # 1x1 conv -> 1x7 conv -> 7x1 conv
        self.conv2 = nn.Sequential(
            BasicConv2d(in_channel, filter7x7, kernel_size=1),
            BasicConv2d(filter7x7, filter7x7, kernel_size=(1,7), padding=(0,3)),
            BasicConv2d(filter7x7, 192, kernel_size=(7,1), padding=(3,0))
        )
        # 1x1 conv -> [7x1 conv -> 1x7 conv]x2
        self.conv3 = nn.Sequential(
            BasicConv2d(in_channel, filter7x7, kernel_size=1),
            BasicConv2d(filter7x7, filter7x7, kernel_size=(7,1), padding=(3,0)),
            BasicConv2d(filter7x7, filter7x7, kernel_size=(1,7), padding=(0,3)),
            BasicConv2d(filter7x7, filter7x7, kernel_size=(7,1), padding=(3,0)),
            BasicConv2d(filter7x7, 192, kernel_size=(1,7), padding=(0,3))
        )
        # 3x3 avgpool -> 1x1 conv
        self.conv4 = nn.Sequential(
            nn.AvgPool2d(kernel_size=3, stride=1, padding=1),
            BasicConv2d(in_channel, 192, kernel_size=1)
        )
        
    def forward(self, x):
        x1 = self.conv1(x)
        x2 = self.conv2(x)
        x3 = self.conv3(x)
        x4 = self.conv4(x)
        x = torch.cat([x1,x2,x3,x4], 1)
        return x
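
Every branch of Inception-B ends in 192 channels, so the module always outputs 4 × 192 = 768 channels; filter7x7 only controls the width of the intermediate 1×7/7×1 layers. A quick check at the 17×17 stage:

inc_b = InceptionB(768, filter7x7=128)
x = torch.randn(1, 768, 17, 17)
print(inc_b(x).shape)   # torch.Size([1, 768, 17, 17])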

3.Inception-C

class InceptionC(nn.Module):
    def __init__(self, in_channel):
        super(InceptionC, self).__init__()
        # 1x1 conv
        self.conv1 = BasicConv2d(in_channel, 320, kernel_size=1)
        # 1x1 conv -> [1x3 conv, 3x1 conv]
        self.conv2 = BasicConv2d(in_channel, 384, kernel_size=1)
        self.conv2a = BasicConv2d(384, 384, kernel_size=(1,3), padding=(0,1))
        self.conv2b = BasicConv2d(384, 384, kernel_size=(3,1), padding=(1,0))
        # 1x1 conv -> 3x3 conv -> [1x3 conv, 3x1 conv]
        self.conv3 = nn.Sequential(
            BasicConv2d(in_channel, 448, kernel_size=1),
            BasicConv2d(448, 384, kernel_size=3, padding=1)
        )
        self.conv3a = BasicConv2d(384, 384, kernel_size=(1,3), padding=(0,1))
        self.conv3b = BasicConv2d(384, 384, kernel_size=(3,1), padding=(1,0))
        # 3x3 avgpool -> 1x1 conv
        self.conv4 = nn.Sequential(
            nn.AvgPool2d(kernel_size=3, stride=1, padding=1),
            BasicConv2d(in_channel, 192, kernel_size=1)
        )
        
    def forward(self, x):
        x1 = self.conv1(x)
        x2 = self.conv2(x)
        x2 = torch.cat([self.conv2a(x2),self.conv2b(x2)], 1)
        x3 = self.conv3(x)
        x3 = torch.cat([self.conv3a(x3),self.conv3b(x3)], 1)
        x4 = self.conv4(x)
        x = torch.cat([x1,x2,x3,x4], 1)
        return x
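
In Inception-C the second and third branches each split into a 1×3 and a 3×1 path whose outputs are concatenated, so the total output width is 320 + (384 + 384) + (384 + 384) + 192 = 2048 channels. A quick check at the 8×8 stage:

inc_c = InceptionC(1280)
x = torch.randn(1, 1280, 8, 8)
print(inc_c(x).shape)   # torch.Size([1, 2048, 8, 8])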

4.Reduction-A

class ReductionA(nn.Module):
    def __init__(self, in_channel):
        super(ReductionA, self).__init__()
        # 3x3 conv
        self.conv1 = BasicConv2d(in_channel, 384, kernel_size=3, stride=2)
        # 1x1 conv -> [3x3 conv]x2
        self.conv2 = nn.Sequential(
            BasicConv2d(in_channel, 64, kernel_size=1),
            BasicConv2d(64, 96, kernel_size=3, padding=1),
            BasicConv2d(96, 96, kernel_size=3, stride=2)
        )
        # 3x3 maxpool
        self.conv3 = nn.MaxPool2d(kernel_size=3, stride=2)
    
    def forward(self, x):
        x1 = self.conv1(x)
        x2 = self.conv2(x)
        x3 = self.conv3(x)
        x = torch.cat([x1,x2,x3], 1)
        return x
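
Reduction-A halves the spatial resolution (the stride-2 branches use no padding, so 35×35 becomes 17×17) and concatenates 384 + 96 + 288 = 768 channels when fed the 288-channel output of block1:

red_a = ReductionA(288)
x = torch.randn(1, 288, 35, 35)
print(red_a(x).shape)   # torch.Size([1, 768, 17, 17])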

5.Reduction-B

class ReductionB(nn.Module):
    def __init__(self, in_channel):
        super(ReductionB, self).__init__()
        # 1x1 conv -> 3x3 conv
        self.conv1 = nn.Sequential(
            BasicConv2d(in_channel, 192, kernel_size=1),
            BasicConv2d(192, 320, kernel_size=3, stride=2)
        )
        # 1x1 conv -> 1x7 conv -> 7x1 conv -> 3x3 conv
        self.conv2 = nn.Sequential(
            BasicConv2d(in_channel, 192, kernel_size=1),
            BasicConv2d(192, 192, kernel_size=(1,7), padding=(0,3)),
            BasicConv2d(192, 192, kernel_size=(7,1), padding=(3,0)),
            BasicConv2d(192, 192, kernel_size=3, stride=2)
        )
        # 3x3 maxpool
        self.conv3 = nn.MaxPool2d(kernel_size=3, stride=2)
    
    def forward(self, x):
        x1 = self.conv1(x)
        x2 = self.conv2(x)
        x3 = self.conv3(x)
        x = torch.cat([x1,x2,x3], 1)
        return x
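
Reduction-B likewise reduces the 17×17 feature maps to 8×8 and concatenates 320 + 192 + 768 = 1280 channels:

red_b = ReductionB(768)
x = torch.randn(1, 768, 17, 17)
print(red_b(x).shape)   # torch.Size([1, 1280, 8, 8])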

6. Auxiliary Classifier Branch

class InceptionAux(nn.Module):
    def __init__(self, in_channel, num_classes):
        super(InceptionAux, self).__init__()
        self.pool0 = nn.AvgPool2d(kernel_size=5, stride=3)
        self.conv1 = BasicConv2d(in_channel, 128, kernel_size=1)
        self.conv2 = BasicConv2d(128, 768, kernel_size=5)
        self.conv2.stddev = 0.01
        self.fc = nn.Linear(768, num_classes)
        self.fc.stddev = 0.001
    
    def forward(self, x):
        x = self.pool0(x)
        x = self.conv1(x)
        x = self.conv2(x)
        x = x.view(x.size(0), -1)
        x = self.fc(x)
        return x
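
The auxiliary head takes the 17×17×768 feature map produced after block2: the 5×5 average pool (stride 3) reduces it to 5×5, the 1×1 and 5×5 convolutions bring it down to a 768-dimensional vector, and the linear layer produces the class logits. The stddev attributes are only hints for a custom truncated-normal weight initialization and have no effect unless an init routine reads them. A quick check (batch size 2 so that BatchNorm works in training mode on the 1×1 output):

aux = InceptionAux(768, num_classes=4)
x = torch.randn(2, 768, 17, 17)
print(aux(x).shape)   # torch.Size([2, 4])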

7. Assembling the Model

''' InceptionV3 '''
class InceptionV3(nn.Module):
    def __init__(self,
                 input_shape=[3, 299, 299],  # Inception v3 expects 299x299 RGB inputs
                 classes=1000,  # number of output classes
                 aux_logits=False,
                 transform_input=False):
        super(InceptionV3, self).__init__()
        self.aux_logits = aux_logits
        self.transform_input = transform_input
        self.stem = nn.Sequential(
            BasicConv2d(3, 32, kernel_size=3, stride=2),
            BasicConv2d(32, 32, kernel_size=3),
            BasicConv2d(32, 64, kernel_size=3, padding=1),
            nn.MaxPool2d(kernel_size=3, stride=2),
            BasicConv2d(64, 80, kernel_size=1),
            BasicConv2d(80, 192, kernel_size=3),
            nn.MaxPool2d(kernel_size=3, stride=2)
        )
        self.block1 = nn.Sequential(
            InceptionA(192, pool_feature=32),
            InceptionA(256, pool_feature=64),
            InceptionA(288, pool_feature=64)
        )
        self.block2 = nn.Sequential(
            ReductionA(288),
            InceptionB(768, filter7x7=128),
            InceptionB(768, filter7x7=160),
            InceptionB(768, filter7x7=160),
            InceptionB(768, filter7x7=192)
        )
        if self.aux_logits:
            self.AuxLogits = InceptionAux(768, classes)
        self.block3 = nn.Sequential(
            ReductionB(768),
            InceptionC(1280),
            InceptionC(2048)
        )
        self.fc = nn.Linear(2048, classes)
    
    def forward(self, x):
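        # Optionally re-map inputs normalized with the ImageNet mean/std back to
        # the [-1, 1] range used by the original Inception preprocessing.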
        if self.transform_input:
            x = x.clone()
            x[:, 0] = x[:, 0] * (0.229 / 0.5) + (0.485 - 0.5) / 0.5
            x[:, 1] = x[:, 1] * (0.224 / 0.5) + (0.456 - 0.5) / 0.5
            x[:, 2] = x[:, 2] * (0.225 / 0.5) + (0.406 - 0.5) / 0.5
        x = self.stem(x)
        x = self.block1(x)
        x = self.block2(x)
        if self.training and self.aux_logits:
            aux = self.AuxLogits(x)
        x = self.block3(x)
        x = F.avg_pool2d(x, kernel_size=8)
        x = F.dropout(x, training=self.training)
        x = x.view(x.size(0), -1)
        x = self.fc(x)
        if self.training and self.aux_logits:
            return x, aux
        return x

8. Inspecting the Model
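
The layer summary and the module printout below were generated with a 4-class head and a 299×299 input. A minimal sketch, assuming the torchsummary package is installed (exact device handling may differ slightly between torchsummary versions):

from torchsummary import summary

device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = InceptionV3(classes=4).to(device)
summary(model, (3, 299, 299), device=device)  # prints the layer table below
print(model)                                  # prints the module tree below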

----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1         [-1, 32, 149, 149]             864
       BatchNorm2d-2         [-1, 32, 149, 149]              64
              ReLU-3         [-1, 32, 149, 149]               0
       BasicConv2d-4         [-1, 32, 149, 149]               0
            Conv2d-5         [-1, 32, 147, 147]           9,216
       BatchNorm2d-6         [-1, 32, 147, 147]              64
              ReLU-7         [-1, 32, 147, 147]               0
       BasicConv2d-8         [-1, 32, 147, 147]               0
            Conv2d-9         [-1, 64, 147, 147]          18,432
      BatchNorm2d-10         [-1, 64, 147, 147]             128
             ReLU-11         [-1, 64, 147, 147]               0
      BasicConv2d-12         [-1, 64, 147, 147]               0
        MaxPool2d-13           [-1, 64, 73, 73]               0
           Conv2d-14           [-1, 80, 73, 73]           5,120
      BatchNorm2d-15           [-1, 80, 73, 73]             160
             ReLU-16           [-1, 80, 73, 73]               0
      BasicConv2d-17           [-1, 80, 73, 73]               0
           Conv2d-18          [-1, 192, 71, 71]         138,240
      BatchNorm2d-19          [-1, 192, 71, 71]             384
             ReLU-20          [-1, 192, 71, 71]               0
      BasicConv2d-21          [-1, 192, 71, 71]               0
        MaxPool2d-22          [-1, 192, 35, 35]               0
           Conv2d-23           [-1, 64, 35, 35]          12,288
      BatchNorm2d-24           [-1, 64, 35, 35]             128
             ReLU-25           [-1, 64, 35, 35]               0
      BasicConv2d-26           [-1, 64, 35, 35]               0
           Conv2d-27           [-1, 48, 35, 35]           9,216
      BatchNorm2d-28           [-1, 48, 35, 35]              96
             ReLU-29           [-1, 48, 35, 35]               0
      BasicConv2d-30           [-1, 48, 35, 35]               0
           Conv2d-31           [-1, 64, 35, 35]          76,800
      BatchNorm2d-32           [-1, 64, 35, 35]             128
             ReLU-33           [-1, 64, 35, 35]               0
      BasicConv2d-34           [-1, 64, 35, 35]               0
           Conv2d-35           [-1, 64, 35, 35]          12,288
      BatchNorm2d-36           [-1, 64, 35, 35]             128
             ReLU-37           [-1, 64, 35, 35]               0
      BasicConv2d-38           [-1, 64, 35, 35]               0
           Conv2d-39           [-1, 96, 35, 35]          55,296
      BatchNorm2d-40           [-1, 96, 35, 35]             192
             ReLU-41           [-1, 96, 35, 35]               0
      BasicConv2d-42           [-1, 96, 35, 35]               0
           Conv2d-43           [-1, 96, 35, 35]          82,944
      BatchNorm2d-44           [-1, 96, 35, 35]             192
             ReLU-45           [-1, 96, 35, 35]               0
      BasicConv2d-46           [-1, 96, 35, 35]               0
        AvgPool2d-47          [-1, 192, 35, 35]               0
           Conv2d-48           [-1, 32, 35, 35]           6,144
      BatchNorm2d-49           [-1, 32, 35, 35]              64
             ReLU-50           [-1, 32, 35, 35]               0
      BasicConv2d-51           [-1, 32, 35, 35]               0
       InceptionA-52          [-1, 256, 35, 35]               0
           Conv2d-53           [-1, 64, 35, 35]          16,384
      BatchNorm2d-54           [-1, 64, 35, 35]             128
             ReLU-55           [-1, 64, 35, 35]               0
      BasicConv2d-56           [-1, 64, 35, 35]               0
           Conv2d-57           [-1, 48, 35, 35]          12,288
      BatchNorm2d-58           [-1, 48, 35, 35]              96
             ReLU-59           [-1, 48, 35, 35]               0
      BasicConv2d-60           [-1, 48, 35, 35]               0
           Conv2d-61           [-1, 64, 35, 35]          76,800
      BatchNorm2d-62           [-1, 64, 35, 35]             128
             ReLU-63           [-1, 64, 35, 35]               0
      BasicConv2d-64           [-1, 64, 35, 35]               0
           Conv2d-65           [-1, 64, 35, 35]          16,384
      BatchNorm2d-66           [-1, 64, 35, 35]             128
             ReLU-67           [-1, 64, 35, 35]               0
      BasicConv2d-68           [-1, 64, 35, 35]               0
           Conv2d-69           [-1, 96, 35, 35]          55,296
      BatchNorm2d-70           [-1, 96, 35, 35]             192
             ReLU-71           [-1, 96, 35, 35]               0
      BasicConv2d-72           [-1, 96, 35, 35]               0
           Conv2d-73           [-1, 96, 35, 35]          82,944
      BatchNorm2d-74           [-1, 96, 35, 35]             192
             ReLU-75           [-1, 96, 35, 35]               0
      BasicConv2d-76           [-1, 96, 35, 35]               0
        AvgPool2d-77          [-1, 256, 35, 35]               0
           Conv2d-78           [-1, 64, 35, 35]          16,384
      BatchNorm2d-79           [-1, 64, 35, 35]             128
             ReLU-80           [-1, 64, 35, 35]               0
      BasicConv2d-81           [-1, 64, 35, 35]               0
       InceptionA-82          [-1, 288, 35, 35]               0
           Conv2d-83           [-1, 64, 35, 35]          18,432
      BatchNorm2d-84           [-1, 64, 35, 35]             128
             ReLU-85           [-1, 64, 35, 35]               0
      BasicConv2d-86           [-1, 64, 35, 35]               0
           Conv2d-87           [-1, 48, 35, 35]          13,824
      BatchNorm2d-88           [-1, 48, 35, 35]              96
             ReLU-89           [-1, 48, 35, 35]               0
      BasicConv2d-90           [-1, 48, 35, 35]               0
           Conv2d-91           [-1, 64, 35, 35]          76,800
      BatchNorm2d-92           [-1, 64, 35, 35]             128
             ReLU-93           [-1, 64, 35, 35]               0
      BasicConv2d-94           [-1, 64, 35, 35]               0
           Conv2d-95           [-1, 64, 35, 35]          18,432
      BatchNorm2d-96           [-1, 64, 35, 35]             128
             ReLU-97           [-1, 64, 35, 35]               0
      BasicConv2d-98           [-1, 64, 35, 35]               0
           Conv2d-99           [-1, 96, 35, 35]          55,296
     BatchNorm2d-100           [-1, 96, 35, 35]             192
            ReLU-101           [-1, 96, 35, 35]               0
     BasicConv2d-102           [-1, 96, 35, 35]               0
          Conv2d-103           [-1, 96, 35, 35]          82,944
     BatchNorm2d-104           [-1, 96, 35, 35]             192
            ReLU-105           [-1, 96, 35, 35]               0
     BasicConv2d-106           [-1, 96, 35, 35]               0
       AvgPool2d-107          [-1, 288, 35, 35]               0
          Conv2d-108           [-1, 64, 35, 35]          18,432
     BatchNorm2d-109           [-1, 64, 35, 35]             128
            ReLU-110           [-1, 64, 35, 35]               0
     BasicConv2d-111           [-1, 64, 35, 35]               0
      InceptionA-112          [-1, 288, 35, 35]               0
          Conv2d-113          [-1, 384, 17, 17]         995,328
     BatchNorm2d-114          [-1, 384, 17, 17]             768
            ReLU-115          [-1, 384, 17, 17]               0
     BasicConv2d-116          [-1, 384, 17, 17]               0
          Conv2d-117           [-1, 64, 35, 35]          18,432
     BatchNorm2d-118           [-1, 64, 35, 35]             128
            ReLU-119           [-1, 64, 35, 35]               0
     BasicConv2d-120           [-1, 64, 35, 35]               0
          Conv2d-121           [-1, 96, 35, 35]          55,296
     BatchNorm2d-122           [-1, 96, 35, 35]             192
            ReLU-123           [-1, 96, 35, 35]               0
     BasicConv2d-124           [-1, 96, 35, 35]               0
          Conv2d-125           [-1, 96, 17, 17]          82,944
     BatchNorm2d-126           [-1, 96, 17, 17]             192
            ReLU-127           [-1, 96, 17, 17]               0
     BasicConv2d-128           [-1, 96, 17, 17]               0
       MaxPool2d-129          [-1, 288, 17, 17]               0
      ReductionA-130          [-1, 768, 17, 17]               0
          Conv2d-131          [-1, 192, 17, 17]         147,456
     BatchNorm2d-132          [-1, 192, 17, 17]             384
            ReLU-133          [-1, 192, 17, 17]               0
     BasicConv2d-134          [-1, 192, 17, 17]               0
          Conv2d-135          [-1, 128, 17, 17]          98,304
     BatchNorm2d-136          [-1, 128, 17, 17]             256
            ReLU-137          [-1, 128, 17, 17]               0
     BasicConv2d-138          [-1, 128, 17, 17]               0
          Conv2d-139          [-1, 128, 17, 17]         114,688
     BatchNorm2d-140          [-1, 128, 17, 17]             256
            ReLU-141          [-1, 128, 17, 17]               0
     BasicConv2d-142          [-1, 128, 17, 17]               0
          Conv2d-143          [-1, 192, 17, 17]         172,032
     BatchNorm2d-144          [-1, 192, 17, 17]             384
            ReLU-145          [-1, 192, 17, 17]               0
     BasicConv2d-146          [-1, 192, 17, 17]               0
          Conv2d-147          [-1, 128, 17, 17]          98,304
     BatchNorm2d-148          [-1, 128, 17, 17]             256
            ReLU-149          [-1, 128, 17, 17]               0
     BasicConv2d-150          [-1, 128, 17, 17]               0
          Conv2d-151          [-1, 128, 17, 17]         114,688
     BatchNorm2d-152          [-1, 128, 17, 17]             256
            ReLU-153          [-1, 128, 17, 17]               0
     BasicConv2d-154          [-1, 128, 17, 17]               0
          Conv2d-155          [-1, 128, 17, 17]         114,688
     BatchNorm2d-156          [-1, 128, 17, 17]             256
            ReLU-157          [-1, 128, 17, 17]               0
     BasicConv2d-158          [-1, 128, 17, 17]               0
          Conv2d-159          [-1, 128, 17, 17]         114,688
     BatchNorm2d-160          [-1, 128, 17, 17]             256
            ReLU-161          [-1, 128, 17, 17]               0
     BasicConv2d-162          [-1, 128, 17, 17]               0
          Conv2d-163          [-1, 192, 17, 17]         172,032
     BatchNorm2d-164          [-1, 192, 17, 17]             384
            ReLU-165          [-1, 192, 17, 17]               0
     BasicConv2d-166          [-1, 192, 17, 17]               0
       AvgPool2d-167          [-1, 768, 17, 17]               0
          Conv2d-168          [-1, 192, 17, 17]         147,456
     BatchNorm2d-169          [-1, 192, 17, 17]             384
            ReLU-170          [-1, 192, 17, 17]               0
     BasicConv2d-171          [-1, 192, 17, 17]               0
      InceptionB-172          [-1, 768, 17, 17]               0
          Conv2d-173          [-1, 192, 17, 17]         147,456
     BatchNorm2d-174          [-1, 192, 17, 17]             384
            ReLU-175          [-1, 192, 17, 17]               0
     BasicConv2d-176          [-1, 192, 17, 17]               0
          Conv2d-177          [-1, 160, 17, 17]         122,880
     BatchNorm2d-178          [-1, 160, 17, 17]             320
            ReLU-179          [-1, 160, 17, 17]               0
     BasicConv2d-180          [-1, 160, 17, 17]               0
          Conv2d-181          [-1, 160, 17, 17]         179,200
     BatchNorm2d-182          [-1, 160, 17, 17]             320
            ReLU-183          [-1, 160, 17, 17]               0
     BasicConv2d-184          [-1, 160, 17, 17]               0
          Conv2d-185          [-1, 192, 17, 17]         215,040
     BatchNorm2d-186          [-1, 192, 17, 17]             384
            ReLU-187          [-1, 192, 17, 17]               0
     BasicConv2d-188          [-1, 192, 17, 17]               0
          Conv2d-189          [-1, 160, 17, 17]         122,880
     BatchNorm2d-190          [-1, 160, 17, 17]             320
            ReLU-191          [-1, 160, 17, 17]               0
     BasicConv2d-192          [-1, 160, 17, 17]               0
          Conv2d-193          [-1, 160, 17, 17]         179,200
     BatchNorm2d-194          [-1, 160, 17, 17]             320
            ReLU-195          [-1, 160, 17, 17]               0
     BasicConv2d-196          [-1, 160, 17, 17]               0
          Conv2d-197          [-1, 160, 17, 17]         179,200
     BatchNorm2d-198          [-1, 160, 17, 17]             320
            ReLU-199          [-1, 160, 17, 17]               0
     BasicConv2d-200          [-1, 160, 17, 17]               0
          Conv2d-201          [-1, 160, 17, 17]         179,200
     BatchNorm2d-202          [-1, 160, 17, 17]             320
            ReLU-203          [-1, 160, 17, 17]               0
     BasicConv2d-204          [-1, 160, 17, 17]               0
          Conv2d-205          [-1, 192, 17, 17]         215,040
     BatchNorm2d-206          [-1, 192, 17, 17]             384
            ReLU-207          [-1, 192, 17, 17]               0
     BasicConv2d-208          [-1, 192, 17, 17]               0
       AvgPool2d-209          [-1, 768, 17, 17]               0
          Conv2d-210          [-1, 192, 17, 17]         147,456
     BatchNorm2d-211          [-1, 192, 17, 17]             384
            ReLU-212          [-1, 192, 17, 17]               0
     BasicConv2d-213          [-1, 192, 17, 17]               0
      InceptionB-214          [-1, 768, 17, 17]               0
          Conv2d-215          [-1, 192, 17, 17]         147,456
     BatchNorm2d-216          [-1, 192, 17, 17]             384
            ReLU-217          [-1, 192, 17, 17]               0
     BasicConv2d-218          [-1, 192, 17, 17]               0
          Conv2d-219          [-1, 160, 17, 17]         122,880
     BatchNorm2d-220          [-1, 160, 17, 17]             320
            ReLU-221          [-1, 160, 17, 17]               0
     BasicConv2d-222          [-1, 160, 17, 17]               0
          Conv2d-223          [-1, 160, 17, 17]         179,200
     BatchNorm2d-224          [-1, 160, 17, 17]             320
            ReLU-225          [-1, 160, 17, 17]               0
     BasicConv2d-226          [-1, 160, 17, 17]               0
          Conv2d-227          [-1, 192, 17, 17]         215,040
     BatchNorm2d-228          [-1, 192, 17, 17]             384
            ReLU-229          [-1, 192, 17, 17]               0
     BasicConv2d-230          [-1, 192, 17, 17]               0
          Conv2d-231          [-1, 160, 17, 17]         122,880
     BatchNorm2d-232          [-1, 160, 17, 17]             320
            ReLU-233          [-1, 160, 17, 17]               0
     BasicConv2d-234          [-1, 160, 17, 17]               0
          Conv2d-235          [-1, 160, 17, 17]         179,200
     BatchNorm2d-236          [-1, 160, 17, 17]             320
            ReLU-237          [-1, 160, 17, 17]               0
     BasicConv2d-238          [-1, 160, 17, 17]               0
          Conv2d-239          [-1, 160, 17, 17]         179,200
     BatchNorm2d-240          [-1, 160, 17, 17]             320
            ReLU-241          [-1, 160, 17, 17]               0
     BasicConv2d-242          [-1, 160, 17, 17]               0
          Conv2d-243          [-1, 160, 17, 17]         179,200
     BatchNorm2d-244          [-1, 160, 17, 17]             320
            ReLU-245          [-1, 160, 17, 17]               0
     BasicConv2d-246          [-1, 160, 17, 17]               0
          Conv2d-247          [-1, 192, 17, 17]         215,040
     BatchNorm2d-248          [-1, 192, 17, 17]             384
            ReLU-249          [-1, 192, 17, 17]               0
     BasicConv2d-250          [-1, 192, 17, 17]               0
       AvgPool2d-251          [-1, 768, 17, 17]               0
          Conv2d-252          [-1, 192, 17, 17]         147,456
     BatchNorm2d-253          [-1, 192, 17, 17]             384
            ReLU-254          [-1, 192, 17, 17]               0
     BasicConv2d-255          [-1, 192, 17, 17]               0
      InceptionB-256          [-1, 768, 17, 17]               0
          Conv2d-257          [-1, 192, 17, 17]         147,456
     BatchNorm2d-258          [-1, 192, 17, 17]             384
            ReLU-259          [-1, 192, 17, 17]               0
     BasicConv2d-260          [-1, 192, 17, 17]               0
          Conv2d-261          [-1, 192, 17, 17]         147,456
     BatchNorm2d-262          [-1, 192, 17, 17]             384
            ReLU-263          [-1, 192, 17, 17]               0
     BasicConv2d-264          [-1, 192, 17, 17]               0
          Conv2d-265          [-1, 192, 17, 17]         258,048
     BatchNorm2d-266          [-1, 192, 17, 17]             384
            ReLU-267          [-1, 192, 17, 17]               0
     BasicConv2d-268          [-1, 192, 17, 17]               0
          Conv2d-269          [-1, 192, 17, 17]         258,048
     BatchNorm2d-270          [-1, 192, 17, 17]             384
            ReLU-271          [-1, 192, 17, 17]               0
     BasicConv2d-272          [-1, 192, 17, 17]               0
          Conv2d-273          [-1, 192, 17, 17]         147,456
     BatchNorm2d-274          [-1, 192, 17, 17]             384
            ReLU-275          [-1, 192, 17, 17]               0
     BasicConv2d-276          [-1, 192, 17, 17]               0
          Conv2d-277          [-1, 192, 17, 17]         258,048
     BatchNorm2d-278          [-1, 192, 17, 17]             384
            ReLU-279          [-1, 192, 17, 17]               0
     BasicConv2d-280          [-1, 192, 17, 17]               0
          Conv2d-281          [-1, 192, 17, 17]         258,048
     BatchNorm2d-282          [-1, 192, 17, 17]             384
            ReLU-283          [-1, 192, 17, 17]               0
     BasicConv2d-284          [-1, 192, 17, 17]               0
          Conv2d-285          [-1, 192, 17, 17]         258,048
     BatchNorm2d-286          [-1, 192, 17, 17]             384
            ReLU-287          [-1, 192, 17, 17]               0
     BasicConv2d-288          [-1, 192, 17, 17]               0
          Conv2d-289          [-1, 192, 17, 17]         258,048
     BatchNorm2d-290          [-1, 192, 17, 17]             384
            ReLU-291          [-1, 192, 17, 17]               0
     BasicConv2d-292          [-1, 192, 17, 17]               0
       AvgPool2d-293          [-1, 768, 17, 17]               0
          Conv2d-294          [-1, 192, 17, 17]         147,456
     BatchNorm2d-295          [-1, 192, 17, 17]             384
            ReLU-296          [-1, 192, 17, 17]               0
     BasicConv2d-297          [-1, 192, 17, 17]               0
      InceptionB-298          [-1, 768, 17, 17]               0
          Conv2d-299          [-1, 192, 17, 17]         147,456
     BatchNorm2d-300          [-1, 192, 17, 17]             384
            ReLU-301          [-1, 192, 17, 17]               0
     BasicConv2d-302          [-1, 192, 17, 17]               0
          Conv2d-303            [-1, 320, 8, 8]         552,960
     BatchNorm2d-304            [-1, 320, 8, 8]             640
            ReLU-305            [-1, 320, 8, 8]               0
     BasicConv2d-306            [-1, 320, 8, 8]               0
          Conv2d-307          [-1, 192, 17, 17]         147,456
     BatchNorm2d-308          [-1, 192, 17, 17]             384
            ReLU-309          [-1, 192, 17, 17]               0
     BasicConv2d-310          [-1, 192, 17, 17]               0
          Conv2d-311          [-1, 192, 17, 17]         258,048
     BatchNorm2d-312          [-1, 192, 17, 17]             384
            ReLU-313          [-1, 192, 17, 17]               0
     BasicConv2d-314          [-1, 192, 17, 17]               0
          Conv2d-315          [-1, 192, 17, 17]         258,048
     BatchNorm2d-316          [-1, 192, 17, 17]             384
            ReLU-317          [-1, 192, 17, 17]               0
     BasicConv2d-318          [-1, 192, 17, 17]               0
          Conv2d-319            [-1, 192, 8, 8]         331,776
     BatchNorm2d-320            [-1, 192, 8, 8]             384
            ReLU-321            [-1, 192, 8, 8]               0
     BasicConv2d-322            [-1, 192, 8, 8]               0
       MaxPool2d-323            [-1, 768, 8, 8]               0
      ReductionB-324           [-1, 1280, 8, 8]               0
          Conv2d-325            [-1, 320, 8, 8]         409,600
     BatchNorm2d-326            [-1, 320, 8, 8]             640
            ReLU-327            [-1, 320, 8, 8]               0
     BasicConv2d-328            [-1, 320, 8, 8]               0
          Conv2d-329            [-1, 384, 8, 8]         491,520
     BatchNorm2d-330            [-1, 384, 8, 8]             768
            ReLU-331            [-1, 384, 8, 8]               0
     BasicConv2d-332            [-1, 384, 8, 8]               0
          Conv2d-333            [-1, 384, 8, 8]         442,368
     BatchNorm2d-334            [-1, 384, 8, 8]             768
            ReLU-335            [-1, 384, 8, 8]               0
     BasicConv2d-336            [-1, 384, 8, 8]               0
          Conv2d-337            [-1, 384, 8, 8]         442,368
     BatchNorm2d-338            [-1, 384, 8, 8]             768
            ReLU-339            [-1, 384, 8, 8]               0
     BasicConv2d-340            [-1, 384, 8, 8]               0
          Conv2d-341            [-1, 448, 8, 8]         573,440
     BatchNorm2d-342            [-1, 448, 8, 8]             896
            ReLU-343            [-1, 448, 8, 8]               0
     BasicConv2d-344            [-1, 448, 8, 8]               0
          Conv2d-345            [-1, 384, 8, 8]       1,548,288
     BatchNorm2d-346            [-1, 384, 8, 8]             768
            ReLU-347            [-1, 384, 8, 8]               0
     BasicConv2d-348            [-1, 384, 8, 8]               0
          Conv2d-349            [-1, 384, 8, 8]         442,368
     BatchNorm2d-350            [-1, 384, 8, 8]             768
            ReLU-351            [-1, 384, 8, 8]               0
     BasicConv2d-352            [-1, 384, 8, 8]               0
          Conv2d-353            [-1, 384, 8, 8]         442,368
     BatchNorm2d-354            [-1, 384, 8, 8]             768
            ReLU-355            [-1, 384, 8, 8]               0
     BasicConv2d-356            [-1, 384, 8, 8]               0
       AvgPool2d-357           [-1, 1280, 8, 8]               0
          Conv2d-358            [-1, 192, 8, 8]         245,760
     BatchNorm2d-359            [-1, 192, 8, 8]             384
            ReLU-360            [-1, 192, 8, 8]               0
     BasicConv2d-361            [-1, 192, 8, 8]               0
      InceptionC-362           [-1, 2048, 8, 8]               0
          Conv2d-363            [-1, 320, 8, 8]         655,360
     BatchNorm2d-364            [-1, 320, 8, 8]             640
            ReLU-365            [-1, 320, 8, 8]               0
     BasicConv2d-366            [-1, 320, 8, 8]               0
          Conv2d-367            [-1, 384, 8, 8]         786,432
     BatchNorm2d-368            [-1, 384, 8, 8]             768
            ReLU-369            [-1, 384, 8, 8]               0
     BasicConv2d-370            [-1, 384, 8, 8]               0
          Conv2d-371            [-1, 384, 8, 8]         442,368
     BatchNorm2d-372            [-1, 384, 8, 8]             768
            ReLU-373            [-1, 384, 8, 8]               0
     BasicConv2d-374            [-1, 384, 8, 8]               0
          Conv2d-375            [-1, 384, 8, 8]         442,368
     BatchNorm2d-376            [-1, 384, 8, 8]             768
            ReLU-377            [-1, 384, 8, 8]               0
     BasicConv2d-378            [-1, 384, 8, 8]               0
          Conv2d-379            [-1, 448, 8, 8]         917,504
     BatchNorm2d-380            [-1, 448, 8, 8]             896
            ReLU-381            [-1, 448, 8, 8]               0
     BasicConv2d-382            [-1, 448, 8, 8]               0
          Conv2d-383            [-1, 384, 8, 8]       1,548,288
     BatchNorm2d-384            [-1, 384, 8, 8]             768
            ReLU-385            [-1, 384, 8, 8]               0
     BasicConv2d-386            [-1, 384, 8, 8]               0
          Conv2d-387            [-1, 384, 8, 8]         442,368
     BatchNorm2d-388            [-1, 384, 8, 8]             768
            ReLU-389            [-1, 384, 8, 8]               0
     BasicConv2d-390            [-1, 384, 8, 8]               0
          Conv2d-391            [-1, 384, 8, 8]         442,368
     BatchNorm2d-392            [-1, 384, 8, 8]             768
            ReLU-393            [-1, 384, 8, 8]               0
     BasicConv2d-394            [-1, 384, 8, 8]               0
       AvgPool2d-395           [-1, 2048, 8, 8]               0
          Conv2d-396            [-1, 192, 8, 8]         393,216
     BatchNorm2d-397            [-1, 192, 8, 8]             384
            ReLU-398            [-1, 192, 8, 8]               0
     BasicConv2d-399            [-1, 192, 8, 8]               0
      InceptionC-400           [-1, 2048, 8, 8]               0
          Linear-401                    [-1, 4]           8,196
================================================================
Total params: 21,793,764
Trainable params: 21,793,764
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 1.02
Forward/backward pass size (MB): 313.22
Params size (MB): 83.14
Estimated Total Size (MB): 397.38
----------------------------------------------------------------
InceptionV3(
  (stem): Sequential(
    (0): BasicConv2d(
      (conv): Conv2d(3, 32, kernel_size=(3, 3), stride=(2, 2), bias=False)
      (norm): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (1): BasicConv2d(
      (conv): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), bias=False)
      (norm): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (2): BasicConv2d(
      (conv): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (norm): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (3): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)
    (4): BasicConv2d(
      (conv): Conv2d(64, 80, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (norm): BatchNorm2d(80, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (5): BasicConv2d(
      (conv): Conv2d(80, 192, kernel_size=(3, 3), stride=(1, 1), bias=False)
      (norm): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (6): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (block1): Sequential(
    (0): InceptionA(
      (conv1): BasicConv2d(
        (conv): Conv2d(192, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (norm): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
      )
      (conv2): Sequential(
        (0): BasicConv2d(
          (conv): Conv2d(192, 48, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (norm): BatchNorm2d(48, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
        (1): BasicConv2d(
          (conv): Conv2d(48, 64, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2), bias=False)
          (norm): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
      )
      (conv3): Sequential(
        (0): BasicConv2d(
          (conv): Conv2d(192, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (norm): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
        (1): BasicConv2d(
          (conv): Conv2d(64, 96, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (norm): BatchNorm2d(96, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
        (2): BasicConv2d(
          (conv): Conv2d(96, 96, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (norm): BatchNorm2d(96, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
      )
      (conv4): Sequential(
        (0): AvgPool2d(kernel_size=3, stride=1, padding=1)
        (1): BasicConv2d(
          (conv): Conv2d(192, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (norm): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
      )
    )
    (1): InceptionA(
      (conv1): BasicConv2d(
        (conv): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (norm): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
      )
      (conv2): Sequential(
        (0): BasicConv2d(
          (conv): Conv2d(256, 48, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (norm): BatchNorm2d(48, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
        (1): BasicConv2d(
          (conv): Conv2d(48, 64, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2), bias=False)
          (norm): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
      )
      (conv3): Sequential(
        (0): BasicConv2d(
          (conv): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (norm): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
        (1): BasicConv2d(
          (conv): Conv2d(64, 96, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (norm): BatchNorm2d(96, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
        (2): BasicConv2d(
          (conv): Conv2d(96, 96, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (norm): BatchNorm2d(96, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
      )
      (conv4): Sequential(
        (0): AvgPool2d(kernel_size=3, stride=1, padding=1)
        (1): BasicConv2d(
          (conv): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (norm): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
      )
    )
    (2): InceptionA(
      (conv1): BasicConv2d(
        (conv): Conv2d(288, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (norm): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
      )
      (conv2): Sequential(
        (0): BasicConv2d(
          (conv): Conv2d(288, 48, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (norm): BatchNorm2d(48, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
        (1): BasicConv2d(
          (conv): Conv2d(48, 64, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2), bias=False)
          (norm): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
      )
      (conv3): Sequential(
        (0): BasicConv2d(
          (conv): Conv2d(288, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (norm): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
        (1): BasicConv2d(
          (conv): Conv2d(64, 96, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (norm): BatchNorm2d(96, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
        (2): BasicConv2d(
          (conv): Conv2d(96, 96, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (norm): BatchNorm2d(96, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
      )
      (conv4): Sequential(
        (0): AvgPool2d(kernel_size=3, stride=1, padding=1)
        (1): BasicConv2d(
          (conv): Conv2d(288, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (norm): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
      )
    )
  )
  (block2): Sequential(
    (0): ReductionA(
      (conv1): BasicConv2d(
        (conv): Conv2d(288, 384, kernel_size=(3, 3), stride=(2, 2), bias=False)
        (norm): BatchNorm2d(384, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
      )
      (conv2): Sequential(
        (0): BasicConv2d(
          (conv): Conv2d(288, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (norm): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
        (1): BasicConv2d(
          (conv): Conv2d(64, 96, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (norm): BatchNorm2d(96, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
        (2): BasicConv2d(
          (conv): Conv2d(96, 96, kernel_size=(3, 3), stride=(2, 2), bias=False)
          (norm): BatchNorm2d(96, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
      )
      (conv3): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (1): InceptionB(
      (conv1): BasicConv2d(
        (conv): Conv2d(768, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (norm): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
      )
      (conv2): Sequential(
        (0): BasicConv2d(
          (conv): Conv2d(768, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (norm): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
        (1): BasicConv2d(
          (conv): Conv2d(128, 128, kernel_size=(1, 7), stride=(1, 1), padding=(0, 3), bias=False)
          (norm): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
        (2): BasicConv2d(
          (conv): Conv2d(128, 192, kernel_size=(7, 1), stride=(1, 1), padding=(3, 0), bias=False)
          (norm): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
      )
      (conv3): Sequential(
        (0): BasicConv2d(
          (conv): Conv2d(768, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (norm): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
        (1): BasicConv2d(
          (conv): Conv2d(128, 128, kernel_size=(7, 1), stride=(1, 1), padding=(3, 0), bias=False)
          (norm): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
        (2): BasicConv2d(
          (conv): Conv2d(128, 128, kernel_size=(1, 7), stride=(1, 1), padding=(0, 3), bias=False)
          (norm): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
        (3): BasicConv2d(
          (conv): Conv2d(128, 128, kernel_size=(7, 1), stride=(1, 1), padding=(3, 0), bias=False)
          (norm): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
        (4): BasicConv2d(
          (conv): Conv2d(128, 192, kernel_size=(1, 7), stride=(1, 1), padding=(0, 3), bias=False)
          (norm): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
      )
      (conv4): Sequential(
        (0): AvgPool2d(kernel_size=3, stride=1, padding=1)
        (1): BasicConv2d(
          (conv): Conv2d(768, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (norm): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
      )
    )
    (2): InceptionB(
      (conv1): BasicConv2d(
        (conv): Conv2d(768, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (norm): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
      )
      (conv2): Sequential(
        (0): BasicConv2d(
          (conv): Conv2d(768, 160, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (norm): BatchNorm2d(160, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
        (1): BasicConv2d(
          (conv): Conv2d(160, 160, kernel_size=(1, 7), stride=(1, 1), padding=(0, 3), bias=False)
          (norm): BatchNorm2d(160, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
        (2): BasicConv2d(
          (conv): Conv2d(160, 192, kernel_size=(7, 1), stride=(1, 1), padding=(3, 0), bias=False)
          (norm): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
      )
      (conv3): Sequential(
        (0): BasicConv2d(
          (conv): Conv2d(768, 160, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (norm): BatchNorm2d(160, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
        (1): BasicConv2d(
          (conv): Conv2d(160, 160, kernel_size=(7, 1), stride=(1, 1), padding=(3, 0), bias=False)
          (norm): BatchNorm2d(160, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
        (2): BasicConv2d(
          (conv): Conv2d(160, 160, kernel_size=(1, 7), stride=(1, 1), padding=(0, 3), bias=False)
          (norm): BatchNorm2d(160, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
        (3): BasicConv2d(
          (conv): Conv2d(160, 160, kernel_size=(7, 1), stride=(1, 1), padding=(3, 0), bias=False)
          (norm): BatchNorm2d(160, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
        (4): BasicConv2d(
          (conv): Conv2d(160, 192, kernel_size=(1, 7), stride=(1, 1), padding=(0, 3), bias=False)
          (norm): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
      )
      (conv4): Sequential(
        (0): AvgPool2d(kernel_size=3, stride=1, padding=1)
        (1): BasicConv2d(
          (conv): Conv2d(768, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (norm): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
      )
    )
    (3): InceptionB(
      (conv1): BasicConv2d(
        (conv): Conv2d(768, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (norm): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
      )
      (conv2): Sequential(
        (0): BasicConv2d(
          (conv): Conv2d(768, 160, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (norm): BatchNorm2d(160, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
        (1): BasicConv2d(
          (conv): Conv2d(160, 160, kernel_size=(1, 7), stride=(1, 1), padding=(0, 3), bias=False)
          (norm): BatchNorm2d(160, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
        (2): BasicConv2d(
          (conv): Conv2d(160, 192, kernel_size=(7, 1), stride=(1, 1), padding=(3, 0), bias=False)
          (norm): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
      )
      (conv3): Sequential(
        (0): BasicConv2d(
          (conv): Conv2d(768, 160, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (norm): BatchNorm2d(160, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
        (1): BasicConv2d(
          (conv): Conv2d(160, 160, kernel_size=(7, 1), stride=(1, 1), padding=(3, 0), bias=False)
          (norm): BatchNorm2d(160, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
        (2): BasicConv2d(
          (conv): Conv2d(160, 160, kernel_size=(1, 7), stride=(1, 1), padding=(0, 3), bias=False)
          (norm): BatchNorm2d(160, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
        (3): BasicConv2d(
          (conv): Conv2d(160, 160, kernel_size=(7, 1), stride=(1, 1), padding=(3, 0), bias=False)
          (norm): BatchNorm2d(160, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
        (4): BasicConv2d(
          (conv): Conv2d(160, 192, kernel_size=(1, 7), stride=(1, 1), padding=(0, 3), bias=False)
          (norm): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
      )
      (conv4): Sequential(
        (0): AvgPool2d(kernel_size=3, stride=1, padding=1)
        (1): BasicConv2d(
          (conv): Conv2d(768, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (norm): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
      )
    )
    (4): InceptionB(
      (conv1): BasicConv2d(
        (conv): Conv2d(768, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (norm): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
      )
      (conv2): Sequential(
        (0): BasicConv2d(
          (conv): Conv2d(768, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (norm): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
        (1): BasicConv2d(
          (conv): Conv2d(192, 192, kernel_size=(1, 7), stride=(1, 1), padding=(0, 3), bias=False)
          (norm): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
        (2): BasicConv2d(
          (conv): Conv2d(192, 192, kernel_size=(7, 1), stride=(1, 1), padding=(3, 0), bias=False)
          (norm): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
      )
      (conv3): Sequential(
        (0): BasicConv2d(
          (conv): Conv2d(768, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (norm): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
        (1): BasicConv2d(
          (conv): Conv2d(192, 192, kernel_size=(7, 1), stride=(1, 1), padding=(3, 0), bias=False)
          (norm): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
        (2): BasicConv2d(
          (conv): Conv2d(192, 192, kernel_size=(1, 7), stride=(1, 1), padding=(0, 3), bias=False)
          (norm): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
        (3): BasicConv2d(
          (conv): Conv2d(192, 192, kernel_size=(7, 1), stride=(1, 1), padding=(3, 0), bias=False)
          (norm): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
        (4): BasicConv2d(
          (conv): Conv2d(192, 192, kernel_size=(1, 7), stride=(1, 1), padding=(0, 3), bias=False)
          (norm): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
      )
      (conv4): Sequential(
        (0): AvgPool2d(kernel_size=3, stride=1, padding=1)
        (1): BasicConv2d(
          (conv): Conv2d(768, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (norm): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
      )
    )
  )
  (block3): Sequential(
    (0): ReductionB(
      (conv1): Sequential(
        (0): BasicConv2d(
          (conv): Conv2d(768, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (norm): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
        (1): BasicConv2d(
          (conv): Conv2d(192, 320, kernel_size=(3, 3), stride=(2, 2), bias=False)
          (norm): BatchNorm2d(320, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
      )
      (conv2): Sequential(
        (0): BasicConv2d(
          (conv): Conv2d(768, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (norm): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
        (1): BasicConv2d(
          (conv): Conv2d(192, 192, kernel_size=(1, 7), stride=(1, 1), padding=(0, 3), bias=False)
          (norm): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
        (2): BasicConv2d(
          (conv): Conv2d(192, 192, kernel_size=(7, 1), stride=(1, 1), padding=(3, 0), bias=False)
          (norm): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
        (3): BasicConv2d(
          (conv): Conv2d(192, 192, kernel_size=(3, 3), stride=(2, 2), bias=False)
          (norm): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
      )
      (conv3): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)
    )
    (1): InceptionC(
      (conv1): BasicConv2d(
        (conv): Conv2d(1280, 320, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (norm): BatchNorm2d(320, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
      )
      (conv2): BasicConv2d(
        (conv): Conv2d(1280, 384, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (norm): BatchNorm2d(384, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
      )
      (conv2a): BasicConv2d(
        (conv): Conv2d(384, 384, kernel_size=(1, 3), stride=(1, 1), padding=(0, 1), bias=False)
        (norm): BatchNorm2d(384, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
      )
      (conv2b): BasicConv2d(
        (conv): Conv2d(384, 384, kernel_size=(3, 1), stride=(1, 1), padding=(1, 0), bias=False)
        (norm): BatchNorm2d(384, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
      )
      (conv3): Sequential(
        (0): BasicConv2d(
          (conv): Conv2d(1280, 448, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (norm): BatchNorm2d(448, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
        (1): BasicConv2d(
          (conv): Conv2d(448, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (norm): BatchNorm2d(384, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
      )
      (conv3a): BasicConv2d(
        (conv): Conv2d(384, 384, kernel_size=(1, 3), stride=(1, 1), padding=(0, 1), bias=False)
        (norm): BatchNorm2d(384, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
      )
      (conv3b): BasicConv2d(
        (conv): Conv2d(384, 384, kernel_size=(3, 1), stride=(1, 1), padding=(1, 0), bias=False)
        (norm): BatchNorm2d(384, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
      )
      (conv4): Sequential(
        (0): AvgPool2d(kernel_size=3, stride=1, padding=1)
        (1): BasicConv2d(
          (conv): Conv2d(1280, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (norm): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
      )
    )
    (2): InceptionC(
      (conv1): BasicConv2d(
        (conv): Conv2d(2048, 320, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (norm): BatchNorm2d(320, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
      )
      (conv2): BasicConv2d(
        (conv): Conv2d(2048, 384, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (norm): BatchNorm2d(384, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
      )
      (conv2a): BasicConv2d(
        (conv): Conv2d(384, 384, kernel_size=(1, 3), stride=(1, 1), padding=(0, 1), bias=False)
        (norm): BatchNorm2d(384, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
      )
      (conv2b): BasicConv2d(
        (conv): Conv2d(384, 384, kernel_size=(3, 1), stride=(1, 1), padding=(1, 0), bias=False)
        (norm): BatchNorm2d(384, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
      )
      (conv3): Sequential(
        (0): BasicConv2d(
          (conv): Conv2d(2048, 448, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (norm): BatchNorm2d(448, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
        (1): BasicConv2d(
          (conv): Conv2d(448, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (norm): BatchNorm2d(384, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
      )
      (conv3a): BasicConv2d(
        (conv): Conv2d(384, 384, kernel_size=(1, 3), stride=(1, 1), padding=(0, 1), bias=False)
        (norm): BatchNorm2d(384, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
      )
      (conv3b): BasicConv2d(
        (conv): Conv2d(384, 384, kernel_size=(3, 1), stride=(1, 1), padding=(1, 0), bias=False)
        (norm): BatchNorm2d(384, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
      )
      (conv4): Sequential(
        (0): AvgPool2d(kernel_size=3, stride=1, padding=1)
        (1): BasicConv2d(
          (conv): Conv2d(2048, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (norm): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU(inplace=True)
        )
      )
    )
  )
  (fc): Linear(in_features=2048, out_features=4, bias=True)
)