GoogLeNet 1D and 2D Reproduction in PyTorch

GoogLeNet Reproduction

Two famous networks were born in 2014: one is VGG and the other is GoogLeNet. Up to and including VGG, models had essentially only grown in the vertical direction (depth). GoogLeNet, while increasing depth, also explored width: it concatenates the outputs of convolutional layers with different kernel sizes side by side, so that features of different scales are attended to at the same time.



LeNet-AlexNet-ZFNet: LeNet-AlexNet-ZFNet 1D/2D reproduction in PyTorch
VGG: VGG 1D/2D reproduction in PyTorch
GoogLeNet: GoogLeNet 1D/2D reproduction in PyTorch
ResNet: ResNet 1D/2D reproduction in PyTorch, with an analysis of how to reproduce the residual block
DenseNet: DenseNet 1D/2D reproduction in PyTorch
Squeeze: SqueezeNet 1D/2D reproduction in PyTorch
MobileNet: |Building Networks from Scratch| MobileNet series explained and built (contributed by a junior student)
Mnasnet: |Building Networks from Scratch| Mnasnet explained and built in PyTorch (contributed by a junior student)
ShuffleNet: |Building Networks from Scratch| ShuffleNet series explained and built (contributed by a junior student)
EfficientNet: |Building Networks from Scratch| EfficientNet explained and built (contributed by a junior student)

Below is the link to my reproductions of all the classic 1D convolutional neural network models.
Link: https://github.com/StChenHaoGitHub/1D-deeplearning-model-pytorch.git

The 1D model training template code I wrote myself is also open source:
https://github.com/StChenHaoGitHub/1D_Pytorch_Train_demo.git
A blog post explaining the training code is also available.

GoogLeNet original paper: Going deeper with convolutions

The network structure is shown in the figure below. There are 9 Inception blocks and 3 classifiers in total. Because there are three classifiers, the losses of the first and second (auxiliary) classifiers are each multiplied by 0.3 and added to the loss of the third, final classifier. The slightly tricky part of reproducing GoogLeNet is that this requires writing a custom loss function; most other reproductions simply train on the loss of the last layer only and therefore do not reproduce the paper completely.
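
In other words, if aux0_loss and aux1_loss denote the losses of the two auxiliary classifiers and final_loss the loss of the last classifier, the quantity back-propagated during training is, schematically (the variable names here are only illustrative):

total_loss = final_loss + 0.3 * (aux0_loss + aux1_loss)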

[Figure: GoogLeNet network structure diagram]

The Inception block

The paper gives two versions of the Inception block:
[Figure: the two Inception versions, (a) naive and (b) with dimension reductions]
Version (a) is the original one and version (b) is the one with a reduced parameter count. Version (a) exists almost only in theory; in practice version (b) is what is actually used, so the discussion below focuses on the (b) version of the Inception block.
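
To see why version (b) saves parameters, take the kernel-size-5 branch of the first 1D Inception block (192 input channels, 16 reduce channels, 32 output channels) as a quick worked example; the two counts below also appear in the torchsummary output later in this post:

A direct Conv1d(192, 32, kernel_size=5) would need 192 × 32 × 5 + 32 = 30,752 parameters.
With the 1×1 reduce first:
Conv1d(192, 16, kernel_size=1): 192 × 16 + 16 = 3,088 parameters
Conv1d(16, 32, kernel_size=5): 16 × 32 × 5 + 32 = 2,592 parameters
Total: 5,680 parameters, roughly a fifth of the direct version.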

The Inception block takes seven parameters, grouped into five parts:
1. The number of input channels.
2. The number of output channels of the leftmost convolution with kernel size 1.
3. For the second branch from the left, the transition (reduce) channel count between its two convolutions and its output channel count.
4. For the second branch from the right, the transition (reduce) channel count between its two convolutions and its output channel count.
5. The output channel count of the convolution layer connected to the rightmost pooling layer.

class Inception(torch.nn.Module):
    def __init__(self,in_channels=56,ch1=64,ch3_reduce=96,ch3=128,ch5_reduce=16,ch5=32,pool_proj=32):
        super(Inception, self).__init__()

        self.branch1 = torch.nn.Sequential(
            torch.nn.Conv1d(in_channels,ch1,kernel_size=1),
            torch.nn.BatchNorm1d(ch1)
        )

        self.branch3 = torch.nn.Sequential(
            torch.nn.Conv1d(in_channels, ch3_reduce, kernel_size=1),
            torch.nn.BatchNorm1d(ch3_reduce),
            torch.nn.Conv1d(ch3_reduce, ch3, kernel_size=3, padding=1),
            torch.nn.BatchNorm1d(ch3),
        )

        self.branch5 = torch.nn.Sequential(
            torch.nn.Conv1d(in_channels, ch5_reduce, kernel_size=1),
            torch.nn.BatchNorm1d(ch5_reduce),
            torch.nn.Conv1d(ch5_reduce, ch5, kernel_size=5, padding=2),
            torch.nn.BatchNorm1d(ch5),
        )

        self.branch_pool = torch.nn.Sequential(
            torch.nn.MaxPool1d(kernel_size=3,stride=1,padding=1),
            torch.nn.Conv1d(in_channels, pool_proj, kernel_size=1)
        )

    def forward(self,x):
        return torch.cat([self.branch1(x),self.branch3(x),self.branch5(x),self.branch_pool(x)],1)

Finally, the outputs of all branches are concatenated along the second dimension, i.e. dimension 1 (dimension 0 is the batch and dimension 2 is the sample points).
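
As a quick sanity check, the number of output channels is simply ch1 + ch3 + ch5 + pool_proj. A minimal sketch using the parameters of the first Inception block from the parameter table (the batch size and sample-point count here are arbitrary):

x = torch.randn(8, 192, 28)                       # (batch, channels, sample points)
block = Inception(192, 64, 96, 128, 16, 32, 32)   # first Inception block of GoogLeNet
print(block(x).shape)                             # torch.Size([8, 256, 28]), since 64 + 128 + 32 + 32 = 256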

The GoogLeNet network

Below is the parameter table for the network as given in the paper:
[Table: GoogLeNet parameter configuration from the paper]
The reduce parameters in the table are the transition channel counts. Following the parameter table given in the paper and adding the three classifiers, the reproduction code is as follows.

1D GoogLeNet model implementation

import torch

class Inception(torch.nn.Module):
    def __init__(self,in_channels=56,ch1=64,ch3_reduce=96,ch3=128,ch5_reduce=16,ch5=32,pool_proj=32):
        super(Inception, self).__init__()

        self.branch1 = torch.nn.Sequential(
            torch.nn.Conv1d(in_channels,ch1,kernel_size=1),
            torch.nn.BatchNorm1d(ch1)
        )

        self.branch3 = torch.nn.Sequential(
            torch.nn.Conv1d(in_channels, ch3_reduce, kernel_size=1),
            torch.nn.BatchNorm1d(ch3_reduce),
            torch.nn.Conv1d(ch3_reduce, ch3, kernel_size=3, padding=1),
            torch.nn.BatchNorm1d(ch3),
        )

        self.branch5 = torch.nn.Sequential(
            torch.nn.Conv1d(in_channels, ch5_reduce, kernel_size=1),
            torch.nn.BatchNorm1d(ch5_reduce),
            torch.nn.Conv1d(ch5_reduce, ch5, kernel_size=5, padding=2),
            torch.nn.BatchNorm1d(ch5),
        )

        self.branch_pool = torch.nn.Sequential(
            torch.nn.MaxPool1d(kernel_size=3,stride=1,padding=1),
            torch.nn.Conv1d(in_channels, pool_proj, kernel_size=1)
        )

    def forward(self,x):
        return torch.cat([self.branch1(x),self.branch3(x),self.branch5(x),self.branch_pool(x)],1)



class GoogLeNet(torch.nn.Module):
    def __init__(self,in_channels=2,in_sample_points=224,classes=5):
        super(GoogLeNet, self).__init__()

        self.features=torch.nn.Sequential(
            torch.nn.Linear(in_sample_points,224),
            torch.nn.Conv1d(in_channels,64,kernel_size=7,stride=2,padding=3),
            torch.nn.MaxPool1d(3,2,padding=1),
            torch.nn.Conv1d(64,192,3,padding=1),
            torch.nn.MaxPool1d(3,2,padding=1),
            Inception(192,64,96,128,16,32,32),
            Inception(256,128,128,192,32,96,64),
            torch.nn.MaxPool1d(3,2,padding=1),
            Inception(480,192,96,208,16,48,64),
        )

        self.classifer_max_pool = torch.nn.MaxPool1d(5,3)

        self.classifer = torch.nn.Sequential(
            torch.nn.Linear(2048,1024),
            torch.nn.Dropout(0.5),
            torch.nn.ReLU(),
            torch.nn.Linear(1024,512),
            torch.nn.Dropout(0.5),
            torch.nn.ReLU(),
            torch.nn.Linear(512,classes),
        )

        self.Inception_4b = Inception(512,160,112,224,24,64,64)
        self.Inception_4c = Inception(512,128,128,256,24,64,64)
        self.Inception_4d = Inception(512,112,144,288,32,64,64)


        self.classifer1 = torch.nn.Sequential(
            torch.nn.Linear(2112,1056),
            torch.nn.Dropout(0.5),
            torch.nn.ReLU(),
            torch.nn.Linear(1056,528),
            torch.nn.Dropout(0.5),
            torch.nn.ReLU(),
            torch.nn.Linear(528,classes),
        )

        self.Inception_4e = Inception(528,256,160,320,32,128,128)
        self.max_pool = torch.nn.MaxPool1d(3,2,1)

        self.Inception_5a = Inception(832,256,160,320,32,128,128)
        self.Inception_5b = Inception(832,384,192,384,48,128,128)

        self.avg_pool = torch.nn.AvgPool1d(7,stride=1)
        self.dropout = torch.nn.Dropout(0.4)
        self.classifer2 = torch.nn.Sequential(
            torch.nn.Linear(1024, 512),
            torch.nn.Dropout(0.5),
            torch.nn.ReLU(),
            torch.nn.Linear(512, classes),
        )


    def forward(self,x):
        x = self.features(x)

        y = self.classifer(self.classifer_max_pool(x).view(-1,2048))

        x = self.Inception_4b(x)
        x = self.Inception_4c(x)
        x = self.Inception_4d(x)

        y1 = self.classifer1(self.classifer_max_pool(x).view(-1,2112))

        x = self.Inception_4e(x)
        x = self.max_pool(x)
        x = self.Inception_5a(x)
        x = self.Inception_5b(x)
        x = self.avg_pool(x)
        x = self.dropout(x)
        x = x.view(-1,1024)
        x = self.classifer2(x)

        return x,y,y1


Here x is the output of the softmax2 classifier, y1 is the output of the softmax1 classifier, and y is the output of the softmax0 classifier.
What is special here is that three outputs are returned, so a matching loss function is needed.

GoogLeNet-specific loss function

class GoogLeNetLoss(torch.nn.Module):
    def __init__(self):
        super(GoogLeNetLoss, self).__init__()
        self.CrossEntropyLoss = torch.nn.CrossEntropyLoss()

    def forward(self,data,label):
        c2_loss = self.CrossEntropyLoss(data[0],label)
        c0_loss = self.CrossEntropyLoss(data[1],label)
        c1_loss = self.CrossEntropyLoss(data[2],label)

        loss = c2_loss + 0.3*(c0_loss+c1_loss)

        return loss
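
A minimal sketch of how this loss could be used in a single training step (the optimizer, batch size and dummy data here are placeholders chosen for illustration, not part of the original post):

model = GoogLeNet(in_channels=2, in_sample_points=224, classes=5)
criterion = GoogLeNetLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(8, 2, 224)          # dummy batch: (batch, channels, sample points)
label = torch.randint(0, 5, (8,))   # dummy class labels

optimizer.zero_grad()
outputs = model(x)                  # (softmax2, softmax0, softmax1) outputs
loss = criterion(outputs, label)    # final loss + 0.3 * (sum of the auxiliary losses)
loss.backward()
optimizer.step()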

Complete code

import torch
from torchsummary import summary

class Inception(torch.nn.Module):
    def __init__(self,in_channels=56,ch1=64,ch3_reduce=96,ch3=128,ch5_reduce=16,ch5=32,pool_proj=32):
        super(Inception, self).__init__()

        self.branch1 = torch.nn.Sequential(
            torch.nn.Conv1d(in_channels,ch1,kernel_size=1),
            torch.nn.BatchNorm1d(ch1)
        )

        self.branch3 = torch.nn.Sequential(
            torch.nn.Conv1d(in_channels, ch3_reduce, kernel_size=1),
            torch.nn.BatchNorm1d(ch3_reduce),
            torch.nn.Conv1d(ch3_reduce, ch3, kernel_size=3, padding=1),
            torch.nn.BatchNorm1d(ch3),
        )

        self.branch5 = torch.nn.Sequential(
            torch.nn.Conv1d(in_channels, ch5_reduce, kernel_size=1),
            torch.nn.BatchNorm1d(ch5_reduce),
            torch.nn.Conv1d(ch5_reduce, ch5, kernel_size=5, padding=2),
            torch.nn.BatchNorm1d(ch5),
        )

        self.branch_pool = torch.nn.Sequential(
            torch.nn.MaxPool1d(kernel_size=3,stride=1,padding=1),
            torch.nn.Conv1d(in_channels, pool_proj, kernel_size=1)
        )

    def forward(self,x):
        return torch.cat([self.branch1(x),self.branch3(x),self.branch5(x),self.branch_pool(x)],1)



class GoogLeNet(torch.nn.Module):
    def __init__(self,in_channels=2,in_sample_points=224,classes=5):
        super(GoogLeNet, self).__init__()

        self.features=torch.nn.Sequential(
            torch.nn.Linear(in_sample_points,224),
            torch.nn.Conv1d(in_channels,64,kernel_size=7,stride=2,padding=3),
            torch.nn.MaxPool1d(3,2,padding=1),
            torch.nn.Conv1d(64,192,3,padding=1),
            torch.nn.MaxPool1d(3,2,padding=1),
            Inception(192,64,96,128,16,32,32),
            Inception(256,128,128,192,32,96,64),
            torch.nn.MaxPool1d(3,2,padding=1),
            Inception(480,192,96,208,16,48,64),
        )



        self.classifer_max_pool = torch.nn.MaxPool1d(5,3)

        self.classifer = torch.nn.Sequential(
            torch.nn.Linear(2048,1024),
            torch.nn.Dropout(0.5),
            torch.nn.ReLU(),
            torch.nn.Linear(1024,512),
            torch.nn.Dropout(0.5),
            torch.nn.ReLU(),
            torch.nn.Linear(512,classes),
        )

        self.Inception_4b = Inception(512,160,112,224,24,64,64)
        self.Inception_4c = Inception(512,128,128,256,24,64,64)
        self.Inception_4d = Inception(512,112,144,288,32,64,64)


        self.classifer1 = torch.nn.Sequential(
            torch.nn.Linear(2112,1056),
            torch.nn.Dropout(0.5),
            torch.nn.ReLU(),
            torch.nn.Linear(1056,528),
            torch.nn.Dropout(0.5),
            torch.nn.ReLU(),
            torch.nn.Linear(528,classes),
        )

        self.Inception_4e = Inception(528,256,160,320,32,128,128)
        self.max_pool = torch.nn.MaxPool1d(3,2,1)

        self.Inception_5a = Inception(832,256,160,320,32,128,128)
        self.Inception_5b = Inception(832,384,192,384,48,128,128)

        self.avg_pool = torch.nn.AvgPool1d(7,stride=1)
        self.dropout = torch.nn.Dropout(0.4)
        self.classifer2 = torch.nn.Sequential(
            torch.nn.Linear(1024, 512),
            torch.nn.Dropout(0.5),
            torch.nn.ReLU(),
            torch.nn.Linear(512, classes),
        )


    def forward(self,x):
        x = self.features(x)

        y = self.classifer(self.classifer_max_pool(x).view(-1,2048))

        x = self.Inception_4b(x)
        x = self.Inception_4c(x)
        x = self.Inception_4d(x)

        y1 = self.classifer1(self.classifer_max_pool(x).view(-1,2112))

        x = self.Inception_4e(x)
        x = self.max_pool(x)
        x = self.Inception_5a(x)
        x = self.Inception_5b(x)
        x = self.avg_pool(x)
        x = self.dropout(x)
        x = x.view(-1,1024)
        x = self.classifer2(x)

        return x,y,y1

class GoogLeNetLoss(torch.nn.Module):
    def __init__(self):
        super(GoogLeNetLoss, self).__init__()
        self.CrossEntropyLoss = torch.nn.CrossEntropyLoss()

    def forward(self,data,label):
        c2_loss = self.CrossEntropyLoss(data[0],label)
        c0_loss = self.CrossEntropyLoss(data[1],label)
        c1_loss = self.CrossEntropyLoss(data[2],label)

        loss = c2_loss + 0.3*(c0_loss+c1_loss)

        return loss




if __name__ == '__main__':
    model = GoogLeNet()
    input = torch.randn(size=(2,2,224))
    # [c2,c0,c1] = model(input)
    output = model(input)
    criterion = GoogLeNetLoss()
    label = torch.tensor([1,0])
    print(f"损失为:{criterion(output,label)}")
    print(f"输出结果为{output}")
    print(model)
    summary(model=model, input_size=(2, 224), device='cpu')

The output is as follows:

Loss: 2.53948974609375
Output: (tensor([[-0.0974, -0.0311,  0.0815, -0.0201, -0.1416],
        [ 0.3268, -0.1535, -0.0960, -0.0373,  0.0472]],
       grad_fn=<AddmmBackward>), tensor([[ 0.2001, -0.4010, -0.0270, -0.5973,  0.4724],
        [-0.0031, -0.0116,  0.2749, -0.0630, -0.1351]],
       grad_fn=<AddmmBackward>), tensor([[ 0.0689, -0.1227,  0.3872, -0.3770, -0.0234],
        [ 0.0773,  0.8251,  0.1869, -0.2420,  0.2121]],
       grad_fn=<AddmmBackward>))
GoogLeNet(
  (features): Sequential(
    (0): Linear(in_features=224, out_features=224, bias=True)
    (1): Conv1d(2, 64, kernel_size=(7,), stride=(2,), padding=(3,))
    (2): MaxPool1d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
    (3): Conv1d(64, 192, kernel_size=(3,), stride=(1,), padding=(1,))
    (4): MaxPool1d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
    (5): Inception(
      (branch1): Sequential(
        (0): Conv1d(192, 64, kernel_size=(1,), stride=(1,))
        (1): BatchNorm1d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
      (branch3): Sequential(
        (0): Conv1d(192, 96, kernel_size=(1,), stride=(1,))
        (1): BatchNorm1d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): Conv1d(96, 128, kernel_size=(3,), stride=(1,), padding=(1,))
        (3): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
      (branch5): Sequential(
        (0): Conv1d(192, 16, kernel_size=(1,), stride=(1,))
        (1): BatchNorm1d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): Conv1d(16, 32, kernel_size=(5,), stride=(1,), padding=(2,))
        (3): BatchNorm1d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
      (branch_pool): Sequential(
        (0): MaxPool1d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=False)
        (1): Conv1d(192, 32, kernel_size=(1,), stride=(1,))
      )
    )
    (6): Inception(
      (branch1): Sequential(
        (0): Conv1d(256, 128, kernel_size=(1,), stride=(1,))
        (1): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
      (branch3): Sequential(
        (0): Conv1d(256, 128, kernel_size=(1,), stride=(1,))
        (1): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): Conv1d(128, 192, kernel_size=(3,), stride=(1,), padding=(1,))
        (3): BatchNorm1d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
      (branch5): Sequential(
        (0): Conv1d(256, 32, kernel_size=(1,), stride=(1,))
        (1): BatchNorm1d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): Conv1d(32, 96, kernel_size=(5,), stride=(1,), padding=(2,))
        (3): BatchNorm1d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
      (branch_pool): Sequential(
        (0): MaxPool1d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=False)
        (1): Conv1d(256, 64, kernel_size=(1,), stride=(1,))
      )
    )
    (7): MaxPool1d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
    (8): Inception(
      (branch1): Sequential(
        (0): Conv1d(480, 192, kernel_size=(1,), stride=(1,))
        (1): BatchNorm1d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
      (branch3): Sequential(
        (0): Conv1d(480, 96, kernel_size=(1,), stride=(1,))
        (1): BatchNorm1d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): Conv1d(96, 208, kernel_size=(3,), stride=(1,), padding=(1,))
        (3): BatchNorm1d(208, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
      (branch5): Sequential(
        (0): Conv1d(480, 16, kernel_size=(1,), stride=(1,))
        (1): BatchNorm1d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): Conv1d(16, 48, kernel_size=(5,), stride=(1,), padding=(2,))
        (3): BatchNorm1d(48, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
      (branch_pool): Sequential(
        (0): MaxPool1d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=False)
        (1): Conv1d(480, 64, kernel_size=(1,), stride=(1,))
      )
    )
  )
  (classifer_max_pool): MaxPool1d(kernel_size=5, stride=3, padding=0, dilation=1, ceil_mode=False)
  (classifer): Sequential(
    (0): Linear(in_features=2048, out_features=1024, bias=True)
    (1): Dropout(p=0.5, inplace=False)
    (2): ReLU()
    (3): Linear(in_features=1024, out_features=512, bias=True)
    (4): Dropout(p=0.5, inplace=False)
    (5): ReLU()
    (6): Linear(in_features=512, out_features=5, bias=True)
  )
  (Inception_4b): Inception(
    (branch1): Sequential(
      (0): Conv1d(512, 160, kernel_size=(1,), stride=(1,))
      (1): BatchNorm1d(160, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch3): Sequential(
      (0): Conv1d(512, 112, kernel_size=(1,), stride=(1,))
      (1): BatchNorm1d(112, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): Conv1d(112, 224, kernel_size=(3,), stride=(1,), padding=(1,))
      (3): BatchNorm1d(224, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch5): Sequential(
      (0): Conv1d(512, 24, kernel_size=(1,), stride=(1,))
      (1): BatchNorm1d(24, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): Conv1d(24, 64, kernel_size=(5,), stride=(1,), padding=(2,))
      (3): BatchNorm1d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch_pool): Sequential(
      (0): MaxPool1d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=False)
      (1): Conv1d(512, 64, kernel_size=(1,), stride=(1,))
    )
  )
  (Inception_4c): Inception(
    (branch1): Sequential(
      (0): Conv1d(512, 128, kernel_size=(1,), stride=(1,))
      (1): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch3): Sequential(
      (0): Conv1d(512, 128, kernel_size=(1,), stride=(1,))
      (1): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): Conv1d(128, 256, kernel_size=(3,), stride=(1,), padding=(1,))
      (3): BatchNorm1d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch5): Sequential(
      (0): Conv1d(512, 24, kernel_size=(1,), stride=(1,))
      (1): BatchNorm1d(24, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): Conv1d(24, 64, kernel_size=(5,), stride=(1,), padding=(2,))
      (3): BatchNorm1d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch_pool): Sequential(
      (0): MaxPool1d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=False)
      (1): Conv1d(512, 64, kernel_size=(1,), stride=(1,))
    )
  )
  (Inception_4d): Inception(
    (branch1): Sequential(
      (0): Conv1d(512, 112, kernel_size=(1,), stride=(1,))
      (1): BatchNorm1d(112, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch3): Sequential(
      (0): Conv1d(512, 144, kernel_size=(1,), stride=(1,))
      (1): BatchNorm1d(144, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): Conv1d(144, 288, kernel_size=(3,), stride=(1,), padding=(1,))
      (3): BatchNorm1d(288, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch5): Sequential(
      (0): Conv1d(512, 32, kernel_size=(1,), stride=(1,))
      (1): BatchNorm1d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): Conv1d(32, 64, kernel_size=(5,), stride=(1,), padding=(2,))
      (3): BatchNorm1d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch_pool): Sequential(
      (0): MaxPool1d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=False)
      (1): Conv1d(512, 64, kernel_size=(1,), stride=(1,))
    )
  )
  (classifer1): Sequential(
    (0): Linear(in_features=2112, out_features=1056, bias=True)
    (1): Dropout(p=0.5, inplace=False)
    (2): ReLU()
    (3): Linear(in_features=1056, out_features=528, bias=True)
    (4): Dropout(p=0.5, inplace=False)
    (5): ReLU()
    (6): Linear(in_features=528, out_features=5, bias=True)
  )
  (Inception_4e): Inception(
    (branch1): Sequential(
      (0): Conv1d(528, 256, kernel_size=(1,), stride=(1,))
      (1): BatchNorm1d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch3): Sequential(
      (0): Conv1d(528, 160, kernel_size=(1,), stride=(1,))
      (1): BatchNorm1d(160, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): Conv1d(160, 320, kernel_size=(3,), stride=(1,), padding=(1,))
      (3): BatchNorm1d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch5): Sequential(
      (0): Conv1d(528, 32, kernel_size=(1,), stride=(1,))
      (1): BatchNorm1d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): Conv1d(32, 128, kernel_size=(5,), stride=(1,), padding=(2,))
      (3): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch_pool): Sequential(
      (0): MaxPool1d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=False)
      (1): Conv1d(528, 128, kernel_size=(1,), stride=(1,))
    )
  )
  (max_pool): MaxPool1d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
  (Inception_5a): Inception(
    (branch1): Sequential(
      (0): Conv1d(832, 256, kernel_size=(1,), stride=(1,))
      (1): BatchNorm1d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch3): Sequential(
      (0): Conv1d(832, 160, kernel_size=(1,), stride=(1,))
      (1): BatchNorm1d(160, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): Conv1d(160, 320, kernel_size=(3,), stride=(1,), padding=(1,))
      (3): BatchNorm1d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch5): Sequential(
      (0): Conv1d(832, 32, kernel_size=(1,), stride=(1,))
      (1): BatchNorm1d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): Conv1d(32, 128, kernel_size=(5,), stride=(1,), padding=(2,))
      (3): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch_pool): Sequential(
      (0): MaxPool1d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=False)
      (1): Conv1d(832, 128, kernel_size=(1,), stride=(1,))
    )
  )
  (Inception_5b): Inception(
    (branch1): Sequential(
      (0): Conv1d(832, 384, kernel_size=(1,), stride=(1,))
      (1): BatchNorm1d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch3): Sequential(
      (0): Conv1d(832, 192, kernel_size=(1,), stride=(1,))
      (1): BatchNorm1d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): Conv1d(192, 384, kernel_size=(3,), stride=(1,), padding=(1,))
      (3): BatchNorm1d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch5): Sequential(
      (0): Conv1d(832, 48, kernel_size=(1,), stride=(1,))
      (1): BatchNorm1d(48, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): Conv1d(48, 128, kernel_size=(5,), stride=(1,), padding=(2,))
      (3): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch_pool): Sequential(
      (0): MaxPool1d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=False)
      (1): Conv1d(832, 128, kernel_size=(1,), stride=(1,))
    )
  )
  (avg_pool): AvgPool1d(kernel_size=(7,), stride=(1,), padding=(0,))
  (dropout): Dropout(p=0.4, inplace=False)
  (classifer2): Sequential(
    (0): Linear(in_features=1024, out_features=512, bias=True)
    (1): Dropout(p=0.5, inplace=False)
    (2): ReLU()
    (3): Linear(in_features=512, out_features=5, bias=True)
  )
)
----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Linear-1               [-1, 2, 224]          50,400
            Conv1d-2              [-1, 64, 112]             960
         MaxPool1d-3               [-1, 64, 56]               0
            Conv1d-4              [-1, 192, 56]          37,056
         MaxPool1d-5              [-1, 192, 28]               0
            Conv1d-6               [-1, 64, 28]          12,352
       BatchNorm1d-7               [-1, 64, 28]             128
            Conv1d-8               [-1, 96, 28]          18,528
       BatchNorm1d-9               [-1, 96, 28]             192
           Conv1d-10              [-1, 128, 28]          36,992
      BatchNorm1d-11              [-1, 128, 28]             256
           Conv1d-12               [-1, 16, 28]           3,088
      BatchNorm1d-13               [-1, 16, 28]              32
           Conv1d-14               [-1, 32, 28]           2,592
      BatchNorm1d-15               [-1, 32, 28]              64
        MaxPool1d-16              [-1, 192, 28]               0
           Conv1d-17               [-1, 32, 28]           6,176
        Inception-18              [-1, 256, 28]               0
           Conv1d-19              [-1, 128, 28]          32,896
      BatchNorm1d-20              [-1, 128, 28]             256
           Conv1d-21              [-1, 128, 28]          32,896
      BatchNorm1d-22              [-1, 128, 28]             256
           Conv1d-23              [-1, 192, 28]          73,920
      BatchNorm1d-24              [-1, 192, 28]             384
           Conv1d-25               [-1, 32, 28]           8,224
      BatchNorm1d-26               [-1, 32, 28]              64
           Conv1d-27               [-1, 96, 28]          15,456
      BatchNorm1d-28               [-1, 96, 28]             192
        MaxPool1d-29              [-1, 256, 28]               0
           Conv1d-30               [-1, 64, 28]          16,448
        Inception-31              [-1, 480, 28]               0
        MaxPool1d-32              [-1, 480, 14]               0
           Conv1d-33              [-1, 192, 14]          92,352
      BatchNorm1d-34              [-1, 192, 14]             384
           Conv1d-35               [-1, 96, 14]          46,176
      BatchNorm1d-36               [-1, 96, 14]             192
           Conv1d-37              [-1, 208, 14]          60,112
      BatchNorm1d-38              [-1, 208, 14]             416
           Conv1d-39               [-1, 16, 14]           7,696
      BatchNorm1d-40               [-1, 16, 14]              32
           Conv1d-41               [-1, 48, 14]           3,888
      BatchNorm1d-42               [-1, 48, 14]              96
        MaxPool1d-43              [-1, 480, 14]               0
           Conv1d-44               [-1, 64, 14]          30,784
        Inception-45              [-1, 512, 14]               0
        MaxPool1d-46               [-1, 512, 4]               0
           Linear-47                 [-1, 1024]       2,098,176
          Dropout-48                 [-1, 1024]               0
             ReLU-49                 [-1, 1024]               0
           Linear-50                  [-1, 512]         524,800
          Dropout-51                  [-1, 512]               0
             ReLU-52                  [-1, 512]               0
           Linear-53                    [-1, 5]           2,565
           Conv1d-54              [-1, 160, 14]          82,080
      BatchNorm1d-55              [-1, 160, 14]             320
           Conv1d-56              [-1, 112, 14]          57,456
      BatchNorm1d-57              [-1, 112, 14]             224
           Conv1d-58              [-1, 224, 14]          75,488
      BatchNorm1d-59              [-1, 224, 14]             448
           Conv1d-60               [-1, 24, 14]          12,312
      BatchNorm1d-61               [-1, 24, 14]              48
           Conv1d-62               [-1, 64, 14]           7,744
      BatchNorm1d-63               [-1, 64, 14]             128
        MaxPool1d-64              [-1, 512, 14]               0
           Conv1d-65               [-1, 64, 14]          32,832
        Inception-66              [-1, 512, 14]               0
           Conv1d-67              [-1, 128, 14]          65,664
      BatchNorm1d-68              [-1, 128, 14]             256
           Conv1d-69              [-1, 128, 14]          65,664
      BatchNorm1d-70              [-1, 128, 14]             256
           Conv1d-71              [-1, 256, 14]          98,560
      BatchNorm1d-72              [-1, 256, 14]             512
           Conv1d-73               [-1, 24, 14]          12,312
      BatchNorm1d-74               [-1, 24, 14]              48
           Conv1d-75               [-1, 64, 14]           7,744
      BatchNorm1d-76               [-1, 64, 14]             128
        MaxPool1d-77              [-1, 512, 14]               0
           Conv1d-78               [-1, 64, 14]          32,832
        Inception-79              [-1, 512, 14]               0
           Conv1d-80              [-1, 112, 14]          57,456
      BatchNorm1d-81              [-1, 112, 14]             224
           Conv1d-82              [-1, 144, 14]          73,872
      BatchNorm1d-83              [-1, 144, 14]             288
           Conv1d-84              [-1, 288, 14]         124,704
      BatchNorm1d-85              [-1, 288, 14]             576
           Conv1d-86               [-1, 32, 14]          16,416
      BatchNorm1d-87               [-1, 32, 14]              64
           Conv1d-88               [-1, 64, 14]          10,304
      BatchNorm1d-89               [-1, 64, 14]             128
        MaxPool1d-90              [-1, 512, 14]               0
           Conv1d-91               [-1, 64, 14]          32,832
        Inception-92              [-1, 528, 14]               0
        MaxPool1d-93               [-1, 528, 4]               0
           Linear-94                 [-1, 1056]       2,231,328
          Dropout-95                 [-1, 1056]               0
             ReLU-96                 [-1, 1056]               0
           Linear-97                  [-1, 528]         558,096
          Dropout-98                  [-1, 528]               0
             ReLU-99                  [-1, 528]               0
          Linear-100                    [-1, 5]           2,645
          Conv1d-101              [-1, 256, 14]         135,424
     BatchNorm1d-102              [-1, 256, 14]             512
          Conv1d-103              [-1, 160, 14]          84,640
     BatchNorm1d-104              [-1, 160, 14]             320
          Conv1d-105              [-1, 320, 14]         153,920
     BatchNorm1d-106              [-1, 320, 14]             640
          Conv1d-107               [-1, 32, 14]          16,928
     BatchNorm1d-108               [-1, 32, 14]              64
          Conv1d-109              [-1, 128, 14]          20,608
     BatchNorm1d-110              [-1, 128, 14]             256
       MaxPool1d-111              [-1, 528, 14]               0
          Conv1d-112              [-1, 128, 14]          67,712
       Inception-113              [-1, 832, 14]               0
       MaxPool1d-114               [-1, 832, 7]               0
          Conv1d-115               [-1, 256, 7]         213,248
     BatchNorm1d-116               [-1, 256, 7]             512
          Conv1d-117               [-1, 160, 7]         133,280
     BatchNorm1d-118               [-1, 160, 7]             320
          Conv1d-119               [-1, 320, 7]         153,920
     BatchNorm1d-120               [-1, 320, 7]             640
          Conv1d-121                [-1, 32, 7]          26,656
     BatchNorm1d-122                [-1, 32, 7]              64
          Conv1d-123               [-1, 128, 7]          20,608
     BatchNorm1d-124               [-1, 128, 7]             256
       MaxPool1d-125               [-1, 832, 7]               0
          Conv1d-126               [-1, 128, 7]         106,624
       Inception-127               [-1, 832, 7]               0
          Conv1d-128               [-1, 384, 7]         319,872
     BatchNorm1d-129               [-1, 384, 7]             768
          Conv1d-130               [-1, 192, 7]         159,936
     BatchNorm1d-131               [-1, 192, 7]             384
          Conv1d-132               [-1, 384, 7]         221,568
     BatchNorm1d-133               [-1, 384, 7]             768
          Conv1d-134                [-1, 48, 7]          39,984
     BatchNorm1d-135                [-1, 48, 7]              96
          Conv1d-136               [-1, 128, 7]          30,848
     BatchNorm1d-137               [-1, 128, 7]             256
       MaxPool1d-138               [-1, 832, 7]               0
          Conv1d-139               [-1, 128, 7]         106,624
       Inception-140              [-1, 1024, 7]               0
       AvgPool1d-141              [-1, 1024, 1]               0
         Dropout-142              [-1, 1024, 1]               0
          Linear-143                  [-1, 512]         524,800
         Dropout-144                  [-1, 512]               0
            ReLU-145                  [-1, 512]               0
          Linear-146                    [-1, 5]           2,565
================================================================
Total params: 9,425,087
Trainable params: 9,425,087
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.00
Forward/backward pass size (MB): 2.84
Params size (MB): 35.95
Estimated Total Size (MB): 38.79
----------------------------------------------------------------

Process finished with exit code 0
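
Note that, as in the original paper, the auxiliary classifiers are only there to help training; at inference time only the final output (the first element of the returned tuple) would normally be used. A minimal sketch of that usage with the 1D model above:

model.eval()
with torch.no_grad():
    logits = model(torch.randn(1, 2, 224))[0]   # keep only the final (softmax2) output
    pred = logits.argmax(dim=1)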


2D GoogLeNet model implementation

import torch
from torchsummary import summary

class Inception(torch.nn.Module):
    def __init__(self,in_channels=56,ch1=64,ch3_reduce=96,ch3=128,ch5_reduce=16,ch5=32,pool_proj=32):
        super(Inception, self).__init__()

        self.branch1 = torch.nn.Sequential(
            torch.nn.Conv2d(in_channels,ch1,kernel_size=1),
            torch.nn.BatchNorm2d(ch1)
        )

        self.branch3 = torch.nn.Sequential(
            torch.nn.Conv2d(in_channels, ch3_reduce, kernel_size=1),
            torch.nn.BatchNorm2d(ch3_reduce),
            torch.nn.Conv2d(ch3_reduce, ch3, kernel_size=3, padding=1),
            torch.nn.BatchNorm2d(ch3)
        )

        self.branch5 = torch.nn.Sequential(
            torch.nn.Conv2d(in_channels, ch5_reduce, kernel_size=1),
            torch.nn.BatchNorm2d(ch5_reduce),
            torch.nn.Conv2d(ch5_reduce, ch5, kernel_size=5, padding=2),
            torch.nn.BatchNorm2d(ch5)
        )

        self.branch_pool = torch.nn.Sequential(
            torch.nn.MaxPool2d(kernel_size=3,stride=1,padding=1),
            torch.nn.Conv2d(in_channels, pool_proj, kernel_size=1)
        )

    def forward(self,x):
        return torch.cat([self.branch1(x),self.branch3(x),self.branch5(x),self.branch_pool(x)],1)



class GoogLeNet(torch.nn.Module):
    def __init__(self,in_channels=3,in_sample_points=224,classes=5):
        super(GoogLeNet, self).__init__()

        self.features=torch.nn.Sequential(
            torch.nn.Linear(in_sample_points,224),
            torch.nn.Conv2d(in_channels,64,kernel_size=7,stride=2,padding=3),
            torch.nn.MaxPool2d(3,2,padding=1),
            torch.nn.Conv2d(64,192,3,padding=1),
            torch.nn.MaxPool2d(3,2,padding=1),
            Inception(192,64,96,128,16,32,32),
            Inception(256,128,128,192,32,96,64),
            torch.nn.MaxPool2d(3,2,padding=1),
            Inception(480,192,96,208,16,48,64)
        )



        self.classifer_max_pool = torch.nn.MaxPool2d(5,3)

        self.classifer = torch.nn.Sequential(
            torch.nn.Linear(8192,1024),
            torch.nn.Dropout(0.5),
            torch.nn.ReLU(),
            torch.nn.Linear(1024,512),
            torch.nn.Dropout(0.5),
            torch.nn.ReLU(),
            torch.nn.Linear(512,classes)
        )

        self.Inception_4b = Inception(512,160,112,224,24,64,64)
        self.Inception_4c = Inception(512,128,128,256,24,64,64)
        self.Inception_4d = Inception(512,112,144,288,32,64,64)


        self.classifer1 = torch.nn.Sequential(
            torch.nn.Linear(8448,1056),
            torch.nn.Dropout(0.5),
            torch.nn.ReLU(),
            torch.nn.Linear(1056,528),
            torch.nn.Dropout(0.5),
            torch.nn.ReLU(),
            torch.nn.Linear(528,classes)
        )

        self.Inception_4e = Inception(528,256,160,320,32,128,128)
        self.max_pool = torch.nn.MaxPool2d(3,2,1)

        self.Inception_5a = Inception(832,256,160,320,32,128,128)
        self.Inception_5b = Inception(832,384,192,384,48,128,128)

        self.avg_pool = torch.nn.AvgPool2d(7,stride=1)
        self.dropout = torch.nn.Dropout(0.4)
        self.classifer2 = torch.nn.Sequential(
            torch.nn.Linear(1024, 512),
            torch.nn.Dropout(0.5),
            torch.nn.ReLU(),
            torch.nn.Linear(512, classes)
        )


    def forward(self,x):
        x = self.features(x)

        y = self.classifer(self.classifer_max_pool(x).view(-1,8192))

        x = self.Inception_4b(x)
        x = self.Inception_4c(x)
        x = self.Inception_4d(x)

        y1 = self.classifer1(self.classifer_max_pool(x).view(-1,8448))

        x = self.Inception_4e(x)
        x = self.max_pool(x)
        x = self.Inception_5a(x)
        x = self.Inception_5b(x)
        x = self.avg_pool(x)
        x = self.dropout(x)
        x = x.view(-1,1024)
        x = self.classifer2(x)

        return x

class GoogLeNetLoss(torch.nn.Module):
    def __init__(self):
        super(GoogLeNetLoss, self).__init__()
        self.CrossEntropyLoss = torch.nn.CrossEntropyLoss()

    def forward(self,data,label):
        c2_loss = self.CrossEntropyLoss(data[0],label)
        c0_loss = self.CrossEntropyLoss(data[1],label)
        c1_loss = self.CrossEntropyLoss(data[2],label)

        loss = c2_loss + 0.3*(c0_loss+c1_loss)

        return loss



if __name__ == '__main__':
    model = GoogLeNet(in_channels=3)
    x = torch.randn(size=(1,3,224,224))
    # [c2,c0,c1] = model(input)
    output = model(x)
    criterion = GoogLeNetLoss()
    # label = torch.tensor([1,0])
    # print(f"损失为:{criterion(output,label)}")
    print(f"输出结果为{output}")
    print(model)
    summary(model=model, input_size=(3, 224,224))

Model output:

Output: tensor([[ 0.1532,  0.1498,  0.0414, -0.0266,  0.1378]],
       grad_fn=<AddmmBackward>)
GoogLeNet(
  (features): Sequential(
    (0): Linear(in_features=224, out_features=224, bias=True)
    (1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3))
    (2): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
    (3): Conv2d(64, 192, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (4): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
    (5): Inception(
      (branch1): Sequential(
        (0): Conv2d(192, 64, kernel_size=(1, 1), stride=(1, 1))
        (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
      (branch3): Sequential(
        (0): Conv2d(192, 96, kernel_size=(1, 1), stride=(1, 1))
        (1): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): Conv2d(96, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (3): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
      (branch5): Sequential(
        (0): Conv2d(192, 16, kernel_size=(1, 1), stride=(1, 1))
        (1): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): Conv2d(16, 32, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
        (3): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
      (branch_pool): Sequential(
        (0): MaxPool2d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=False)
        (1): Conv2d(192, 32, kernel_size=(1, 1), stride=(1, 1))
      )
    )
    (6): Inception(
      (branch1): Sequential(
        (0): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1))
        (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
      (branch3): Sequential(
        (0): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1))
        (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): Conv2d(128, 192, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (3): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
      (branch5): Sequential(
        (0): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1))
        (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): Conv2d(32, 96, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
        (3): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
      (branch_pool): Sequential(
        (0): MaxPool2d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=False)
        (1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1))
      )
    )
    (7): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
    (8): Inception(
      (branch1): Sequential(
        (0): Conv2d(480, 192, kernel_size=(1, 1), stride=(1, 1))
        (1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
      (branch3): Sequential(
        (0): Conv2d(480, 96, kernel_size=(1, 1), stride=(1, 1))
        (1): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): Conv2d(96, 208, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (3): BatchNorm2d(208, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
      (branch5): Sequential(
        (0): Conv2d(480, 16, kernel_size=(1, 1), stride=(1, 1))
        (1): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): Conv2d(16, 48, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
        (3): BatchNorm2d(48, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
      (branch_pool): Sequential(
        (0): MaxPool2d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=False)
        (1): Conv2d(480, 64, kernel_size=(1, 1), stride=(1, 1))
      )
    )
  )
  (classifer_max_pool): MaxPool2d(kernel_size=5, stride=3, padding=0, dilation=1, ceil_mode=False)
  (classifer): Sequential(
    (0): Linear(in_features=8192, out_features=1024, bias=True)
    (1): Dropout(p=0.5, inplace=False)
    (2): ReLU()
    (3): Linear(in_features=1024, out_features=512, bias=True)
    (4): Dropout(p=0.5, inplace=False)
    (5): ReLU()
    (6): Linear(in_features=512, out_features=5, bias=True)
  )
  (Inception_4b): Inception(
    (branch1): Sequential(
      (0): Conv2d(512, 160, kernel_size=(1, 1), stride=(1, 1))
      (1): BatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch3): Sequential(
      (0): Conv2d(512, 112, kernel_size=(1, 1), stride=(1, 1))
      (1): BatchNorm2d(112, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): Conv2d(112, 224, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (3): BatchNorm2d(224, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch5): Sequential(
      (0): Conv2d(512, 24, kernel_size=(1, 1), stride=(1, 1))
      (1): BatchNorm2d(24, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): Conv2d(24, 64, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
      (3): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch_pool): Sequential(
      (0): MaxPool2d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=False)
      (1): Conv2d(512, 64, kernel_size=(1, 1), stride=(1, 1))
    )
  )
  (Inception_4c): Inception(
    (branch1): Sequential(
      (0): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1))
      (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch3): Sequential(
      (0): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1))
      (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch5): Sequential(
      (0): Conv2d(512, 24, kernel_size=(1, 1), stride=(1, 1))
      (1): BatchNorm2d(24, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): Conv2d(24, 64, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
      (3): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch_pool): Sequential(
      (0): MaxPool2d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=False)
      (1): Conv2d(512, 64, kernel_size=(1, 1), stride=(1, 1))
    )
  )
  (Inception_4d): Inception(
    (branch1): Sequential(
      (0): Conv2d(512, 112, kernel_size=(1, 1), stride=(1, 1))
      (1): BatchNorm2d(112, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch3): Sequential(
      (0): Conv2d(512, 144, kernel_size=(1, 1), stride=(1, 1))
      (1): BatchNorm2d(144, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): Conv2d(144, 288, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (3): BatchNorm2d(288, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch5): Sequential(
      (0): Conv2d(512, 32, kernel_size=(1, 1), stride=(1, 1))
      (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): Conv2d(32, 64, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
      (3): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch_pool): Sequential(
      (0): MaxPool2d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=False)
      (1): Conv2d(512, 64, kernel_size=(1, 1), stride=(1, 1))
    )
  )
  (classifer1): Sequential(
    (0): Linear(in_features=8448, out_features=1056, bias=True)
    (1): Dropout(p=0.5, inplace=False)
    (2): ReLU()
    (3): Linear(in_features=1056, out_features=528, bias=True)
    (4): Dropout(p=0.5, inplace=False)
    (5): ReLU()
    (6): Linear(in_features=528, out_features=5, bias=True)
  )
  (Inception_4e): Inception(
    (branch1): Sequential(
      (0): Conv2d(528, 256, kernel_size=(1, 1), stride=(1, 1))
      (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch3): Sequential(
      (0): Conv2d(528, 160, kernel_size=(1, 1), stride=(1, 1))
      (1): BatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): Conv2d(160, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (3): BatchNorm2d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch5): Sequential(
      (0): Conv2d(528, 32, kernel_size=(1, 1), stride=(1, 1))
      (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): Conv2d(32, 128, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
      (3): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch_pool): Sequential(
      (0): MaxPool2d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=False)
      (1): Conv2d(528, 128, kernel_size=(1, 1), stride=(1, 1))
    )
  )
  (max_pool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
  (Inception_5a): Inception(
    (branch1): Sequential(
      (0): Conv2d(832, 256, kernel_size=(1, 1), stride=(1, 1))
      (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch3): Sequential(
      (0): Conv2d(832, 160, kernel_size=(1, 1), stride=(1, 1))
      (1): BatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): Conv2d(160, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (3): BatchNorm2d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch5): Sequential(
      (0): Conv2d(832, 32, kernel_size=(1, 1), stride=(1, 1))
      (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): Conv2d(32, 128, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
      (3): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch_pool): Sequential(
      (0): MaxPool2d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=False)
      (1): Conv2d(832, 128, kernel_size=(1, 1), stride=(1, 1))
    )
  )
  (Inception_5b): Inception(
    (branch1): Sequential(
      (0): Conv2d(832, 384, kernel_size=(1, 1), stride=(1, 1))
      (1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch3): Sequential(
      (0): Conv2d(832, 192, kernel_size=(1, 1), stride=(1, 1))
      (1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): Conv2d(192, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (3): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch5): Sequential(
      (0): Conv2d(832, 48, kernel_size=(1, 1), stride=(1, 1))
      (1): BatchNorm2d(48, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): Conv2d(48, 128, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
      (3): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch_pool): Sequential(
      (0): MaxPool2d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=False)
      (1): Conv2d(832, 128, kernel_size=(1, 1), stride=(1, 1))
    )
  )
  (avg_pool): AvgPool2d(kernel_size=7, stride=1, padding=0)
  (dropout): Dropout(p=0.4, inplace=False)
  (classifer2): Sequential(
    (0): Linear(in_features=1024, out_features=512, bias=True)
    (1): Dropout(p=0.5, inplace=False)
    (2): ReLU()
    (3): Linear(in_features=512, out_features=5, bias=True)
  )
)
----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Linear-1          [-1, 3, 224, 224]          50,400
            Conv2d-2         [-1, 64, 112, 112]           9,472
         MaxPool2d-3           [-1, 64, 56, 56]               0
            Conv2d-4          [-1, 192, 56, 56]         110,784
         MaxPool2d-5          [-1, 192, 28, 28]               0
            Conv2d-6           [-1, 64, 28, 28]          12,352
       BatchNorm2d-7           [-1, 64, 28, 28]             128
            Conv2d-8           [-1, 96, 28, 28]          18,528
       BatchNorm2d-9           [-1, 96, 28, 28]             192
           Conv2d-10          [-1, 128, 28, 28]         110,720
      BatchNorm2d-11          [-1, 128, 28, 28]             256
           Conv2d-12           [-1, 16, 28, 28]           3,088
      BatchNorm2d-13           [-1, 16, 28, 28]              32
           Conv2d-14           [-1, 32, 28, 28]          12,832
      BatchNorm2d-15           [-1, 32, 28, 28]              64
        MaxPool2d-16          [-1, 192, 28, 28]               0
           Conv2d-17           [-1, 32, 28, 28]           6,176
        Inception-18          [-1, 256, 28, 28]               0
           Conv2d-19          [-1, 128, 28, 28]          32,896
      BatchNorm2d-20          [-1, 128, 28, 28]             256
           Conv2d-21          [-1, 128, 28, 28]          32,896
      BatchNorm2d-22          [-1, 128, 28, 28]             256
           Conv2d-23          [-1, 192, 28, 28]         221,376
      BatchNorm2d-24          [-1, 192, 28, 28]             384
           Conv2d-25           [-1, 32, 28, 28]           8,224
      BatchNorm2d-26           [-1, 32, 28, 28]              64
           Conv2d-27           [-1, 96, 28, 28]          76,896
      BatchNorm2d-28           [-1, 96, 28, 28]             192
        MaxPool2d-29          [-1, 256, 28, 28]               0
           Conv2d-30           [-1, 64, 28, 28]          16,448
        Inception-31          [-1, 480, 28, 28]               0
        MaxPool2d-32          [-1, 480, 14, 14]               0
           Conv2d-33          [-1, 192, 14, 14]          92,352
      BatchNorm2d-34          [-1, 192, 14, 14]             384
           Conv2d-35           [-1, 96, 14, 14]          46,176
      BatchNorm2d-36           [-1, 96, 14, 14]             192
           Conv2d-37          [-1, 208, 14, 14]         179,920
      BatchNorm2d-38          [-1, 208, 14, 14]             416
           Conv2d-39           [-1, 16, 14, 14]           7,696
      BatchNorm2d-40           [-1, 16, 14, 14]              32
           Conv2d-41           [-1, 48, 14, 14]          19,248
      BatchNorm2d-42           [-1, 48, 14, 14]              96
        MaxPool2d-43          [-1, 480, 14, 14]               0
           Conv2d-44           [-1, 64, 14, 14]          30,784
        Inception-45          [-1, 512, 14, 14]               0
        MaxPool2d-46            [-1, 512, 4, 4]               0
           Linear-47                 [-1, 1024]       8,389,632
          Dropout-48                 [-1, 1024]               0
             ReLU-49                 [-1, 1024]               0
           Linear-50                  [-1, 512]         524,800
          Dropout-51                  [-1, 512]               0
             ReLU-52                  [-1, 512]               0
           Linear-53                    [-1, 5]           2,565
           Conv2d-54          [-1, 160, 14, 14]          82,080
      BatchNorm2d-55          [-1, 160, 14, 14]             320
           Conv2d-56          [-1, 112, 14, 14]          57,456
      BatchNorm2d-57          [-1, 112, 14, 14]             224
           Conv2d-58          [-1, 224, 14, 14]         226,016
      BatchNorm2d-59          [-1, 224, 14, 14]             448
           Conv2d-60           [-1, 24, 14, 14]          12,312
      BatchNorm2d-61           [-1, 24, 14, 14]              48
           Conv2d-62           [-1, 64, 14, 14]          38,464
      BatchNorm2d-63           [-1, 64, 14, 14]             128
        MaxPool2d-64          [-1, 512, 14, 14]               0
           Conv2d-65           [-1, 64, 14, 14]          32,832
        Inception-66          [-1, 512, 14, 14]               0
           Conv2d-67          [-1, 128, 14, 14]          65,664
      BatchNorm2d-68          [-1, 128, 14, 14]             256
           Conv2d-69          [-1, 128, 14, 14]          65,664
      BatchNorm2d-70          [-1, 128, 14, 14]             256
           Conv2d-71          [-1, 256, 14, 14]         295,168
      BatchNorm2d-72          [-1, 256, 14, 14]             512
           Conv2d-73           [-1, 24, 14, 14]          12,312
      BatchNorm2d-74           [-1, 24, 14, 14]              48
           Conv2d-75           [-1, 64, 14, 14]          38,464
      BatchNorm2d-76           [-1, 64, 14, 14]             128
        MaxPool2d-77          [-1, 512, 14, 14]               0
           Conv2d-78           [-1, 64, 14, 14]          32,832
        Inception-79          [-1, 512, 14, 14]               0
           Conv2d-80          [-1, 112, 14, 14]          57,456
      BatchNorm2d-81          [-1, 112, 14, 14]             224
           Conv2d-82          [-1, 144, 14, 14]          73,872
      BatchNorm2d-83          [-1, 144, 14, 14]             288
           Conv2d-84          [-1, 288, 14, 14]         373,536
      BatchNorm2d-85          [-1, 288, 14, 14]             576
           Conv2d-86           [-1, 32, 14, 14]          16,416
      BatchNorm2d-87           [-1, 32, 14, 14]              64
           Conv2d-88           [-1, 64, 14, 14]          51,264
      BatchNorm2d-89           [-1, 64, 14, 14]             128
        MaxPool2d-90          [-1, 512, 14, 14]               0
           Conv2d-91           [-1, 64, 14, 14]          32,832
        Inception-92          [-1, 528, 14, 14]               0
        MaxPool2d-93            [-1, 528, 4, 4]               0
           Linear-94                 [-1, 1056]       8,922,144
          Dropout-95                 [-1, 1056]               0
             ReLU-96                 [-1, 1056]               0
           Linear-97                  [-1, 528]         558,096
          Dropout-98                  [-1, 528]               0
             ReLU-99                  [-1, 528]               0
          Linear-100                    [-1, 5]           2,645
          Conv2d-101          [-1, 256, 14, 14]         135,424
     BatchNorm2d-102          [-1, 256, 14, 14]             512
          Conv2d-103          [-1, 160, 14, 14]          84,640
     BatchNorm2d-104          [-1, 160, 14, 14]             320
          Conv2d-105          [-1, 320, 14, 14]         461,120
     BatchNorm2d-106          [-1, 320, 14, 14]             640
          Conv2d-107           [-1, 32, 14, 14]          16,928
     BatchNorm2d-108           [-1, 32, 14, 14]              64
          Conv2d-109          [-1, 128, 14, 14]         102,528
     BatchNorm2d-110          [-1, 128, 14, 14]             256
       MaxPool2d-111          [-1, 528, 14, 14]               0
          Conv2d-112          [-1, 128, 14, 14]          67,712
       Inception-113          [-1, 832, 14, 14]               0
       MaxPool2d-114            [-1, 832, 7, 7]               0
          Conv2d-115            [-1, 256, 7, 7]         213,248
     BatchNorm2d-116            [-1, 256, 7, 7]             512
          Conv2d-117            [-1, 160, 7, 7]         133,280
     BatchNorm2d-118            [-1, 160, 7, 7]             320
          Conv2d-119            [-1, 320, 7, 7]         461,120
     BatchNorm2d-120            [-1, 320, 7, 7]             640
          Conv2d-121             [-1, 32, 7, 7]          26,656
     BatchNorm2d-122             [-1, 32, 7, 7]              64
          Conv2d-123            [-1, 128, 7, 7]         102,528
     BatchNorm2d-124            [-1, 128, 7, 7]             256
       MaxPool2d-125            [-1, 832, 7, 7]               0
          Conv2d-126            [-1, 128, 7, 7]         106,624
       Inception-127            [-1, 832, 7, 7]               0
          Conv2d-128            [-1, 384, 7, 7]         319,872
     BatchNorm2d-129            [-1, 384, 7, 7]             768
          Conv2d-130            [-1, 192, 7, 7]         159,936
     BatchNorm2d-131            [-1, 192, 7, 7]             384
          Conv2d-132            [-1, 384, 7, 7]         663,936
     BatchNorm2d-133            [-1, 384, 7, 7]             768
          Conv2d-134             [-1, 48, 7, 7]          39,984
     BatchNorm2d-135             [-1, 48, 7, 7]              96
          Conv2d-136            [-1, 128, 7, 7]         153,728
     BatchNorm2d-137            [-1, 128, 7, 7]             256
       MaxPool2d-138            [-1, 832, 7, 7]               0
          Conv2d-139            [-1, 128, 7, 7]         106,624
       Inception-140           [-1, 1024, 7, 7]               0
       AvgPool2d-141           [-1, 1024, 1, 1]               0
         Dropout-142           [-1, 1024, 1, 1]               0
          Linear-143                  [-1, 512]         524,800
         Dropout-144                  [-1, 512]               0
            ReLU-145                  [-1, 512]               0
          Linear-146                    [-1, 5]           2,565
================================================================
Total params: 24,959,487
Trainable params: 24,959,487
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.57
Forward/backward pass size (MB): 55.82
Params size (MB): 95.21
Estimated Total Size (MB): 151.60
----------------------------------------------------------------

Process finished with exit code 0
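
Unlike the 1D version, the forward method of this 2D model returns only the final classifier output, which is why the GoogLeNetLoss and label lines are commented out in __main__ above. If you also want to train the 2D model with the auxiliary classifiers, one possible change (a sketch, not part of the original code) is to make forward return all three outputs, exactly as in the 1D version:

# last line of GoogLeNet.forward in the 2D model
return x, y, y1   # (softmax2, softmax0, softmax1), matching what GoogLeNetLoss expects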

If you need the training template, it can be purchased from the paid materials in 浩浩的科研笔记 below; it comes with the classic code for all the 1D neural network models, which can be switched freely within the template.
