The Inception Structure in PyTorch

1. Implementation Process

The Inception module was first proposed and used in GoogLeNet; its basic structure is shown in Figure 1, and the full Inception network is formed by stacking several such modules. The Inception design makes two main contributions: first, it uses 1x1 convolutions to reduce or expand the number of channels; second, it convolves the same input at several kernel sizes in parallel and concatenates the results. This post uses the Inception structure of Figure 1 to do multi-class classification on the MNIST dataset.

Figure 1. Basic structure of the Inception module
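
To see why the 1x1 convolution matters for the first contribution, note that it only mixes channels and leaves the spatial size untouched, so it can shrink the channel count cheaply before an expensive 5x5 or 3x3 convolution. A minimal sketch (the channel numbers 192 and 16 below are illustrative, not taken from this post):

import torch

x = torch.randn(8, 192, 28, 28)                    # (batch, channels, H, W)
reduce = torch.nn.Conv2d(192, 16, kernel_size=1)   # 1x1 conv: only the channel count changes
print(reduce(x).shape)                             # torch.Size([8, 16, 28, 28])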

To reduce code redundancy, the Inception structure is wrapped in a class. The code is as follows:

import torch
import torch.nn.functional as F

class InceptionA(torch.nn.Module):
    def __init__(self, in_channels):
        super(InceptionA, self).__init__()
        # Branch 1: a single 1x1 convolution
        self.branch1x1 = torch.nn.Conv2d(in_channels, 16, kernel_size=1)

        # Branch 2: 1x1 convolution followed by a 5x5 convolution (padding=2 keeps the size)
        self.branch5x5_1 = torch.nn.Conv2d(in_channels, 16, kernel_size=1)
        self.branch5x5_2 = torch.nn.Conv2d(16, 24, kernel_size=5, padding=2)

        # Branch 3: 1x1 convolution followed by two 3x3 convolutions (padding=1 keeps the size)
        self.branch3x3_1 = torch.nn.Conv2d(in_channels, 16, kernel_size=1)
        self.branch3x3_2 = torch.nn.Conv2d(16, 24, kernel_size=3, padding=1)
        self.branch3x3_3 = torch.nn.Conv2d(24, 24, kernel_size=3, padding=1)

        # Branch 4: average pooling followed by a 1x1 convolution
        self.branch_pool = torch.nn.Conv2d(in_channels, 24, kernel_size=1)

    def forward(self, x):
        branch1x1 = self.branch1x1(x)

        branch5x5 = self.branch5x5_1(x)
        branch5x5 = self.branch5x5_2(branch5x5)

        branch3x3 = self.branch3x3_1(x)
        branch3x3 = self.branch3x3_2(branch3x3)
        branch3x3 = self.branch3x3_3(branch3x3)

        branch_pool = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1)
        branch_pool = self.branch_pool(branch_pool)

        # Concatenate the four branches along the channel dimension: 16+24+24+24 = 88 channels
        outputs = [branch1x1, branch5x5, branch3x3, branch_pool]
        return torch.cat(outputs, dim=1)
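
Because every branch preserves the spatial size of its input (the 5x5 and 3x3 convolutions use matching padding, and the average pooling uses stride=1 with padding=1), the four outputs can be concatenated along dim=1, giving 16 + 24 + 24 + 24 = 88 output channels. A quick sanity check (the 12x12 input size here is an arbitrary example):

block = InceptionA(in_channels=10)
x = torch.randn(1, 10, 12, 12)
print(block(x).shape)   # torch.Size([1, 88, 12, 12]) -> 16 + 24 + 24 + 24 = 88 channels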

The network part of the code is changed to:

# 2. Design the model
class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = torch.nn.Conv2d(1, 10, kernel_size=5)
        # incep1 outputs 16+24+24+24 = 88 channels, so conv2 takes 88 input channels
        self.conv2 = torch.nn.Conv2d(88, 20, kernel_size=5)

        self.incep1 = InceptionA(in_channels=10)
        self.incep2 = InceptionA(in_channels=20)

        self.mp = torch.nn.MaxPool2d(2)
        # After the second inception block the feature map is (88, 4, 4), i.e. 1408 values
        self.fc = torch.nn.Linear(1408, 10)

    def forward(self, x):
        in_size = x.size(0)
        x = F.relu(self.mp(self.conv1(x)))   # (n,1,28,28) -> (n,10,12,12)
        x = self.incep1(x)                   # -> (n,88,12,12)
        x = F.relu(self.mp(self.conv2(x)))   # -> (n,20,4,4)
        x = self.incep2(x)                   # -> (n,88,4,4)
        x = x.view(in_size, -1)              # flatten to (n,1408)
        return self.fc(x)

model = Net()
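
Tracing the shapes for a 28x28 MNIST image explains the two "magic numbers" above: conv1 (kernel 5) maps 28 to 24, max pooling halves it to 12, and incep1 turns the 10 channels into 88; conv2 then maps 12 to 8, pooling halves it to 4, and incep2 again outputs 88 channels, so the flattened feature vector has 88 * 4 * 4 = 1408 entries. A dummy forward pass to verify this (not part of the original post):

dummy = torch.randn(1, 1, 28, 28)   # a fake MNIST batch of size 1
print(model(dummy).shape)           # torch.Size([1, 10]) -> one logit per digit class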

The rest of the code stays the same.
The results of running the program are:

[1,300] loss: 0.788
[1,600] loss: 0.225
[1,900] loss: 0.155
Accuracy on test set: 97.02 % [9702/10000]
[2,300] loss: 0.115
[2,600] loss: 0.102
[2,900] loss: 0.087
Accuracy on test set: 97.97 % [9797/10000]
[3,300] loss: 0.078
[3,600] loss: 0.073
[3,900] loss: 0.069
Accuracy on test set: 98.35 % [9835/10000]
[4,300] loss: 0.061
[4,600] loss: 0.061
[4,900] loss: 0.060
Accuracy on test set: 98.56 % [9856/10000]
[5,300] loss: 0.053
[5,600] loss: 0.051
[5,900] loss: 0.047
Accuracy on test set: 98.61 % [9861/10000]
[6,300] loss: 0.041
[6,600] loss: 0.046
[6,900] loss: 0.048
Accuracy on test set: 98.85 % [9885/10000]
[7,300] loss: 0.041
[7,600] loss: 0.039
[7,900] loss: 0.041
Accuracy on test set: 98.56 % [9856/10000]
[8,300] loss: 0.034
[8,600] loss: 0.038
[8,900] loss: 0.039
Accuracy on test set: 98.78 % [9878/10000]
[9,300] loss: 0.036
[9,600] loss: 0.031
[9,900] loss: 0.035
Accuracy on test set: 98.87 % [9887/10000]
[10,300] loss: 0.030
[10,600] loss: 0.033
[10,900] loss: 0.032
Accuracy on test set: 98.94 % [9894/10000]

Supplement: the height (or width) of a feature map after a convolution can be computed as
$$H' = \frac{H - F + 2p}{s} + 1 \tag{1}$$
where $F$ is the kernel size (kernel_size), $p$ is the padding width (padding), and $s$ is the stride.
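
As a check, conv1 above has F=5, p=0, s=1, so a 28x28 input gives (28-5+0)/1+1 = 24, while the 5x5 branch in InceptionA uses padding=2, so a 12x12 input stays (12-5+4)/1+1 = 12. A tiny helper illustrating formula (1) (this function is hypothetical, not part of the original code):

def conv_out_size(H, F, p=0, s=1):
    # H' = (H - F + 2p) / s + 1
    return (H - F + 2 * p) // s + 1

print(conv_out_size(28, 5))        # 24: conv1 applied to a 28x28 MNIST image
print(conv_out_size(12, 5, p=2))   # 12: the 5x5 branch with padding=2 keeps the size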

