Convolutional Neural Networks (Advanced)

Taking GoogLeNet as an example

1. Inception Module

1.1 Using an Inception Module to solve the kernel-selection problem

Commonly used convolution kernels are odd-sized squares such as 1×1, 3×3, and 5×5. Rather than committing to one size, we can apply several kernel sizes in parallel and let the network learn which branch is most useful (by assigning it larger weights).

A 1×1 convolution is commonly used to fuse information across the channels of a tensor and to reduce the channel count, which cuts the computational cost of the larger convolutions that follow.
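As a rough illustration of the savings (a minimal sketch; the feature-map size and channel counts below are made-up example values, not GoogLeNet's published configuration), compare the multiply-accumulate count of a direct 5×5 convolution with that of a 1×1 reduction followed by the same 5×5 convolution:

# MACs (multiply-accumulate ops) of one convolution on an H x W feature map:
# MACs = H * W * C_in * C_out * k * k
H = W = 28                   # example feature-map size (hypothetical)
C_in, C_out = 192, 32        # example channel counts (hypothetical)

direct = H * W * C_in * C_out * 5 * 5     # 5x5 conv applied directly
reduce = H * W * C_in * 16 * 1 * 1        # 1x1 conv: 192 -> 16 channels
then5x5 = H * W * 16 * C_out * 5 * 5      # 5x5 conv on the reduced tensor

print(direct)                # 120,422,400 MACs
print(reduce + then5x5)      # 12,443,648 MACs, roughly 10x cheaper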

import torch
import torch.nn.functional as F

class InceptionModule(torch.nn.Module):
    def __init__(self, in_channels):
        super(InceptionModule, self).__init__()
        # Pooling branch: 3x3 average pool, then a 1x1 conv -> 24 channels
        self.branch_pool = torch.nn.Conv2d(in_channels, 24, kernel_size=1)

        # 1x1 branch -> 16 channels
        self.branch1x1 = torch.nn.Conv2d(in_channels, 16, kernel_size=1)

        # 5x5 branch: 1x1 reduction, then a 5x5 conv -> 24 channels
        self.branch5x5_1 = torch.nn.Conv2d(in_channels, 16, kernel_size=1)
        self.branch5x5_2 = torch.nn.Conv2d(16, 24, kernel_size=5, padding=2)

        # 3x3 branch: 1x1 reduction, then two 3x3 convs -> 24 channels
        self.branch3x3_1 = torch.nn.Conv2d(in_channels, 16, kernel_size=1)
        self.branch3x3_2 = torch.nn.Conv2d(16, 24, kernel_size=3, padding=1)
        self.branch3x3_3 = torch.nn.Conv2d(24, 24, kernel_size=3, padding=1)

    def forward(self, x):
        # stride=1 keeps the spatial size, so the four branches can be concatenated
        y1 = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1)
        y1 = self.branch_pool(y1)

        y2 = self.branch1x1(x)

        y3 = self.branch5x5_1(x)
        y3 = self.branch5x5_2(y3)

        y4 = self.branch3x3_1(x)
        y4 = self.branch3x3_2(y4)
        y4 = self.branch3x3_3(y4)

        # Concatenate along the channel dimension: 24 + 16 + 24 + 24 = 88 channels
        y = [y1, y2, y3, y4]
        return torch.cat(y, dim=1)
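A quick sanity check (a minimal sketch; the input size here is arbitrary) confirms that all four branches preserve the spatial size, so the concatenation yields 24 + 16 + 24 + 24 = 88 channels:

x = torch.randn(1, 10, 12, 12)            # arbitrary batch and size for the check
module = InceptionModule(in_channels=10)
print(module(x).shape)                    # torch.Size([1, 88, 12, 12])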

1.2 Using the Inception Module in a convolutional network

class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = torch.nn.Conv2d(1, 10, kernel_size=5)
        # Each InceptionModule outputs 88 channels, so conv2 takes 88 in
        self.conv2 = torch.nn.Conv2d(88, 20, kernel_size=5)

        self.incept1 = InceptionModule(in_channels=10)
        self.incept2 = InceptionModule(in_channels=20)

        self.mp = torch.nn.MaxPool2d(2)
        self.fc = torch.nn.Linear(1408, 10)  # 88 channels x 4 x 4 = 1408

    def forward(self, x):
        in_size = x.size(0)
        x = F.relu(self.mp(self.conv1(x)))   # 1x28x28 -> 10x24x24 -> 10x12x12
        x = self.incept1(x)                  # -> 88x12x12
        x = F.relu(self.mp(self.conv2(x)))   # -> 20x8x8 -> 20x4x4
        x = self.incept2(x)                  # -> 88x4x4
        x = x.view(in_size, -1)              # flatten to 1408
        x = self.fc(x)
        return x
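The magic numbers follow from the shapes traced in the comments above: each Inception module emits 88 channels, and with a 28×28 MNIST-style input the tensor entering the fully connected layer is 88 × 4 × 4 = 1408. A quick check (sketch, assuming a single-channel 28×28 input):

net = Net()
x = torch.randn(1, 1, 28, 28)     # MNIST-sized dummy input
print(net(x).shape)               # torch.Size([1, 10]) class logits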

  File "<ipython-input-1-89194c216ded>", line 2
    def
        ^
SyntaxError: invalid syntax

2. Residual Neural Networks

2.1 Residual Network

When the number of layers grows too large, a plain network almost inevitably runs into the "vanishing gradient" problem.

  • Vanishing gradient: when the factors the chain rule multiplies together (layer weights and activation derivatives) mostly lie between 0 and 1, the gradient reaching the early layers shrinks toward 0 as depth increases. The update rule $w = w_0 - \alpha \cdot grad$ then barely changes those weights.

To address vanishing gradients, residual networks pass the tensor at one node directly to a later layer through a skip connection. A residual block computes $H(x) = F(x) + x$, so its local gradient is $\partial F/\partial x + 1$; the added identity term keeps the backpropagated gradient from being multiplied down toward zero.
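The effect is easy to demonstrate: stack the same small layer many times, with and without the skip connection, and compare the gradient that reaches the input. This is a minimal sketch; the depth of 50, width of 32, and sigmoid activation are arbitrary choices for illustration:

import torch

def input_grad(depth, residual):
    torch.manual_seed(0)
    layers = [torch.nn.Linear(32, 32) for _ in range(depth)]
    x = torch.randn(1, 32, requires_grad=True)
    h = x
    for layer in layers:
        out = torch.sigmoid(layer(h))       # saturating activation shrinks gradients
        h = h + out if residual else out    # residual: add the input back in
    h.sum().backward()
    return x.grad.abs().mean().item()

print(input_grad(50, residual=False))   # plain stack: gradient is vanishingly small
print(input_grad(50, residual=True))    # with skip connections: gradient survives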

class ResidualBlock(torch.nn.Module):
    def __init__(self, channels):
        super(ResidualBlock, self).__init__()
        self.channels = channels
        # Same channel count and padding=1 preserve the shape, so x + y is valid
        self.conv1 = torch.nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = torch.nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x):
        y = F.relu(self.conv1(x))
        y = self.conv2(y)
        return F.relu(x + y)   # skip connection: add the input before the final ReLU
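Because the block ends with x + y, the two convolutions must preserve both the channel count and the spatial size (hence kernel_size=3 with padding=1). A quick shape check (sketch, arbitrary input size):

block = ResidualBlock(16)
x = torch.randn(1, 16, 12, 12)    # arbitrary input for the check
print(block(x).shape)             # torch.Size([1, 16, 12, 12]), same as the input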

2.2 Using the residual network

class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = torch.nn.Conv2d(1, 16, kernel_size=5)
        self.conv2 = torch.nn.Conv2d(16, 32, kernel_size=5)
        self.mp = torch.nn.MaxPool2d(2)

        self.rblock1 = ResidualBlock(16)
        self.rblock2 = ResidualBlock(32)

        self.fc = torch.nn.Linear(512, 10)  # 32 channels x 4 x 4 = 512

    def forward(self, x):
        in_size = x.size(0)
        x = self.mp(F.relu(self.conv1(x)))  # 1x28x28 -> 16x24x24 -> 16x12x12
        x = self.rblock1(x)                 # shape unchanged
        x = self.mp(F.relu(self.conv2(x)))  # -> 32x8x8 -> 32x4x4
        x = self.rblock2(x)                 # shape unchanged
        x = x.view(in_size, -1)             # flatten to 512
        x = self.fc(x)
        return x
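Either network can be trained with the usual MNIST pipeline. Below is a minimal training sketch, assuming torchvision is available; the batch size, learning rate, and epoch count are arbitrary illustrative values (the normalization constants are the commonly used MNIST statistics):

import torch
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.1307,), (0.3081,)),   # common MNIST mean/std
])
train_set = datasets.MNIST('data', train=True, download=True, transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)

model = Net()
criterion = torch.nn.CrossEntropyLoss()   # expects raw logits, so no softmax in Net
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.5)

for epoch in range(3):
    for inputs, targets in loader:
        optimizer.zero_grad()
        loss = criterion(model(inputs), targets)
        loss.backward()
        optimizer.step()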
