Understanding ResNet in Depth


Problem 1: during backpropagation, the gradient at early layers is a product of many per-layer factors; if those factors are mostly greater than 1 the gradient explodes, and if mostly less than 1 it vanishes (a small numeric sketch follows the solution list below).

Solutions

1. Weight initialization

2. Input data normalization

3. Batch normalization (BN)
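As a quick illustration (my own sketch, numbers arbitrary): if each of 50 layers contributes a constant local derivative, the chained gradient shrinks or blows up exponentially:

g_vanish, g_explode = 1.0, 1.0
for _ in range(50):
    g_vanish *= 0.9   # per-layer factor < 1: gradient vanishes
    g_explode *= 1.1  # per-layer factor > 1: gradient explodes

print(g_vanish)   # ~0.005
print(g_explode)  # ~117.4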

Problem 2: simply stacking more conv layers does not keep improving results; beyond a certain depth, a plain network actually performs worse (the degradation problem).

Solution

1. Residual structure

Residual structure

[Figure: the two residual block designs, shown side by side]

1. The residual block on the left is used in shallow networks such as ResNet-34.

2. The residual block on the right (the bottleneck) is used in deeper networks such as ResNet-50/101; a minimal code sketch of the shared shortcut idea follows.

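Both designs share the same core computation: out = F(x) + x, followed by ReLU. A minimal sketch (my own illustration; shapes are assumed to already match, so no projection shortcut is needed):

import torch
from torch import nn


class TinyResidual(nn.Module):
    # Minimal residual unit: two 3x3 convs plus an identity shortcut.
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # identity shortcut: F(x) + x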

Downsampling residual structure

For ResNet-34:

[Figure: downsampling residual block used in ResNet-34]

For ResNet-50:

[Figure: downsampling residual block used in ResNet-50]
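In both variants the shortcut cannot remain an identity, because the main branch halves the spatial size and changes the channel count, so a 1x1 conv with stride 2 followed by BN projects the input to the matching shape. A small shape check (my own snippet; the channel numbers are just the ResNet-34 stage-2-to-stage-3 transition):

import torch
from torch import nn

downsample = nn.Sequential(
    nn.Conv2d(64, 128, kernel_size=1, stride=2, bias=False),
    nn.BatchNorm2d(128),
)
x = torch.randn(1, 64, 56, 56)
print(downsample(x).shape)  # torch.Size([1, 128, 28, 28]), matching the main branch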

Batch normalization


Purpose: make the feature maps follow a distribution with mean 0 and variance 1 (computed per channel).

Effect: speeds up network convergence and improves accuracy.

Batch normalization standardizes each dimension (channel) of a batch of data, using statistics computed over the whole batch.

The example below assumes batch normalization with a batch of 2 and two channels.

[Figure: per-channel mean and variance computed over a batch of 2 two-channel feature maps]
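To make the computation concrete, here is a small sketch (my own example, values random) reproducing what nn.BatchNorm2d computes in training mode for that batch-of-2, two-channel case:

import torch
from torch import nn

x = torch.randn(2, 2, 3, 3)  # batch=2, channels=2, 3x3 feature maps

# Manual BN: one mean/variance per channel, over the batch and spatial dims.
mean = x.mean(dim=(0, 2, 3), keepdim=True)
var = x.var(dim=(0, 2, 3), unbiased=False, keepdim=True)
manual = (x - mean) / torch.sqrt(var + 1e-5)

bn = nn.BatchNorm2d(2)  # gamma=1, beta=0 at init, so the output is the pure standardization
bn.train()
print(torch.allclose(manual, bn(x), atol=1e-6))  # True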

Notes on using batch normalization

1. Set training to True during training and False during evaluation, because BN keeps updating its running statistics while training. In PyTorch this is controlled via model.train() and model.eval().

2. Make the batch size as large as possible, so that the batch statistics approach the mean and variance of the whole training set; with a batch size of 1 the batch statistics are meaningless and BN gives no benefit.

3. Place the BN layer between the conv layer and the ReLU, and set bias=False on the conv: BN subtracts the per-channel mean, so any conv bias is cancelled out and only inflates the parameter count. A sketch of this pattern, and of the train/eval switch, follows.
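A minimal sketch (my own example) of the recommended conv -> BN -> ReLU ordering and the mode switch:

import torch
from torch import nn

block = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1, bias=False),  # bias=False: BN would cancel it anyway
    nn.BatchNorm2d(16),
    nn.ReLU(inplace=True),
)

x = torch.randn(4, 3, 8, 8)
block.train()   # BN uses batch statistics and updates its running mean/var
y_train = block(x)
block.eval()    # BN uses the accumulated running statistics instead
y_eval = block(x)
print(y_train.shape, y_eval.shape)  # torch.Size([4, 16, 8, 8]) twice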

ResNet implementation

import torch
from torch import nn


class BasicBlock(nn.Module):
    expansion = 1  # channel expansion factor of the block's last conv layer

    def __init__(self, in_channel, out_channel, stride=1, downsample=None):
        super(BasicBlock, self).__init__()
        # stride == 1 gives the regular residual block; stride == 2 downsamples
        # stride == 1: output = (input - 3 + 2*1) / 1 + 1 = input
        # stride == 2: output = (input - 3 + 2*1) / 2 + 1 = input / 2
        self.conv1 = nn.Conv2d(in_channel, out_channel, kernel_size=3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_channel)
        self.relu = nn.ReLU(inplace=True)

        self.conv2 = nn.Conv2d(out_channel, out_channel, kernel_size=3, stride=1, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_channel)

        # downsample is the projection applied to the shortcut in the first block of a stage
        self.downsample = downsample

    def forward(self, x):
        residual = x
        if self.downsample is not None:
            residual = self.downsample(x)

        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)
        # no ReLU here: the activation is applied after the shortcut addition

        out += residual

        return self.relu(out)


class Bottleneck(nn.Module):
    expansion = 4

    def __init__(self, in_channel, out_channel, stride=1, downsample=None):
        super(Bottleneck, self).__init__()
        self.conv1 = nn.Conv2d(in_channel, out_channel, kernel_size=1, stride=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_channel)
        self.relu = nn.ReLU(inplace=True)

        self.conv2 = nn.Conv2d(out_channel, out_channel, kernel_size=3, stride=stride, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_channel)

        self.conv3 = nn.Conv2d(out_channel, out_channel * self.expansion, kernel_size=1, stride=1, bias=False)
        self.bn3 = nn.BatchNorm2d(out_channel * self.expansion)
        self.downsample = downsample  # None means an identity shortcut; otherwise the shortcut is projected

    def forward(self, x):
        residual = x
        if self.downsample is not None:
            residual = self.downsample(x)

        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)
        out = self.relu(out)

        out = self.conv3(out)
        out = self.bn3(out)

        out += residual

        return self.relu(out)


class ResNet(nn.Module):

    def __init__(self, block, block_sums, num_classes=1000, include_top=True):
        super(ResNet, self).__init__()

        self.include_top = include_top
        self.in_channel = 64

        self.conv1 = nn.Conv2d(3, self.in_channel, kernel_size=7, stride=2, padding=3, bias=False)
        self.bn1 = nn.BatchNorm2d(self.in_channel)
        self.relu = nn.ReLU(inplace=True)
        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)  # (112 - 3 + 2*1) / 2 + 1 = 56

        self.c2 = self.make_layer(block, 64, block_sums[0])
        self.c3 = self.make_layer(block, 128, block_sums[1], stride=2)  # stride=2: this stage downsamples
        self.c4 = self.make_layer(block, 256, block_sums[2], stride=2)
        self.c5 = self.make_layer(block, 512, block_sums[3], stride=2)

        if self.include_top:
            self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
            self.fc = nn.Linear(512 * block.expansion, num_classes)

        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')


    def make_layer(self, block, channel, block_sum, stride=1):
        downsample = None
        if stride != 1 or self.in_channel != channel * block.expansion:
            # stride == 1 with mismatched channels happens in ResNet-50's first stage,
            # where only the channel count changes; ResNet-34 never hits that case
            downsample = nn.Sequential(
                nn.Conv2d(self.in_channel, channel * block.expansion, kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(channel * block.expansion)
            )

        layers = []
        layers.append(block(self.in_channel, channel, stride, downsample))
        self.in_channel = block.expansion * channel
        for _ in range(1, block_sum):
            layers.append(block(self.in_channel, channel))

        return nn.Sequential(*layers)

    def forward(self, x):
        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)
        out = self.maxpool(out)

        out = self.c2(out)
        out = self.c3(out)
        out = self.c4(out)
        out = self.c5(out)

        if self.include_top:
            out = self.avgpool(out)
            out = torch.flatten(out, 1)  # keep the batch dimension
            out = self.fc(out)

        return out


def resNet34():
    return ResNet(BasicBlock, [3, 4, 6, 3])


def resNet101():
    return ResNet(Bottleneck, [3, 4, 23, 3])
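
A quick smoke test (my own snippet) to verify the output shape:

if __name__ == '__main__':
    model = resNet34()
    x = torch.randn(2, 3, 224, 224)
    print(model(x).shape)  # torch.Size([2, 1000])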
