Modern Convolutional Networks in Practice, Part 6: Building ResNet from Scratch in PyTorch to Train on MNIST

🌈🌈🌈 Modern Convolutional Networks in Practice: series table of contents

The code in this article was run in PyCharm.
The companion code for this article has been uploaded.

1. MNIST dataset processing and loading, network initialization, and the test function
2. The training function, and building LeNet in PyTorch
3. Building AlexNet from scratch in PyTorch to train on MNIST
4. Building VGGNet from scratch in PyTorch to train on MNIST
5. Building GoogLeNet from scratch in PyTorch to train on MNIST
6. Building ResNet from scratch in PyTorch to train on MNIST

16. ResNet

In 2015, Kaiming He et al. proposed ResNet, which won the ImageNet classification competition with a network up to 152 layers deep. Comparing ResNet with VGGNet shows that deeper architectures can extract stronger features and classify better, but it does not follow that networks can be deepened without limit: past a certain depth, adding layers actually degrades performance. To solve this degradation problem, ResNet introduces a residual mechanism and is built from residual blocks, so that a shallow layer's output can be carried directly to deeper layers along a shortcut (residual) connection. The structure of a residual block is shown in the figure:

(Figure: structure of a residual block)
The core of ResNet is the identity mapping inside the residual block, which pairs the main path (the stacked convolutions) with a shortcut connection. When the network becomes deep enough that gradients vanish along the main path, the shortcut keeps gradient information intact, guarding against the vanishing gradients and performance degradation that excessive depth otherwise causes.
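The mechanism fits in a few lines of PyTorch. The sketch below is illustrative only, not the article's implementation (that follows in section 18); the module name TinyResidual and its two-convolution main path are assumptions for the example. The point is the `+ x` term: even if the convolutions contribute nothing useful, the input still passes through unchanged, and gradients always have a direct route back through the addition.

import torch
import torch.nn as nn

class TinyResidual(nn.Module):
    """Minimal residual unit: out = ReLU(F(x) + x)."""
    def __init__(self, channels):
        super().__init__()
        # F(x): the main path, here two shape-preserving 3x3 convolutions
        self.f = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )
        self.relu = nn.ReLU()

    def forward(self, x):
        # The identity shortcut "+ x" lets signal and gradient bypass F
        return self.relu(self.f(x) + x)

x = torch.randn(1, 64, 14, 14)
print(TinyResidual(64)(x).shape)  # torch.Size([1, 64, 14, 14])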

17. Network Architecture

ResNet(
 (conv1): Conv2d(1, 64, kernel_size=(1, 1), stride=(1, 1))
 (maxpool1): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
 (resblock1): Residual(
  (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (conv3): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1))
  (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
  (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
  (relu1): ReLU()
  (relu2): ReLU()
 )
 (resblock2): Residual(
  (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (conv3): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1))
  (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
  (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
  (relu1): ReLU()
  (relu2): ReLU()
 )
 (resblock3): Residual(
  (conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
  (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (conv3): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2))
  (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
  (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
  (relu1): ReLU()
  (relu2): ReLU()
 )
 (resblock4): Residual(
  (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (conv3): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1))
  (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
  (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
  (relu1): ReLU()
  (relu2): ReLU()
 )
 (resblock5): Residual(
  (conv1): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (conv3): Conv2d(128, 256, kernel_size=(1, 1), stride=(1, 1))
  (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
  (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
  (relu1): ReLU()
  (relu2): ReLU()
 )
 (resblock6): Residual(
  (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (conv3): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1))
  (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
  (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
  (relu1): ReLU()
  (relu2): ReLU()
 )
 (resblock7): Residual(
  (conv1): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (conv3): Conv2d(256, 512, kernel_size=(1, 1), stride=(1, 1))
  (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
  (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
  (relu1): ReLU()
  (relu2): ReLU()
 )
 (resblock8): Residual(
  (conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (conv3): Conv2d(512, 512, kernel_size=(1, 1), stride=(1, 1))
  (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
  (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
  (relu1): ReLU()
  (relu2): ReLU()
 )
 (avgpool): AdaptiveAvgPool2d(output_size=(1, 1))
 (fc): Linear(in_features=512, out_features=10, bias=True)
)
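This summary is just PyTorch's printed module tree; with the Residual and ResNet classes defined in sections 18 and 19 below, it can be reproduced by printing a model instance:

model = ResNet(num_classes=10)
print(model)  # prints the module tree shown above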

18. Building the Residual Block in PyTorch

import torch.nn as nn

class Residual(nn.Module):
    def __init__(self, in_channel, out_channel, stride, downsample):
        super(Residual, self).__init__()
        # Main path: two 3x3 convolutions; the first may reduce resolution via stride
        self.conv1 = nn.Conv2d(in_channel, out_channel, kernel_size=3, stride=stride, padding=1)
        self.conv2 = nn.Conv2d(out_channel, out_channel, kernel_size=3, stride=1, padding=1)
        # Shortcut path: 1x1 convolution that matches channels and stride
        self.conv3 = nn.Conv2d(in_channel, out_channel, kernel_size=1, stride=stride)
        self.downsample = downsample

        self.bn1 = nn.BatchNorm2d(out_channel, affine=False)
        self.bn2 = nn.BatchNorm2d(out_channel, affine=False)

        self.relu1 = nn.ReLU()
        self.relu2 = nn.ReLU()

    def forward(self, x):
        out = self.relu1(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))

        # Project the input on the shortcut so its shape matches the main path;
        # with downsample=False the identity is added directly instead
        if self.downsample:
            x = self.conv3(x)
        out = self.relu2(out + x)

        return out
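A quick shape check (a minimal sketch, assuming the Residual class above) shows that a block configured like resblock3 below halves the spatial resolution while widening the channels, with the 1x1 shortcut convolution keeping the addition shape-compatible:

import torch

block = Residual(64, 128, stride=2, downsample=True)
x = torch.randn(1, 64, 14, 14)  # the feature map size entering resblock3
print(block(x).shape)           # torch.Size([1, 128, 7, 7])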

19. Building ResNet in PyTorch

class ResNet(nn.Module):
    def __init__(self, num_classes):
        super(ResNet, self).__init__()
        # Stem: a 1x1 convolution lifts the single MNIST channel to 64 feature
        # maps, then a 3x3 max pool halves the 28x28 input to 14x14
        self.conv1 = nn.Conv2d(1, 64, kernel_size=1)
        self.maxpool1 = nn.MaxPool2d(3, stride=2, padding=1)

        # Eight residual blocks; only resblock3 reduces resolution (stride 2).
        # Every block applies the 1x1 shortcut projection, matching the
        # printed architecture in section 17
        self.resblock1 = Residual(64, 64, 1, True)
        self.resblock2 = Residual(64, 64, 1, True)
        self.resblock3 = Residual(64, 128, 2, True)
        self.resblock4 = Residual(128, 128, 1, True)
        self.resblock5 = Residual(128, 256, 1, True)
        self.resblock6 = Residual(256, 256, 1, True)
        self.resblock7 = Residual(256, 512, 1, True)
        self.resblock8 = Residual(512, 512, 1, True)
        # Global average pooling collapses each 512-channel map to 1x1,
        # and a single linear layer produces the class logits
        self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
        self.fc = nn.Linear(512, num_classes)

    def forward(self, x):
        x = self.maxpool1(self.conv1(x))
        x = self.resblock1(x)
        x = self.resblock2(x)
        x = self.resblock3(x)
        x = self.resblock4(x)
        x = self.resblock5(x)
        x = self.resblock6(x)
        x = self.resblock7(x)
        x = self.resblock8(x)

        x = self.avgpool(x)
        x = x.reshape(x.shape[0], -1)  # flatten (N, 512, 1, 1) to (N, 512)
        x = self.fc(x)

        return x
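Before plugging the model into a training loop, a forward pass on a dummy MNIST-sized batch (a sketch using the classes above) confirms the shapes line up end to end, with one logit per class:

import torch

net = ResNet(num_classes=10)
x = torch.randn(4, 1, 28, 28)  # a batch of 4 grayscale 28x28 images
print(net(x).shape)            # torch.Size([4, 10])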

D:\conda\envs\pytorch\python.exe A:\0_MNIST\train.py

Reading data… train_data: (60000, 28, 28) train_label (60000,)
test_data: (10000, 28, 28) test_label (10000,)

Initialize neural network
test loss 2302.5
test accuracy 10.1%

epoch step 1
training time 23.9s
training loss 177.2
test loss 86.3
test accuracy 97.3%

epoch step 2
training time 30.2s
training loss 80.1
test loss 122.0
test accuracy 96.5%

epoch step 3
training time 18.8s
training loss 63.4
test loss 69.1
test accuracy 97.9%

Training finished
3 epoch training time 72.9s
One epoch average training time 24.3s

Process finished with exit code 0
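One sanity check on this log: the 10.1% test accuracy before training is chance level for a 10-class problem, and the large initial test loss is consistent with a randomly initialized network predicting close to a uniform distribution (the cross-entropy of a uniform 10-way prediction is ln(10) ≈ 2.303 per sample). From that baseline, a single epoch already brings the test accuracy to 97.3%.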
