Modern Convolutional Networks in Practice, Part 6: Building ResNet from Scratch in PyTorch to Train on the MNIST Dataset

🌈🌈🌈 Modern Convolutional Networks in Practice: series table of contents

All of the code in this article was run in PyCharm.
The companion code resources for this article have been uploaded.

1. MNIST dataset processing and loading, network initialization, and the test function
2. The training function and building LeNet in PyTorch
3. Building AlexNet from scratch in PyTorch to train on MNIST
4. Building VGGNet from scratch in PyTorch to train on MNIST
5. Building GoogLeNet from scratch in PyTorch to train on MNIST
6. Building ResNet from scratch in PyTorch to train on MNIST

16. ResNet

In 2015, Kaiming He and his colleagues proposed the ResNet model, which won the ImageNet image classification competition with a network as deep as 152 layers. Comparing ResNet with VGGNet shows that deeper architectures can improve a network's feature extraction ability and classification performance, but this does not mean a network can be deepened indefinitely: once the depth passes a certain point, adding more layers causes performance to degrade. To solve this degradation problem in deep networks, ResNet introduces a residual mechanism and builds the network from residual blocks, so that the output of a shallow layer can be passed directly to deeper layers along the residual (shortcut) branch. The residual block structure is shown in the figure below:

[Figure: residual block structure]
The core of ResNet is the identity-mapping structure inside the residual block, which combines a main path (the stacked convolutions) with a shortcut branch. When the network grows so deep that gradients vanish along the main path, the shortcut branch keeps the gradient signal flowing, preventing the vanishing gradients and performance degradation caused by excessive depth.
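In formula form (following the original ResNet paper): given an input x, the stacked layers learn a residual mapping F rather than the full mapping, and the block outputs the sum of the two paths:

```
y = \mathcal{F}(x, \{W_i\}) + x
```

When the shortcut has to change the number of channels or the spatial resolution, the identity term is replaced by a linear projection, y = F(x, {W_i}) + W_s x, where W_s is a 1x1 convolution; this is exactly the role played by conv3 in the implementation below.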

17. Network Architecture

ResNet(
 (conv1): Conv2d(1, 64, kernel_size=(1, 1), stride=(1, 1))
 (maxpool1): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
 (resblock1): Residual(
  (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (conv3): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1))
  (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
  (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
  (relu1): ReLU()
  (relu2): ReLU()
 )
 (resblock2): Residual(
  (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (conv3): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1))
  (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
  (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
  (relu1): ReLU()
  (relu2): ReLU()
 )
 (resblock3): Residual(
  (conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
  (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (conv3): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2))
  (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
  (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
  (relu1): ReLU()
  (relu2): ReLU()
 )
 (resblock4): Residual(
  (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (conv3): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1))
  (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
  (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
  (relu1): ReLU()
  (relu2): ReLU()
 )
 (resblock5): Residual(
  (conv1): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (conv3): Conv2d(128, 256, kernel_size=(1, 1), stride=(1, 1))
  (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
  (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
  (relu1): ReLU()
  (relu2): ReLU()
 )
 (resblock6): Residual(
  (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (conv3): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1))
  (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
  (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
  (relu1): ReLU()
  (relu2): ReLU()
 )
 (resblock7): Residual(
  (conv1): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (conv3): Conv2d(256, 512, kernel_size=(1, 1), stride=(1, 1))
  (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
  (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
  (relu1): ReLU()
  (relu2): ReLU()
 )
 (resblock8): Residual(
  (conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (conv3): Conv2d(512, 512, kernel_size=(1, 1), stride=(1, 1))
  (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
  (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
  (relu1): ReLU()
  (relu2): ReLU()
 )
 (avgpool): AdaptiveAvgPool2d(output_size=(1, 1))
 (fc): Linear(in_features=512, out_features=10, bias=True)
)
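This summary is simply the printed repr of the model defined in the next two sections; a minimal sketch of how to reproduce it, assuming the `Residual` and `ResNet` classes below:

```python
# Instantiate the network for the 10 MNIST classes and print its layer structure
model = ResNet(num_classes=10)
print(model)
```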

18. Building the Residual Block (Residual) in PyTorch

import torch.nn as nn


class Residual(nn.Module):
    def __init__(self, in_channel, out_channel, stride, upsample):
        super(Residual, self).__init__()
        # Main path: two 3x3 convolutions; the first one may downsample via its stride
        self.conv1 = nn.Conv2d(in_channel, out_channel, kernel_size=3, stride=stride, padding=1)
        self.conv2 = nn.Conv2d(out_channel, out_channel, kernel_size=3, stride=1, padding=1)
        # Shortcut path: 1x1 convolution matching the channel count and stride of the main path
        self.conv3 = nn.Conv2d(in_channel, out_channel, kernel_size=1, stride=stride)

        self.bn1 = nn.BatchNorm2d(out_channel, affine=False)
        self.bn2 = nn.BatchNorm2d(out_channel, affine=False)

        self.relu1 = nn.ReLU()
        self.relu2 = nn.ReLU()
        # Note: `upsample` is accepted but unused here; the 1x1 projection shortcut is always applied

    def forward(self, x):
        out = self.relu1(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))

        # Project the input and add it to the main-path output (residual connection)
        x = self.conv3(x)
        out = self.relu2(out + x)

        return out
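A quick shape check on a single block (a minimal sketch; this block mirrors `resblock3` of the full network, doubling the channels while halving the spatial size):

```python
import torch

# 64 -> 128 channels with stride 2: both the main path and the 1x1 shortcut
# downsample the 14x14 feature map to 7x7, so the addition is well-defined
block = Residual(64, 128, stride=2, upsample=True)
x = torch.randn(1, 64, 14, 14)
print(block(x).shape)  # torch.Size([1, 128, 7, 7])
```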

19. Building ResNet in PyTorch

class ResNet(nn.Module):
    def __init__(self, num_classes):
        super(ResNet, self).__init__()
        # Stem: 1-channel MNIST input -> 64 feature maps via a 1x1 convolution,
        # followed by 3x3 max pooling with stride 2
        self.conv1 = nn.Conv2d(1, 64, kernel_size=1)
        self.maxpool1 = nn.MaxPool2d(3, stride=2, padding=1)

        # Eight residual blocks; resblock3 downsamples with stride 2, and the
        # channel width doubles at resblock3, resblock5 and resblock7
        self.resblock1 = Residual(64, 64, 1, True)
        self.resblock2 = Residual(64, 64, 1, True)
        self.resblock3 = Residual(64, 128, 2, True)
        self.resblock4 = Residual(128, 128, 1, True)
        self.resblock5 = Residual(128, 256, 1, True)
        self.resblock6 = Residual(256, 256, 1, True)
        self.resblock7 = Residual(256, 512, 1, True)
        self.resblock8 = Residual(512, 512, 1, True)
        self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
        self.fc = nn.Linear(512, num_classes)

    def forward(self, x):
        x = self.maxpool1(self.conv1(x))
        x = self.resblock1(x)
        x = self.resblock2(x)
        x = self.resblock3(x)
        x = self.resblock4(x)
        x = self.resblock5(x)
        x = self.resblock6(x)
        x = self.resblock7(x)
        x = self.resblock8(x)

        # Global average pooling, flatten, then the final classifier
        x = self.avgpool(x)
        x = x.reshape(x.shape[0], -1)
        x = self.fc(x)

        return x
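Before training, the full model can be sanity-checked on a dummy MNIST-sized batch (a minimal sketch):

```python
import torch

model = ResNet(num_classes=10)

# A dummy batch of four 1-channel 28x28 images -> ten class scores per image
x = torch.randn(4, 1, 28, 28)
print(model(x).shape)  # torch.Size([4, 10])

# Total number of trainable parameters
print(sum(p.numel() for p in model.parameters() if p.requires_grad))
```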

D:\conda\envs\pytorch\python.exe A:\0_MNIST\train.py

Reading data… train_data: (60000, 28, 28) train_label (60000,)
test_data: (10000, 28, 28) test_label (10000,)

Initialize neural network
test loss 2302.5
test accuracy 10.1%

epoch step 1
training time 23.9s
training loss 177.2
test loss 86.3
test accuracy 97.3%

epoch step 2
training time 30.2s
training loss 80.1
test loss 122.0
test accuracy 96.5%

epoch step 3
training time 18.8s
training loss 63.4
test loss 69.1
test accuracy 97.9%

Training finished
3 epoch training time 72.9s
One epoch average training time 24.3s

Process finished with exit code 0

To train on MNIST with torchvision's ResNet-18 instead of the from-scratch model above, first import the required libraries and download the MNIST dataset. The stock ResNet-18 then has to be adapted: its final fully connected layer must output the 10 MNIST classes, and its first convolution must accept single-channel grayscale images rather than 3-channel RGB. After that, run the training loop while printing the loss, save the trained weights to a file, and finally measure the model's accuracy on the test set. One possible implementation:

```python
import torch
import torch.nn as nn
import torch.optim as optim
from torchvision import datasets, models, transforms

# Load the MNIST dataset
train_dataset = datasets.MNIST(root='./data', train=True, transform=transforms.ToTensor(), download=True)
test_dataset = datasets.MNIST(root='./data', train=False, transform=transforms.ToTensor(), download=True)

# Create a ResNet-18 model and adapt it to 1-channel, 10-class MNIST
model = models.resnet18(pretrained=True)
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
in_features = model.fc.in_features
model.fc = nn.Linear(in_features, 10)

# Set up the optimizer and loss function
optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
criterion = nn.CrossEntropyLoss()

# Training loop
num_epochs = 10
batch_size = 64
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True)

for epoch in range(num_epochs):
    running_loss = 0.0
    for images, labels in train_loader:
        optimizer.zero_grad()
        outputs = model(images)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
    print("Epoch {} - Training loss: {:.4f}".format(epoch + 1, running_loss / len(train_loader)))

# Save the trained weights
torch.save(model.state_dict(), 'resnet18_mnist.pth')

# Evaluate accuracy on the test set
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size, shuffle=False)
correct = 0
total = 0
model.eval()
with torch.no_grad():
    for images, labels in test_loader:
        outputs = model(images)
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

accuracy = 100 * correct / total
print("Test accuracy: {:.2f}%".format(accuracy))
```
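To reuse the saved weights later, they can be loaded back into an identically constructed model; a minimal sketch, assuming the `resnet18_mnist.pth` file produced above:

```python
import torch
import torch.nn as nn
from torchvision import models

# Rebuild the same architecture (1-channel stem, 10-class head), then load the weights
model = models.resnet18()
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
model.fc = nn.Linear(model.fc.in_features, 10)
model.load_state_dict(torch.load('resnet18_mnist.pth'))
model.eval()  # switch to inference mode before evaluating
```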
