[PyTorch] Convolutional Neural Networks for MNIST Handwritten Digit Classification

This post walks through implementing four classic convolutional neural network models in PyTorch, LeNet, AlexNet, InceptionNet, and ResNet, and training each of them on the MNIST handwritten digit dataset. As training proceeds, the test error rate falls, showing that the models improve at recognizing handwritten digits.


The code below was written and run in a Jupyter Notebook.

1. Load the data

import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

size = 28
num_classes = 10
batch_size = 32
learning_rate = 0.005
num_epochs = 50

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
print(device)

train_dataset = torchvision.datasets.MNIST(root = 'data',
                                            train = True,
                                            transform= torchvision.transforms.ToTensor(),
                                            download= True)
train_loader = torch.utils.data.DataLoader(dataset = train_dataset,
                                            batch_size = batch_size,
                                            shuffle = True)
test_dataset = torchvision.datasets.MNIST(root = 'data',
                                            train = False,
                                            transform= torchvision.transforms.ToTensor(),
                                            download= True)
test_loader = torch.utils.data.DataLoader(dataset = test_dataset,
                                            batch_size = batch_size,
                                            shuffle = True)                                            
print(len(train_loader),len(test_loader))
cuda:0
1875 313
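
As a quick sanity check (not part of the original notebook), one batch can be pulled from the loader to confirm the shapes and value range that ToTensor produces:

x, y = next(iter(train_loader))
print(x.shape, y.shape)                 # expected: torch.Size([32, 1, 28, 28]) torch.Size([32])
print(x.min().item(), x.max().item())   # ToTensor scales pixel values into [0, 1]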

2. Define LeNet

class LeNet(nn.Module):
    '''Initialization of a LeNet model written in PyTorch.
    LeNet is a classic convolutional neural network proposed by Yann LeCun et al. in 1998.
    It contains two convolutional layers and three fully connected layers and is used to classify images.
    This initializer defines the layers the model will use.
    super(LeNet, self).__init__() calls the parent class initializer;
    nn.Module is the base class of all neural network models.'''
    def __init__(self):
        super(LeNet, self).__init__()
        self.conv1 = nn.Conv2d(1,6,3)   # conv layer: 1 input channel, 6 output channels, 3x3 kernel. input: [1, 28, 28]
        self.pool1 = nn.MaxPool2d(2,2)
        self.conv2 = nn.Conv2d(6,16,3)
        self.pool2 = nn.MaxPool2d(2,2)
        self.fc3 = nn.Linear(16*5*5, 120)
        self.fc4 = nn.Linear(120, 84)
        self.fc5 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.conv1(x)
        x = torch.relu(x)
        x = self.pool1(x)

        x = self.conv2(x)
        x = torch.relu(x)
        x = self.pool2(x)

        x = x.view(x.size(0), -1)   # reshape from [batch_size, channel, height, width] to [batch_size, -1]; -1 lets PyTorch infer that dimension so the total number of elements is unchanged. Typically used to turn a conv layer's output into the input of a fully connected layer.
        x = self.fc3(x)
        x = torch.relu(x)
        x = self.fc4(x)
        x = torch.relu(x)
        x = self.fc5(x)

        return x
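
The in_features=16*5*5 of fc3 follows from the layer arithmetic on a 28x28 input. A short illustrative trace (not in the original post) makes the numbers explicit:

m = LeNet()
x = torch.zeros(1, 1, 28, 28)
x = m.pool1(torch.relu(m.conv1(x)))   # 28 -> conv 3x3 (no padding) -> 26 -> maxpool 2x2 -> 13
x = m.pool2(torch.relu(m.conv2(x)))   # 13 -> conv 3x3 -> 11 -> maxpool 2x2 -> 5
print(x.shape)                        # torch.Size([1, 16, 5, 5]); flattened: 16*5*5 = 400 features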

3. Test LeNet

from torchsummary import summary
model = LeNet().to(device)
summary(model, (1,28,28))

x = torch.randn(1,1,28,28).to(device)
out = model(x)
print(out, out.shape)
----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1            [-1, 6, 26, 26]              60
         MaxPool2d-2            [-1, 6, 13, 13]               0
            Conv2d-3           [-1, 16, 11, 11]             880
         MaxPool2d-4             [-1, 16, 5, 5]               0
            Linear-5                  [-1, 120]          48,120
            Linear-6                   [-1, 84]          10,164
            Linear-7                   [-1, 10]             850
================================================================
Total params: 60,074
Trainable params: 60,074
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.00
Forward/backward pass size (MB): 0.06
Params size (MB): 0.23
Estimated Total Size (MB): 0.29
----------------------------------------------------------------
tensor([[ 0.0470, -0.1018, -0.0517, -0.0125,  0.0844,  0.1247,  0.0207,  0.1338,
         -0.0425, -0.0684]], device='cuda:0', grad_fn=<AddmmBackward0>) torch.Size([1, 10])
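
The parameter counts in the table can be reproduced by hand: conv1 has 6*(1*3*3)+6 = 60 weights and biases, fc3 has 400*120+120 = 48,120, and so on. A one-line check (not in the original) sums them directly:

print(sum(p.numel() for p in model.parameters()))   # 60074, matching "Total params" above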

4. Training function

def train(model, num_epochs, optimizer, save_name, device='cpu'):   # device defaults to 'cpu'; parameters with default values come last to avoid call errors
    criterion = nn.CrossEntropyLoss()

    for epoch in range(num_epochs):
        # train
        model.train()
        train_loss = 0.
        for x,y in train_loader:
            x = x.to(device)
            y = y.to(device)
            outputs = model(x)
            loss = criterion(outputs, y)
            train_loss += loss.item()
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        print('Epoch: %3d/%d, train loss: %.6f,' %(epoch+1, num_epochs,
                                            train_loss/len(train_loader.dataset)*batch_size), end=' ')   # ≈ average per-sample loss
        
        # test 
        model.eval()    # switch BatchNorm and Dropout to evaluation mode
        with torch.no_grad(): # gradients are not needed for evaluation
            test_loss = 0.
            error = 0.
            for x,y in test_loader:
                x = x.to(device)
                y = y.to(device)
                outputs = model(x)
                loss = criterion(outputs, y)
                test_loss += loss.item()
                pred = torch.argmax(outputs, dim=1)
                error += torch.sum((pred!=y).float()).item()
            test_loss /= len(test_loader.dataset)
            error /= len(test_loader.dataset)
            print('test loss: %.6f, test error: %.2f%%' %(test_loss, error*100))
        torch.save(model, save_name)   # saves the entire model object (not just the state_dict) after every epoch
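
torch.save(model, save_name) stores the whole model object, so reloading it later only requires the corresponding class definition to be available. A minimal sketch (not from the original) for loading a checkpoint, e.g. the 'Lenet.pth' file written in section 5, and evaluating it on one test batch:

restored = torch.load('Lenet.pth', map_location=device)   # recent PyTorch versions (>= 2.6) may also need weights_only=False
restored.eval()
with torch.no_grad():
    x, y = next(iter(test_loader))
    pred = torch.argmax(restored(x.to(device)), dim=1)
    print((pred == y.to(device)).float().mean().item())    # accuracy on this batch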

5. Train LeNet

model2 = LeNet().to(device)
optimizer = torch.optim.SGD(model2.parameters(), lr = learning_rate)
train(model2, num_epochs, optimizer, 'Lenet.pth', device)
Epoch:   1/50, train loss: 2.300986, test loss: 0.071927, test error: 88.65%
Epoch:   2/50, train loss: 2.294205, test loss: 0.071585, test error: 88.65%
Epoch:   3/50, train loss: 2.189130, test loss: 0.048439, test error: 41.12%
Epoch:   4/50, train loss: 0.603036, test loss: 0.010060, test error: 9.44%
Epoch:   5/50, train loss: 0.295670, test loss: 0.007042, test error: 6.55%
...
Epoch:  46/50, train loss: 0.017435, test loss: 0.001094, test error: 1.08%
Epoch:  47/50, train loss: 0.017018, test loss: 0.001171, test error: 1.22%
Epoch:  48/50, train loss: 0.016646, test loss: 0.001084, test error: 1.03%
Epoch:  49/50, train loss: 0.015562, test loss: 0.001299, test error: 1.27%
Epoch:  50/50, train loss: 0.014892, test loss: 0.001314, test error: 1.30%

6. Define AlexNet

class AlexNet(nn.Module):
    '''AlexNet is a convolutional neural network for computer vision tasks. It was proposed in 2012 by Alex Krizhevsky, Ilya Sutskever and Geoffrey Hinton at the University of Toronto and won the ImageNet image recognition competition that year.
    AlexNet has 5 convolutional layers and 3 fully connected layers. It improved on earlier approaches by, among other things, using the ReLU activation function and Dropout regularization. It was also trained on GPUs, a technique that had not been widely exploited before.
    AlexNet takes a 224x224 color image as input and outputs a classification for that image. On ImageNet its top-5 error rate was roughly 10 percentage points lower than the runner-up's. AlexNet became a milestone in the development of deep learning.
    Reference: http://t.csdn.cn/nSl8r'''
    def __init__(self, out_size = 10, init_weights = False):
        super(AlexNet,self).__init__()
        # use nn.Sequential() to pack the layers into one module and keep the code concise
        self.features = nn.Sequential(  # convolutional layers extract image features
            nn.Conv2d(1, 16, kernel_size=5, stride=1, padding=2),   # input[1, 28, 28]
            nn.ReLU(inplace=True),      # inplace=True overwrites the input tensor in place, saving memory and the time spent repeatedly allocating and freeing it; safe as long as the original value is not needed again
            nn.MaxPool2d(kernel_size=2, stride=2),
            
            nn.Conv2d(16, 32, kernel_size=5, padding=2),           
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2, stride=2),

            nn.Conv2d(32, 64, kernel_size=5, padding=2),         
            nn.ReLU(inplace=True),

            nn.Conv2d(64, 128, kernel_size=5, padding=2),         
            nn.ReLU(inplace=True),

            nn.Conv2d(128, 128, kernel_size=5, padding=2),        
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2, stride=2),               
        )
        
        self.classifier = nn.Sequential(
            nn.Dropout(p=0.5),
            nn.Linear(128*3*3,1152),
            nn.ReLU(inplace=True),
            nn.Dropout(p=0.5),
            nn.Linear(1152, 1152),
            nn.ReLU(inplace=True),
            nn.Linear(1152, out_size),
        )

        if init_weights:         # note: PyTorch already applies Kaiming-style initialization to conv and linear layers by default
            self._initialize_weights()
        
    def forward(self, x):
        x = self.features(x)                        # convolutional layers extract features
        x = torch.flatten(x, start_dim=1)           # tensors in PyTorch are usually ordered [batch, channel, height, width]; flatten everything after the batch dimension
        x = self.classifier(x)                      # fully connected layers perform the classification
        return x
        
    # weight initialization; in practice PyTorch initializes weights automatically when the network is built
    def _initialize_weights(self):
        for m in self.modules():
            if isinstance(m, nn.Conv2d):    # if m is a convolutional layer
                # isinstance checks whether m is an instance of nn.Conv2d;
                # only convolutional layers receive Kaiming normal initialization here
                nn.init.kaiming_normal_(m.weight, mode='fan_out',
                                         nonlinearity='relu')
                if m.bias is not None:
                    nn.init.constant_(m.bias, 0)
            elif isinstance(m, nn.Linear):  # if m is a fully connected layer
                nn.init.normal_(m.weight, 0, 0.01)    # normal distribution (mean 0, std 0.01)
                nn.init.constant_(m.bias, 0)             
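
The 128*3*3 = 1152 in_features of the first Linear layer comes from the spatial size left after the three 2x2 max-pools (28 -> 14 -> 7 -> 3). A quick illustrative check (not in the original):

feat = AlexNet().features(torch.zeros(1, 1, 28, 28))
print(feat.shape)   # torch.Size([1, 128, 3, 3]) -> flattened to 1152 features per sample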

7. Test AlexNet

from torchsummary import summary
model = AlexNet().to(device)
summary(model, (1,28,28))

x = torch.randn(1,1,28,28).to(device)
out = model(x)
print(out, out.shape)
----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1           [-1, 16, 28, 28]             416
              ReLU-2           [-1, 16, 28, 28]               0
         MaxPool2d-3           [-1, 16, 14, 14]               0
            Conv2d-4           [-1, 32, 14, 14]          12,832
              ReLU-5           [-1, 32, 14, 14]               0
         MaxPool2d-6             [-1, 32, 7, 7]               0
            Conv2d-7             [-1, 64, 7, 7]          51,264
              ReLU-8             [-1, 64, 7, 7]               0
            Conv2d-9            [-1, 128, 7, 7]         204,928
             ReLU-10            [-1, 128, 7, 7]               0
           Conv2d-11            [-1, 128, 7, 7]         409,728
             ReLU-12            [-1, 128, 7, 7]               0
        MaxPool2d-13            [-1, 128, 3, 3]               0
          Dropout-14                 [-1, 1152]               0
           Linear-15                 [-1, 1152]       1,328,256
             ReLU-16                 [-1, 1152]               0
          Dropout-17                 [-1, 1152]               0
           Linear-18                 [-1, 1152]       1,328,256
             ReLU-19                 [-1, 1152]               0
           Linear-20                   [-1, 10]          11,530
================================================================
Total params: 3,347,210
Trainable params: 3,347,210
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.00
Forward/backward pass size (MB): 0.62
Params size (MB): 12.77
Estimated Total Size (MB): 13.40
----------------------------------------------------------------
tensor([[ 0.0068,  0.0046,  0.0296, -0.0015, -0.0043, -0.0235,  0.0388,  0.0007,
          0.0145, -0.0453]], device='cuda:0', grad_fn=<AddmmBackward0>) torch.Size([1, 10])

8. Train AlexNet

model3 = AlexNet(out_size = 10,init_weights=True).to(device)
print(model3)
optimizer = torch.optim.SGD(model3.parameters(), lr = learning_rate)
train(model3, num_epochs, optimizer, 'AlexNet.pth', device)
Epoch:   1/50, train loss: 2.301477, test loss: 0.071999, test error: 88.65%
Epoch:   2/50, train loss: 2.299148, test loss: 0.071852, test error: 86.17%
Epoch:   3/50, train loss: 2.273426, test loss: 0.069350, test error: 80.80%
Epoch:   4/50, train loss: 1.308875, test loss: 0.010009, test error: 9.43%
Epoch:   5/50, train loss: 0.289133, test loss: 0.004225, test error: 4.40%
...
Epoch:  46/50, train loss: 0.015937, test loss: 0.000644, test error: 0.70%
Epoch:  47/50, train loss: 0.014567, test loss: 0.000742, test error: 0.69%
Epoch:  48/50, train loss: 0.014157, test loss: 0.000673, test error: 0.64%
Epoch:  49/50, train loss: 0.013786, test loss: 0.000747, test error: 0.82%
Epoch:  50/50, train loss: 0.013212, test loss: 0.000781, test error: 0.73%

9. Define InceptionNet

class InceptionA(nn.Module):
    def __init__(self):
        super(InceptionA, self).__init__()
        self.conv11 = nn.Conv2d(12,6,1)
        self.bn11 = nn.BatchNorm2d(6)
        self.conv31 = nn.Conv2d(6,6,3,padding=1,stride=2)
        self.bn31 = nn.BatchNorm2d(6)
        self.conv12 = nn.Conv2d(12,8,1)
        self.bn12 = nn.BatchNorm2d(8)
        self.conv32 = nn.Conv2d(8,8,3,padding=1)
        self.bn32 = nn.BatchNorm2d(8)
        self.conv33 = nn.Conv2d(8,8,3,padding=1,stride=2)
        self.bn33 = nn.BatchNorm2d(8)
        self.pool = nn.MaxPool2d(2,2)
        self.conv13 = nn.Conv2d(12,4,1)
        self.bn13 = nn.BatchNorm2d(4)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        '''x: b*12*14*14
        out:b*18*7*7'''
        out1 = self.conv11(x)
        out1 = self.bn11(out1)
        out1 = self.relu(out1)
        out1 = self.conv31(out1) #(b,6,7,7)
        out1 = self.bn31(out1)
        out1 = self.relu(out1)

        out2 = self.conv12(x)
        out2 = self.bn12(out2)
        out2 = self.relu(out2)
        out2 = self.conv32(out2)
        out2 = self.bn32(out2)
        out2 = self.relu(out2)
        out2 = self.conv33(out2) #8
        out2 = self.bn33(out2)
        out2 = self.relu(out2)
        
        out3 = self.pool(x)
        out3 = self.conv13(out3) #4
        out3 = self.bn13(out3)
        out3 = torch.relu(out3)
        
        return torch.cat([out1,out2,out3], 1) # (b,18,7,7): concatenate along dim 1, the channel dimension (see the check after the class definitions below)
    
class InceptionB(nn.Module):

    def __init__(self):
        super(InceptionB, self).__init__()
        self.conv11 = nn.Conv2d(18,6,1)
        self.bn11 = nn.BatchNorm2d(6)
        self.conv12 = nn.Conv2d(18,8,1)
        self.bn12 = nn.BatchNorm2d(8)
        self.conv31 = nn.Conv2d(8,8,3,padding=1)
        self.bn31 = nn.BatchNorm2d(8)
        self.conv13 = nn.Conv2d(18,8,1)
        self.bn13 = nn.BatchNorm2d(8)
        self.conv32 = nn.Conv2d(8,8,3,padding=1)
        self.bn32 = nn.BatchNorm2d(8)
        self.conv33 = nn.Conv2d(8,8,3,padding=1)
        self.bn33 = nn.BatchNorm2d(8)
        self.pool = nn.MaxPool2d(3,1,padding=1)
        self.conv14 = nn.Conv2d(18,4,1)
        self.bn14 = nn.BatchNorm2d(4)
        self.relu = nn.ReLU(inplace=True)
        
    def forward(self, x):
        '''x: b*18*7*7
        out:b*26*7*7'''
        out1 = self.conv11(x)
        out1 = self.bn11(out1)
        out1 = self.relu(out1)
        
        out2 = self.conv12(x)
        out2 = self.bn12(out2)
        out2 = self.relu(out2)
        out2 = self.conv31(out2)
        out2 = self.bn31(out2)
        out2 = self.relu(out2)
        
        out3 = self.conv13(x)
        out3 = self.bn13(out3)
        out3 = self.relu(out3)
        out3 = self.conv32(out3)
        out3 = self.bn32(out3)
        out3 = self.relu(out3)
        out3 = self.conv33(out3)
        out3 = self.bn33(out3)
        out3 = self.relu(out3)
        
        out4 = self.pool(x)
        out4 = self.relu(out4)
        out4 = self.conv14(out4)
        out4 = self.bn14(out4)
        out4 = self.relu(out4)
        
        return torch.cat([out1,out2,out3,out4], 1)

class InceptionNet(nn.Module):

    def __init__(self):
        super(InceptionNet, self).__init__()
        self.conv1 = nn.Conv2d(1,12,3,padding=1)
        self.bn1 = nn.BatchNorm2d(12)
        self.pool = nn.MaxPool2d(2,2)
        self.conv2 = nn.Conv2d(12,12,3,padding=1)
        self.bn2 = nn.BatchNorm2d(12)
        self.inception1 = InceptionA()
        self.inception2 = InceptionB()
        self.conv3 = nn.Conv2d(26,32,3)
        self.bn3 = nn.BatchNorm2d(32)
        self.avg_pool = nn.AvgPool2d(5)
        self.fc = nn.Linear(32, 10)
        self.relu = nn.ReLU(inplace=True)
    
    def forward(self, x):
        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)
        out = self.pool(out)
        out = self.conv2(out)
        out = self.bn2(out)
        out = self.relu(out)
        out = self.inception1(out)
        out = self.inception2(out)
        out = self.conv3(out)
        out = self.bn3(out)
        out = self.relu(out)
        out = self.avg_pool(out)
        out = out.view(out.size(0), -1)
        out = self.fc(out)
        out = F.softmax(out, 0)   # NOTE: dim=0 normalizes over the batch dimension rather than over the 10 classes; combined with CrossEntropyLoss (which expects raw logits) this makes training very slow, as the results below show
        return out
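
Two details of these blocks are worth checking with a dummy tensor (illustrative snippets, not in the original). First, torch.cat([...], 1) concatenates along dim 1, the channel dimension, so InceptionA's branches with 6, 8 and 4 output channels combine into 18 channels at the halved 7x7 resolution:

block = InceptionA().eval()                      # eval() so BatchNorm uses its running statistics
print(block(torch.zeros(2, 12, 14, 14)).shape)   # torch.Size([2, 18, 7, 7])

Second, F.softmax(out, 0) normalizes over dim 0, the batch dimension, not over the 10 classes. With a single sample every entry becomes 1.0, which is exactly the all-ones tensor printed in the test below; dim=1 would give a proper distribution over the classes:

logits = torch.randn(1, 10)
print(F.softmax(logits, 0))                     # tensor([[1., 1., ..., 1.]]): each column has only one element
print(F.softmax(logits, dim=1).sum().item())    # 1.0: a proper distribution over the 10 classes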

10. Test InceptionNet

from torchsummary import summary
model = InceptionNet().to(device)
summary(model, (1,28,28))

x = torch.randn(1,1,28,28).to(device)
out = model(x)
print(out, out.shape)
----------------------------------------------------------------
    Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1           [-1, 12, 28, 28]             120
       BatchNorm2d-2           [-1, 12, 28, 28]              24
              ReLU-3           [-1, 12, 28, 28]               0
         MaxPool2d-4           [-1, 12, 14, 14]               0
            Conv2d-5           [-1, 12, 14, 14]           1,308
       BatchNorm2d-6           [-1, 12, 14, 14]              24
              ReLU-7           [-1, 12, 14, 14]               0
            Conv2d-8            [-1, 6, 14, 14]              78
       BatchNorm2d-9            [-1, 6, 14, 14]              12
             ReLU-10            [-1, 6, 14, 14]               0
           Conv2d-11              [-1, 6, 7, 7]             330
      BatchNorm2d-12              [-1, 6, 7, 7]              12
             ReLU-13              [-1, 6, 7, 7]               0
           Conv2d-14            [-1, 8, 14, 14]             104
      BatchNorm2d-15            [-1, 8, 14, 14]              16
             ReLU-16            [-1, 8, 14, 14]               0
           Conv2d-17            [-1, 8, 14, 14]             584
      BatchNorm2d-18            [-1, 8, 14, 14]              16
             ReLU-19            [-1, 8, 14, 14]               0
           Conv2d-20              [-1, 8, 7, 7]             584
      BatchNorm2d-21              [-1, 8, 7, 7]              16
             ReLU-22              [-1, 8, 7, 7]               0
        MaxPool2d-23             [-1, 12, 7, 7]               0
           Conv2d-24              [-1, 4, 7, 7]              52
      BatchNorm2d-25              [-1, 4, 7, 7]               8
       InceptionA-26             [-1, 18, 7, 7]               0
           Conv2d-27              [-1, 6, 7, 7]             114
      BatchNorm2d-28              [-1, 6, 7, 7]              12
             ReLU-29              [-1, 6, 7, 7]               0
           Conv2d-30              [-1, 8, 7, 7]             152
      BatchNorm2d-31              [-1, 8, 7, 7]              16
             ReLU-32              [-1, 8, 7, 7]               0
           Conv2d-33              [-1, 8, 7, 7]             584
      BatchNorm2d-34              [-1, 8, 7, 7]              16
             ReLU-35              [-1, 8, 7, 7]               0
           Conv2d-36              [-1, 8, 7, 7]             152
      BatchNorm2d-37              [-1, 8, 7, 7]              16
             ReLU-38              [-1, 8, 7, 7]               0
           Conv2d-39              [-1, 8, 7, 7]             584
      BatchNorm2d-40              [-1, 8, 7, 7]              16
             ReLU-41              [-1, 8, 7, 7]               0
           Conv2d-42              [-1, 8, 7, 7]             584
      BatchNorm2d-43              [-1, 8, 7, 7]              16
             ReLU-44              [-1, 8, 7, 7]               0
        MaxPool2d-45             [-1, 18, 7, 7]               0
             ReLU-46             [-1, 18, 7, 7]               0
           Conv2d-47              [-1, 4, 7, 7]              76
      BatchNorm2d-48              [-1, 4, 7, 7]               8
             ReLU-49              [-1, 4, 7, 7]               0
       InceptionB-50             [-1, 26, 7, 7]               0
           Conv2d-51             [-1, 32, 5, 5]           7,520
      BatchNorm2d-52             [-1, 32, 5, 5]              64
             ReLU-53             [-1, 32, 5, 5]               0
        AvgPool2d-54             [-1, 32, 1, 1]               0
           Linear-55                   [-1, 10]             330
================================================================
Total params: 13,548
Trainable params: 13,548
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.00
Forward/backward pass size (MB): 0.51
Params size (MB): 0.05
Estimated Total Size (MB): 0.57
----------------------------------------------------------------
tensor([[1., 1., 1., 1., 1., 1., 1., 1., 1., 1.]], device='cuda:0',
       grad_fn=<SoftmaxBackward0>) torch.Size([1, 10])

11. Train InceptionNet

model4 = InceptionNet().to(device)
optimizer = torch.optim.SGD(model4.parameters(), lr=0.001)

train(model4, num_epochs, optimizer, 'InceptionNet.pth', device)
Epoch:   1/50, train loss: 2.302582, test loss: 0.072063, test error: 87.60%
Epoch:   2/50, train loss: 2.302133, test loss: 0.072049, test error: 85.01%
Epoch:   3/50, train loss: 2.301665, test loss: 0.072034, test error: 81.74%
Epoch:   4/50, train loss: 2.301157, test loss: 0.072016, test error: 76.85%
Epoch:   5/50, train loss: 2.300600, test loss: 0.071999, test error: 70.20%
...
Epoch:  46/50, train loss: 2.219140, test loss: 0.069466, test error: 39.19%
Epoch:  47/50, train loss: 2.219096, test loss: 0.069458, test error: 38.24%
Epoch:  48/50, train loss: 2.218206, test loss: 0.069450, test error: 37.16%
Epoch:  49/50, train loss: 2.217568, test loss: 0.069384, test error: 36.25%
Epoch:  50/50, train loss: 2.217033, test loss: 0.069384, test error: 36.03%

InceptionNet barely learns here. The softmax over dim 0 flagged in the code above, together with the lower learning rate of 0.001 and the fact that CrossEntropyLoss expects raw logits, is the likely cause of the poor results.

12. Define ResNet (residual network)

# 3x3 convolution
def conv3x3(in_channels, out_channels, stride=1):
    return nn.Conv2d(in_channels, out_channels, kernel_size=3, 
                     stride=stride, padding=1, bias=False)

# Residual block
class ResidualBlock(nn.Module):
    def __init__(self, in_channels, out_channels, stride=1, downsample=None):
        super(ResidualBlock, self).__init__()
        self.conv1 = conv3x3(in_channels, out_channels, stride)
        self.bn1 = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)
        self.conv2 = conv3x3(out_channels, out_channels)
        self.bn2 = nn.BatchNorm2d(out_channels)
        self.downsample = downsample
        
    def forward(self, x):
        residual = x
        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)
        out = self.conv2(out)
        out = self.bn2(out)
        if self.downsample:
            residual = self.downsample(x)
        out += residual
        out = self.relu(out)
        return out
    
#ResNet
class ResNet(nn.Module):
    def __init__(self, layers, num_classes=10):
        super(ResNet, self).__init__()
        self.in_channels = 16
        self.conv = conv3x3(1, 16)
        self.bn = nn.BatchNorm2d(16)
        self.relu = nn.ReLU(inplace=True)
        self.layer1 = self.make_layer(16, layers[0])
        self.layer2 = self.make_layer(32, layers[1], 2)
        self.layer3 = self.make_layer(64, layers[2], 2)
        self.avg_pool = nn.AvgPool2d(7)
        self.fc = nn.Linear(64, num_classes)
        
    def make_layer(self, out_channels, blocks, stride=1):
        downsample = None
        if stride != 1 or self.in_channels != out_channels:
            downsample = nn.Sequential(
                conv3x3(self.in_channels, out_channels, stride=stride),
                nn.BatchNorm2d(out_channels))
        layers = []
        layers.append(ResidualBlock(self.in_channels, out_channels, stride, downsample))
        self.in_channels = out_channels
        for i in range(1, blocks):
            layers.append(ResidualBlock(out_channels, out_channels))
        return nn.Sequential(*layers)
    
    def forward(self, x):
        out = self.conv(x)
        out = self.bn(out)
        out = self.relu(out)
        out = self.layer1(out)
        out = self.layer2(out)
        out = self.layer3(out)
        out = self.avg_pool(out)
        out = out.view(out.size(0), -1)
        out = self.fc(out)
        out = F.softmax(out, dim=1)   # note: CrossEntropyLoss already applies log-softmax internally, so this extra softmax is redundant and keeps the reported training loss from falling very far
        return out
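
When the stride or the channel count changes, the identity path has to be projected so that out += residual adds tensors of the same shape; make_layer builds exactly that downsample branch (a strided 3x3 conv plus BatchNorm). A quick illustrative shape check (not in the original):

down = nn.Sequential(conv3x3(16, 32, stride=2), nn.BatchNorm2d(32))
block = ResidualBlock(16, 32, stride=2, downsample=down).eval()
print(block(torch.zeros(1, 16, 28, 28)).shape)   # torch.Size([1, 32, 14, 14]), matching the projected identity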

13. Test ResNet

from torchsummary import summary
model = ResNet([2,2,2]).to(device)
summary(model, (1,28,28))

x = torch.randn(1,1,28,28).to(device)
out = model(x)
print(out, out.shape)
----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1           [-1, 16, 28, 28]             144
       BatchNorm2d-2           [-1, 16, 28, 28]              32
              ReLU-3           [-1, 16, 28, 28]               0
            Conv2d-4           [-1, 16, 28, 28]           2,304
       BatchNorm2d-5           [-1, 16, 28, 28]              32
              ReLU-6           [-1, 16, 28, 28]               0
            Conv2d-7           [-1, 16, 28, 28]           2,304
       BatchNorm2d-8           [-1, 16, 28, 28]              32
              ReLU-9           [-1, 16, 28, 28]               0
    ResidualBlock-10           [-1, 16, 28, 28]               0
           Conv2d-11           [-1, 16, 28, 28]           2,304
      BatchNorm2d-12           [-1, 16, 28, 28]              32
             ReLU-13           [-1, 16, 28, 28]               0
           Conv2d-14           [-1, 16, 28, 28]           2,304
      BatchNorm2d-15           [-1, 16, 28, 28]              32
             ReLU-16           [-1, 16, 28, 28]               0
    ResidualBlock-17           [-1, 16, 28, 28]               0
           Conv2d-18           [-1, 32, 14, 14]           4,608
      BatchNorm2d-19           [-1, 32, 14, 14]              64
             ReLU-20           [-1, 32, 14, 14]               0
           Conv2d-21           [-1, 32, 14, 14]           9,216
      BatchNorm2d-22           [-1, 32, 14, 14]              64
           Conv2d-23           [-1, 32, 14, 14]           4,608
      BatchNorm2d-24           [-1, 32, 14, 14]              64
             ReLU-25           [-1, 32, 14, 14]               0
    ResidualBlock-26           [-1, 32, 14, 14]               0
           Conv2d-27           [-1, 32, 14, 14]           9,216
      BatchNorm2d-28           [-1, 32, 14, 14]              64
             ReLU-29           [-1, 32, 14, 14]               0
           Conv2d-30           [-1, 32, 14, 14]           9,216
      BatchNorm2d-31           [-1, 32, 14, 14]              64
             ReLU-32           [-1, 32, 14, 14]               0
    ResidualBlock-33           [-1, 32, 14, 14]               0
           Conv2d-34             [-1, 64, 7, 7]          18,432
      BatchNorm2d-35             [-1, 64, 7, 7]             128
             ReLU-36             [-1, 64, 7, 7]               0
           Conv2d-37             [-1, 64, 7, 7]          36,864
      BatchNorm2d-38             [-1, 64, 7, 7]             128
           Conv2d-39             [-1, 64, 7, 7]          18,432
      BatchNorm2d-40             [-1, 64, 7, 7]             128
             ReLU-41             [-1, 64, 7, 7]               0
    ResidualBlock-42             [-1, 64, 7, 7]               0
           Conv2d-43             [-1, 64, 7, 7]          36,864
      BatchNorm2d-44             [-1, 64, 7, 7]             128
             ReLU-45             [-1, 64, 7, 7]               0
           Conv2d-46             [-1, 64, 7, 7]          36,864
      BatchNorm2d-47             [-1, 64, 7, 7]             128
             ReLU-48             [-1, 64, 7, 7]               0
    ResidualBlock-49             [-1, 64, 7, 7]               0
        AvgPool2d-50             [-1, 64, 1, 1]               0
           Linear-51                   [-1, 10]             650
================================================================
Total params: 195,450
Trainable params: 195,450
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.00
Forward/backward pass size (MB): 2.78
Params size (MB): 0.75
Estimated Total Size (MB): 3.52
----------------------------------------------------------------
tensor([[0.0604, 0.0980, 0.1416, 0.0921, 0.1122, 0.0830, 0.1059, 0.1051, 0.0831,
         0.1187]], device='cuda:0', grad_fn=<SoftmaxBackward0>) torch.Size([1, 10])

14. Train ResNet

model5 = ResNet([2, 2, 2]).to(device)
print(model5)
optimizer = torch.optim.SGD(model5.parameters(), lr=learning_rate)

train(model5, num_epochs, optimizer, 'ResNet.pth', device)
Epoch:   1/50, train loss: 2.189773, test loss: 0.063730, test error: 57.93%
Epoch:   2/50, train loss: 1.909820, test loss: 0.055589, test error: 27.88%
Epoch:   3/50, train loss: 1.652832, test loss: 0.047946, test error: 3.05%
Epoch:   4/50, train loss: 1.528145, test loss: 0.047026, test error: 2.33%
Epoch:   5/50, train loss: 1.506562, test loss: 0.046566, test error: 1.52%
...
Epoch:  46/50, train loss: 1.464764, test loss: 0.045982, test error: 0.67%
Epoch:  47/50, train loss: 1.464779, test loss: 0.045961, test error: 0.53%
Epoch:  48/50, train loss: 1.464784, test loss: 0.045953, test error: 0.50%
Epoch:  49/50, train loss: 1.464439, test loss: 0.045962, test error: 0.56%
Epoch:  50/50, train loss: 1.464311, test loss: 0.045947, test error: 0.47%
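
A final observation (a small check, not in the original): because forward() applies softmax before the loss, and nn.CrossEntropyLoss applies log-softmax again internally, the training loss cannot drop much below the value reached when the softmax output is fully confident, i.e. one-hot over the 10 classes:

import math
print(-math.log(math.e / (math.e + 9)))   # ≈ 1.4612, close to the ~1.464 plateau of the train loss above

This does not prevent the network from learning, as the test error of roughly 0.5% shows, but removing the softmax from forward() (or applying it only at inference time) would make the loss values easier to interpret.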