PyTorch Framework in Practice (2): Optimizing the Image Classifier

Continued from: PyTorch Framework in Practice (1): A CNN Image Classifier

This post optimizes the image classifier from the PyTorch tutorial. (Training runs on CPU only, so no GPU-specific code is included.)

1. The CNN (convolutional neural network) is made deeper: the convolutional layers extract features layer by layer, aggregating small local features into larger ones, and a final fully connected layer applies an affine transformation to produce the class scores. The model architecture:

Model(
  (block): Sequential(
    (0): Conv2d(3, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): ReLU()
    (2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (4): ReLU()
    (5): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (6): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (7): ReLU()
    (8): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (9): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (10): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (11): ReLU()
    (12): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (classer): Sequential(
    (0): Dropout(p=0.5, inplace=False)
    (1): Linear(in_features=512, out_features=10, bias=True)
  )
)
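The `in_features=512` of the final linear layer follows directly from the conv stack: every `Conv2d` uses `kernel_size=3, padding=1, stride=1` and so preserves height and width, while each of the four `MaxPool2d(2)` layers halves them, leaving a 128-channel 2×2 feature map. A quick sanity check of that arithmetic (plain Python, no PyTorch needed):

```python
# Trace the spatial size of a 32x32 CIFAR-10 image through the four blocks.
# Each 3x3 conv with padding=1 keeps the size; each MaxPool2d(2) halves it.
size = 32                     # CIFAR-10 images are 32x32
channels = [32, 64, 64, 128]  # output channels of the four conv layers
for c in channels:
    size //= 2                # effect of the MaxPool2d after each conv
    print(f'{c} channels, {size}x{size}')

flatten = channels[-1] * size * size
print('flatten size:', flatten)   # 128 * 2 * 2 = 512
```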

2. The training loop is improved: more epochs are run, and after every fixed number of batches the model is evaluated on the test set; training stops automatically once the test loss stops improving (early stopping).

3. A test evaluation report is added, including overall accuracy, a confusion matrix, and per-class precision, recall, and F1 score.

CNN.py

import torch.nn as nn


class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.block = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2),

            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2),

            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2),

            nn.Conv2d(64, 128, kernel_size=3, padding=1),
            nn.BatchNorm2d(128, affine=True),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2),
        )

        self.classer = nn.Sequential(
            nn.Dropout(0.5),
            nn.Linear(in_features=512, out_features=10),
        )

    def forward(self, x):
        x = self.block(x)           # [batch, 128, 2, 2]
        x = x.view(x.size(0), -1)   # [batch, 512]
        x = self.classer(x)         # [batch, 10]
        return x


if __name__ == '__main__':
    import torch

    # Variable is deprecated since PyTorch 0.4; plain tensors carry autograd info
    x = torch.rand(1, 3, 32, 32)
    model = Model()
    print(model)
    y = model(x)
    print(y)

train.py

# -*- coding: utf-8 -*-
from torchvision import datasets, transforms
from torch.utils.data import DataLoader
import torch.optim as optim
import torch.nn as nn
from CNN import Model
import torch
import numpy as np
from sklearn import metrics


def train(save_path, model, trainloader, testloader):
    # training mode
    model.train()
    # loss function and optimizer, learning rate 0.001
    criterion = nn.CrossEntropyLoss()
    optimizer = optim.Adam(model.parameters(), lr=0.001)

    total_batch = 0  # number of batches processed so far
    dev_best_loss = float('inf')  # best validation loss seen so far
    last_improve = 0  # batch index of the last validation-loss improvement
    flag = False  # whether training has stalled for too long
    # mini-batch training
    for epoch in range(20):
        print('Epoch [{}/{}]'.format(epoch + 1, 20))
        # iterate over the training loader in mini-batches
        for trains, labels in trainloader:
            outputs = model(trains)
            # zero the gradients
            model.zero_grad()
            # backpropagate the loss
            loss = criterion(outputs, labels)
            loss.backward()
            # optimizer step
            optimizer.step()
            # every 100 batches, check performance on the test set
            if total_batch % 100 == 0:
                # training accuracy on the current batch
                true = labels.data
                predic = torch.max(outputs.data, 1)[1]
                train_acc = metrics.accuracy_score(true, predic)
                # save the model whenever the validation loss improves
                dev_acc, dev_loss = evaluate(model, testloader)
                if dev_loss < dev_best_loss:
                    dev_best_loss = dev_loss
                    torch.save(model.state_dict(), save_path)
                    improve = '*'
                    last_improve = total_batch
                else:
                    improve = ''
                # progress report
                msg = 'Iter: {0:>6},  Train Loss: {1:>5.2},  Train Acc: {2:>6.2%},  Val Loss: {3:>5.2},  Val Acc: {4:>6.2%}  {5}'
                print(msg.format(total_batch, loss.item(), train_acc, dev_loss, dev_acc, improve))
                # back to training mode
                model.train()
            total_batch += 1
            # stop if the validation loss has not improved for 500 batches
            if total_batch - last_improve > 500:
                print("Finished Training...")
                flag = True
                break
        if flag:
            break
    # final evaluation on the test set
    model_test(save_path, model, testloader)

# validate the model
def evaluate(model, dataloader, test=False):
    class_list = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
    # evaluation mode
    model.eval()
    loss_total = 0
    predict_all = np.array([], dtype=int)
    labels_all = np.array([], dtype=int)
    loss_func = nn.CrossEntropyLoss()
    # no gradient tracking
    with torch.no_grad():
        for images, labels in dataloader:
            outputs = model(images)
            loss = loss_func(outputs, labels)
            loss_total += loss.item()  # accumulate as a Python float
            labels = labels.data.numpy()
            predic = torch.max(outputs.data, 1)[1].numpy()
            labels_all = np.append(labels_all, labels)
            predict_all = np.append(predict_all, predic)
    # validation accuracy
    acc = metrics.accuracy_score(labels_all, predict_all)
    # on the final test run, also return the classification report and confusion matrix
    if test:
        report = metrics.classification_report(labels_all, predict_all, target_names=class_list, digits=4)
        confusion = metrics.confusion_matrix(labels_all, predict_all)
        return acc, loss_total / len(dataloader), report, confusion
    else:
        return acc, loss_total / len(dataloader)

# test the model
def model_test(save_path, model, testloader):
    # load the best saved parameters
    model.load_state_dict(torch.load(save_path))
    # evaluation mode
    model.eval()
    # evaluate on the test set
    test_acc, test_loss, test_report, test_confusion = evaluate(model, testloader, test=True)
    msg = 'Test Loss: {0:>5.2},  Test Acc: {1:>6.2%}'
    print(msg.format(test_loss, test_acc))
    print("Confusion matrix...")
    print(test_confusion)
    print("Classification report...")
    print(test_report)


if __name__ == '__main__':
    data_path = './data'    # dataset directory
    save_path = './cnn_model.pth'   # model checkpoint path

    # convert the dataset to tensors and normalize:
    # input[channel] = (input[channel] - mean[channel]) / std[channel]
    transform = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
    ])
    # download the dataset
    trainset = datasets.CIFAR10(root=data_path, train=True, download=True, transform=transform)
    testset = datasets.CIFAR10(root=data_path, train=False, download=True, transform=transform)

    # dataset sizes
    print('trainset', len(trainset))
    print('testset', len(testset))

    batch_size = 100    # mini-batch size
    # build the data loaders (no need to shuffle the test set)
    trainloader = DataLoader(dataset=trainset, batch_size=batch_size, shuffle=True)
    testloader = DataLoader(dataset=testset, batch_size=batch_size, shuffle=False)

    # inspect the shape of one batch from the loader
    for sample, label in trainloader:
        print(sample.size(), label.size())
        break

    # instantiate and train the model
    model = Model()
    train(save_path, model, trainloader, testloader)
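A side note on the `Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))` transform used above: `ToTensor` scales pixel values into [0, 1], and subtracting mean 0.5 then dividing by std 0.5 maps them to [-1, 1]. A minimal check of that arithmetic:

```python
def normalize(x, mean=0.5, std=0.5):
    # the per-channel formula applied by transforms.Normalize
    return (x - mean) / std

# pixel values after ToTensor lie in [0, 1]
print(normalize(0.0))  # -1.0
print(normalize(0.5))  #  0.0
print(normalize(1.0))  #  1.0
```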

The output is as follows:

Files already downloaded and verified
Files already downloaded and verified
trainset 50000
testset 10000
torch.Size([100, 3, 32, 32]) torch.Size([100])
Epoch [1/20]
Iter:      0,  Train Loss:   2.5,  Train Acc: 14.00%,  Val Loss:   2.3,  Val Acc: 13.20%  *
Iter:    100,  Train Loss:   1.5,  Train Acc: 42.00%,  Val Loss:   1.5,  Val Acc: 44.82%  *
Iter:    200,  Train Loss:   1.4,  Train Acc: 47.00%,  Val Loss:   1.3,  Val Acc: 52.96%  *
Iter:    300,  Train Loss:   1.2,  Train Acc: 57.00%,  Val Loss:   1.3,  Val Acc: 53.33%  
Iter:    400,  Train Loss:   1.1,  Train Acc: 61.00%,  Val Loss:   1.1,  Val Acc: 60.16%  *
Epoch [2/20]
Iter:    500,  Train Loss:  0.95,  Train Acc: 60.00%,  Val Loss:   1.0,  Val Acc: 63.07%  *
Iter:    600,  Train Loss:   1.0,  Train Acc: 63.00%,  Val Loss:   1.0,  Val Acc: 62.91%  
Iter:    700,  Train Loss:  0.77,  Train Acc: 69.00%,  Val Loss:   1.0,  Val Acc: 64.62%  *
Iter:    800,  Train Loss:  0.94,  Train Acc: 63.00%,  Val Loss:   1.1,  Val Acc: 62.55%  
Iter:    900,  Train Loss:  0.93,  Train Acc: 63.00%,  Val Loss:  0.96,  Val Acc: 65.69%  *
Epoch [3/20]
Iter:   1000,  Train Loss:  0.83,  Train Acc: 68.00%,  Val Loss:  0.97,  Val Acc: 65.84%  
Iter:   1100,  Train Loss:  0.77,  Train Acc: 71.00%,  Val Loss:  0.94,  Val Acc: 67.48%  *
Iter:   1200,  Train Loss:   1.0,  Train Acc: 65.00%,  Val Loss:  0.86,  Val Acc: 69.97%  *
Iter:   1300,  Train Loss:  0.77,  Train Acc: 74.00%,  Val Loss:  0.86,  Val Acc: 70.56%  
Iter:   1400,  Train Loss:  0.91,  Train Acc: 69.00%,  Val Loss:  0.87,  Val Acc: 69.61%  
Epoch [4/20]
Iter:   1500,  Train Loss:  0.56,  Train Acc: 79.00%,  Val Loss:  0.85,  Val Acc: 70.40%  *
Iter:   1600,  Train Loss:  0.62,  Train Acc: 74.00%,  Val Loss:  0.76,  Val Acc: 73.41%  *
Iter:   1700,  Train Loss:   0.7,  Train Acc: 78.00%,  Val Loss:  0.78,  Val Acc: 73.04%  
Iter:   1800,  Train Loss:  0.75,  Train Acc: 80.00%,  Val Loss:  0.78,  Val Acc: 72.59%  
Iter:   1900,  Train Loss:  0.89,  Train Acc: 63.00%,  Val Loss:  0.77,  Val Acc: 72.92%  
Epoch [5/20]
Iter:   2000,  Train Loss:  0.58,  Train Acc: 81.00%,  Val Loss:  0.75,  Val Acc: 73.93%  *
Iter:   2100,  Train Loss:   0.7,  Train Acc: 75.00%,  Val Loss:  0.77,  Val Acc: 73.61%  
Iter:   2200,  Train Loss:  0.85,  Train Acc: 66.00%,  Val Loss:   0.8,  Val Acc: 72.10%  
Iter:   2300,  Train Loss:  0.67,  Train Acc: 78.00%,  Val Loss:  0.74,  Val Acc: 74.53%  *
Iter:   2400,  Train Loss:  0.86,  Train Acc: 76.00%,  Val Loss:  0.75,  Val Acc: 73.82%  
Epoch [6/20]
Iter:   2500,  Train Loss:  0.78,  Train Acc: 72.00%,  Val Loss:   0.8,  Val Acc: 72.36%  
Iter:   2600,  Train Loss:  0.65,  Train Acc: 76.00%,  Val Loss:  0.75,  Val Acc: 74.33%  
Iter:   2700,  Train Loss:  0.66,  Train Acc: 81.00%,  Val Loss:  0.74,  Val Acc: 74.53%  
Iter:   2800,  Train Loss:  0.66,  Train Acc: 75.00%,  Val Loss:  0.72,  Val Acc: 75.29%  *
Iter:   2900,  Train Loss:  0.75,  Train Acc: 74.00%,  Val Loss:  0.87,  Val Acc: 70.47%  
Epoch [7/20]
Iter:   3000,  Train Loss:   0.6,  Train Acc: 77.00%,  Val Loss:  0.69,  Val Acc: 75.98%  *
Iter:   3100,  Train Loss:  0.57,  Train Acc: 83.00%,  Val Loss:   0.7,  Val Acc: 76.36%  
Iter:   3200,  Train Loss:  0.54,  Train Acc: 78.00%,  Val Loss:  0.72,  Val Acc: 75.94%  
Iter:   3300,  Train Loss:  0.55,  Train Acc: 81.00%,  Val Loss:  0.67,  Val Acc: 77.06%  *
Iter:   3400,  Train Loss:  0.58,  Train Acc: 76.00%,  Val Loss:   0.7,  Val Acc: 75.97%  
Epoch [8/20]
Iter:   3500,  Train Loss:  0.47,  Train Acc: 85.00%,  Val Loss:  0.69,  Val Acc: 76.08%  
Iter:   3600,  Train Loss:  0.69,  Train Acc: 78.00%,  Val Loss:  0.74,  Val Acc: 74.58%  
Iter:   3700,  Train Loss:  0.68,  Train Acc: 83.00%,  Val Loss:  0.73,  Val Acc: 75.18%  
Iter:   3800,  Train Loss:  0.88,  Train Acc: 70.00%,  Val Loss:  0.69,  Val Acc: 76.80%  
Finished Training...
Test Loss:  0.67,  Test Acc: 77.06%
Confusion matrix...
[[794  31  43  10   9   5   8  15  49  36]
 [ 11 918   2   4   3   3   1   0  10  48]
 [ 65   6 706  39  71  46  35  18   5   9]
 [ 21  10  83 533  60 175  60  32  11  15]
 [ 21   2  81  38 747  34  19  51   6   1]
 [ 13   4  54 103  43 717  18  38   2   8]
 [  5   5  77  47  38  24 790   6   6   2]
 [  7   3  41  22  50  54   2 809   2  10]
 [ 56  41  14  12   5   1   2   2 837  30]
 [ 18  85   6   6   4   3   2   8  13 855]]
Classification report...
              precision    recall  f1-score   support

       plane     0.7854    0.7940    0.7897      1000
         car     0.8308    0.9180    0.8722      1000
        bird     0.6378    0.7060    0.6701      1000
         cat     0.6548    0.5330    0.5877      1000
        deer     0.7252    0.7470    0.7360      1000
         dog     0.6751    0.7170    0.6954      1000
        frog     0.8431    0.7900    0.8157      1000
       horse     0.8264    0.8090    0.8176      1000
        ship     0.8895    0.8370    0.8624      1000
       truck     0.8432    0.8550    0.8491      1000

   micro avg     0.7706    0.7706    0.7706     10000
   macro avg     0.7711    0.7706    0.7696     10000
weighted avg     0.7711    0.7706    0.7696     10000

As the results show, accuracy improved from 55% to 77%, so the optimizations paid off clearly.

Learning-rate decay could also be applied; it should help the model converge faster.
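One way to do this in PyTorch is `torch.optim.lr_scheduler.StepLR`, which multiplies the learning rate by `gamma` every `step_size` epochs (you call `scheduler.step()` once per epoch). A sketch of the resulting schedule, with the step size and decay factor chosen purely for illustration:

```python
def step_lr(base_lr, epoch, step_size=5, gamma=0.5):
    # learning rate after `epoch` epochs under a StepLR-style schedule:
    # lr = base_lr * gamma ** (epoch // step_size)
    return base_lr * gamma ** (epoch // step_size)

# starting from lr=0.001, halve the learning rate every 5 epochs
for epoch in [0, 4, 5, 10, 15]:
    print(epoch, step_lr(0.001, epoch))
```

In the training loop above this would amount to creating `scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.5)` after the optimizer and calling `scheduler.step()` at the end of each epoch.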

Compared with the accuracy typically reached in real image-classification projects, this result is still fairly low. The main reason is the dataset itself: at 32×32 resolution, many fine-grained features simply cannot be extracted.
