PyTorch CIFAR-10 Classification (Multi-Model Comparison)

Update 2020/3/10: added GhostNet, a lightweight network from Huawei's Noah's Ark Lab published at CVPR 2020:
GhostNet: More Features from Cheap Operations


I had previously worked through the main classic classification papers along this route: AlexNet, VGG, GoogLeNet v1, ResNet, DenseNet. The central problem was that very deep networks were hard to train; only with Batch Normalization and ResNet did training deep networks become practical. Since accuracy rises with depth once training works, the later networks clearly outperform the earlier ones. Moreover, the Bottleneck structure keeps the parameter count in check, which reduces the risk of overfitting while keeping the computational cost from exploding.
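To make the Bottleneck point concrete, here is a minimal sketch of my own (not code from the project discussed below): the 1x1 convolutions first reduce and then restore the channel count, so the expensive 3x3 convolution only ever sees the reduced number of channels.

import torch.nn as nn

class Bottleneck(nn.Module):
    """ResNet-style bottleneck: 1x1 reduce -> 3x3 -> 1x1 expand.
    The 3x3 conv runs on only `reduced` channels, which keeps
    parameters and FLOPs under control even in deep networks."""
    def __init__(self, channels, reduced):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(channels, reduced, kernel_size=1, bias=False),
            nn.BatchNorm2d(reduced),
            nn.ReLU(inplace=True),
            nn.Conv2d(reduced, reduced, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(reduced),
            nn.ReLU(inplace=True),
            nn.Conv2d(reduced, channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.block(x) + x)  # residual (identity) shortcut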
Having finished the papers, I wanted some hands-on practice. Reproducing ImageNet results would be difficult, so I tried CIFAR-10 instead and found a GitHub project to run, which also serves as a template for classification code. The MNIST example in pytorch examples would work too, but MNIST is too simple, so I went with the following:

The code quality is decent. It builds a number of classic networks for CIFAR-10 classification, mainly by changing a few parameters of the official PyTorch models (the originals target 224x224 ImageNet inputs, whereas here the inputs are 3x32x32).
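As an illustration of the kind of change involved (my own sketch using torchvision's resnet18, not the project's actual code): the ImageNet stem downsamples aggressively, which would shrink a 32x32 image to almost nothing, so it is commonly replaced with a 3x3 stride-1 convolution and the max-pool removed.

import torch.nn as nn
from torchvision.models import resnet18

net = resnet18(num_classes=10)  # ImageNet-style ResNet-18 with a 10-way head
# Replace the 7x7 stride-2 stem conv with a 3x3 stride-1 conv and drop the
# stride-2 max-pool, so 32x32 CIFAR-10 inputs are not over-downsampled.
net.conv1 = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1, bias=False)
net.maxpool = nn.Identity()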

A little supplement

I changed only a few small things: the learning rate now decays automatically instead of being tuned manually (a very minor change), and I use torchsummary to check each model's parameter count (see the snippet below). The project itself is tiny, just one main file, yet it already has 1.7k stars on GitHub.
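For the parameter counts, the torchsummary usage is just the following (net here stands for whichever model was built):

import torch
from torchsummary import summary

device = 'cuda' if torch.cuda.is_available() else 'cpu'
# Prints per-layer output shapes plus Total params, Trainable params,
# Params size (MB), and Estimated Total Size (MB) for a 3x32x32 input.
summary(net.to(device), input_size=(3, 32, 32), device=device)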

P.S. For VGG16, ResNet18, and ResNet50 I initially set 400 epochs, but that took far too long and the late-stage gains were small, so all later runs use 300 epochs; the final accuracy may suffer slightly as a result.

Main code

This section gives the main training script. The other models can be built by referring to the source code in torchvision.models. That package has no PreActResNet or DPN, but PreActResNet is built almost exactly like ResNet, only as a "pre-activation" variant (see the sketch just below), so the ResNet code is a good starting point; I have not read the DPN paper in detail, but a pass over its architecture section, together with the ResNet code, should make it straightforward to build as well.
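For reference, here is a sketch of the pre-activation basic block, close in spirit to what the project builds (the details are my own simplification): BN and ReLU are moved before each convolution instead of after, as in "Identity Mappings in Deep Residual Networks".

import torch.nn as nn
import torch.nn.functional as F

class PreActBlock(nn.Module):
    """Pre-activation version of the ResNet basic block:
    BN -> ReLU -> conv, rather than conv -> BN -> ReLU."""
    def __init__(self, in_planes, planes, stride=1):
        super().__init__()
        self.bn1 = nn.BatchNorm2d(in_planes)
        self.conv1 = nn.Conv2d(in_planes, planes, 3, stride=stride, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(planes)
        self.conv2 = nn.Conv2d(planes, planes, 3, stride=1, padding=1, bias=False)
        # 1x1 conv on the shortcut only when the shape changes.
        if stride != 1 or in_planes != planes:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_planes, planes, 1, stride=stride, bias=False))

    def forward(self, x):
        out = F.relu(self.bn1(x))
        shortcut = self.shortcut(out) if hasattr(self, 'shortcut') else x
        out = self.conv1(out)
        out = self.conv2(F.relu(self.bn2(out)))
        return out + shortcut

The main training script follows.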

'''Train CIFAR10 with PyTorch.'''
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import torch.backends.cudnn as cudnn

import torchvision
import torchvision.transforms as transforms

import os
import argparse

from models import *
from utils import progress_bar


parser = argparse.ArgumentParser(description='PyTorch CIFAR10 Training')
parser.add_argument('--lr', default=0.1, type=float, help='learning rate')
parser.add_argument('--trainbs', default=128, type=int, help='trainloader batch size')
parser.add_argument('--testbs', default=100, type=int, help='testloader batch size')
parser.add_argument('--resume', '-r', action='store_true', help='resume from checkpoint')
args = parser.parse_args()

device = 'cuda' if torch.cuda.is_available() else 'cpu'
best_acc = 0  # best test accuracy
start_epoch = 0  # start from epoch 0 or last checkpoint epoch

# Data
print('==> Preparing data..')
transform_train = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
])

transform_test = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
])

trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform_train)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=args.trainbs, shuffle=True, num_workers=2)

testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform_test)
testloader = torch.utils.data.DataLoader(testset, batch_size=args.testbs, shuffle=False, num_workers=2)

classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')

# Model
print('==> Building model..')
# net = VGG('VGG16')
# net = ResNet18()
net = PreActResNet18()
# net = GoogLeNet()
# net = DenseNet121()
# net = ResNeXt29_2x64d()
# net = ResNeXt29_32x4d()
# net = MobileNet()
# net = MobileNetV2()
# net = DPN92()
# net = ShuffleNetG2()
# net = SENet18()
# net = ShuffleNetV2(1)
# net = EfficientNetB0()
net_name = net.name
save_path = './checkpoint/{0}_ckpt.pth'.format(net.name)
net = net.to(device)
if device == 'cuda':
    net = torch.nn.DataParallel(net)
    cudnn.benchmark = True

if args.resume:
    # Load best checkpoint trained last time.
    print('==> Resuming from checkpoint..')
    assert os.path.isdir('checkpoint'), 'Error: no checkpoint directory found!'
    checkpoint = torch.load(save_path)
    net.load_state_dict(checkpoint['net'])
    best_acc = checkpoint['acc']
    start_epoch = checkpoint['epoch']

criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=args.lr, momentum=0.9, weight_decay=5e-4)
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=70, gamma=0.1)

# Training
def train(epoch):
    print('\nEpoch: %d' % epoch)
    net.train()
    train_loss = 0
    correct = 0
    total = 0
    for batch_idx, (inputs, targets) in enumerate(trainloader):
        inputs, targets = inputs.to(device), targets.to(device)
        optimizer.zero_grad()
        outputs = net(inputs)
        loss = criterion(outputs, targets)
        loss.backward()
        optimizer.step()

        train_loss += loss.item()
        _, predicted = outputs.max(1)
        total += targets.size(0)
        correct += predicted.eq(targets).sum().item()

        progress_bar(batch_idx, len(trainloader), 'Loss: %.3f | Acc: %.3f%% (%d/%d)'
            % (train_loss/(batch_idx+1), 100.*correct/total, correct, total))

def test(epoch):
    global best_acc
    net.eval()
    test_loss = 0
    correct = 0
    total = 0
    with torch.no_grad():
        for batch_idx, (inputs, targets) in enumerate(testloader):
            inputs, targets = inputs.to(device), targets.to(device)
            outputs = net(inputs)
            loss = criterion(outputs, targets)

            test_loss += loss.item()
            _, predicted = outputs.max(1)
            total += targets.size(0)
            correct += predicted.eq(targets).sum().item()

            progress_bar(batch_idx, len(testloader), 'Loss: %.3f | Acc: %.3f%% (%d/%d)'
                % (test_loss/(batch_idx+1), 100.*correct/total, correct, total))

    # Save checkpoint.
    acc = 100.*correct/total
    if acc > best_acc:
        print('Saving ' + net_name + ' ..')
        state = {
            'net': net.state_dict(),
            'acc': acc,
            'epoch': epoch,
        }
        if not os.path.isdir('checkpoint'):
            os.mkdir('checkpoint')
        torch.save(state, save_path)
        best_acc = acc


for epoch in range(start_epoch, start_epoch+300):
    # In PyTorch 1.1.0 and later,
    # you should call them in the opposite order:
    # `optimizer.step()` before `lr_scheduler.step()`
    train(epoch)
    test(epoch)
    scheduler.step()  # decay the learning rate by 10x every 70 epochs (StepLR, step_size=70)

print("\nTesting best accuracy:", best_acc)

Accuracy

Note: the Total params, Estimated Total Size (MB), Trainable params, and Params size (MB) below all refer to the CIFAR-10-adapted versions of the networks, not the original ImageNet models. The batch size is fixed at 128 throughout; adjust it for your own hardware.

Model | My Acc. | Total params | Estimated Total Size (MB) | Trainable params | Params size (MB) | Saved model size (MB) | GPU memory usage (MB)
--- | --- | --- | --- | --- | --- | --- | ---
GhostNet | 89.60% | 449,062 | 8.49 | 449,062 | 1.71 | 1.8 | 2847
MobileNetV2 | 92.64% | 2,296,922 | 36.14 | 2,296,922 | 8.76 | 8.96 | 3107
VGG16 | 94.27% | 14,728,266 | 62.77 | 14,728,266 | 56.18 | 59.0 | 1229
PreActResNet18 | 94.70% | 11,171,146 | 53.38 | 11,171,146 | 42.61 | 44.7 | 1665
ResNeXt29(2x64d) | 95.09% | 9,128,778 | 99.84 | 9,128,778 | 34.82 | 36.7 | 5779
ResNet50 | 95.22% | 23,520,842 | 155.86 | 23,520,842 | 89.72 | 94.4 | 5723
DPN92 | 95.42% | 34,236,634 | 243.50 | 34,236,634 | 130.60 | 137.5 | 10535
ResNeXt29(32x4d) | 95.49% | 4,774,218 | 83.22 | 4,774,218 | 18.21 | 19.2 | 5817
DenseNet121 | 95.55% | 6,956,298 | 105.05 | 6,956,298 | 26.54 | 28.3 | 8203
ResNet18 | 95.59% | 11,173,962 | 53.89 | 11,173,962 | 42.63 | 44.8 | 1615
ResNet101 | 95.62% | 42,512,970 | 262.31 | 42,512,970 | 162.17 | 170.6 | 8857

Screenshots as evidence:
[Training-result screenshots for GhostNet, MobileNetV2, VGG16, PreActResNet18, ResNeXt29(2x64d), ResNet50, DPN92, ResNeXt29(32x4d), DenseNet121, ResNet18, and ResNet101]

Pre-trained models

Here I provide the saved models mentioned above, in case you want to test or modify them:
Baidu Drive
Google Drive
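A loading sketch (the file name here is illustrative; the checkpoints are dicts with 'net', 'acc', and 'epoch', and the state_dict was saved from the DataParallel-wrapped net):

import torch
from models import *  # model definitions from the project

checkpoint = torch.load('./checkpoint/ResNet18_ckpt.pth', map_location='cpu')
net = ResNet18()
# nn.DataParallel prefixes every key with 'module.'; strip it for a bare model.
state_dict = {k.replace('module.', '', 1): v for k, v in checkpoint['net'].items()}
net.load_state_dict(state_dict)
net.eval()
print('best acc: %.2f%% at epoch %d' % (checkpoint['acc'], checkpoint['epoch']))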
P.S.:
When training PreActResNet18 with the initial learning rate set to 0.1, it failed to converge, so I had to lower the initial learning rate to 0.01.
GhostNet was not designed to be trained directly on CIFAR-10; its parameter count is so small that the performance here is not particularly good. It is better used as a plug-and-play module that replaces the convolutions of other CNNs (see the sketch below). I trained it with lr=0.4 and weight_decay=4e-5.
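For reference, a sketch of the Ghost module following the structure described in the paper (my own simplified version, not the authors' exact code): a regular convolution produces a few "intrinsic" feature maps, and cheap depthwise convolutions generate the remaining "ghost" maps from them.

import math
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    def __init__(self, inp, oup, kernel_size=1, ratio=2, dw_size=3, stride=1):
        super().__init__()
        self.oup = oup
        init_channels = math.ceil(oup / ratio)       # intrinsic feature maps
        new_channels = init_channels * (ratio - 1)   # cheap "ghost" maps
        # Primary convolution: the expensive part, with few output channels.
        self.primary_conv = nn.Sequential(
            nn.Conv2d(inp, init_channels, kernel_size, stride, kernel_size // 2, bias=False),
            nn.BatchNorm2d(init_channels),
            nn.ReLU(inplace=True),
        )
        # Cheap operation: depthwise conv applied to the intrinsic maps.
        self.cheap_operation = nn.Sequential(
            nn.Conv2d(init_channels, new_channels, dw_size, 1, dw_size // 2,
                      groups=init_channels, bias=False),
            nn.BatchNorm2d(new_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        x1 = self.primary_conv(x)
        x2 = self.cheap_operation(x1)
        return torch.cat([x1, x2], dim=1)[:, :self.oup, :, :]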
