Fine-Grained Classification: Hierarchical Bilinear Pooling (HBP), Part 2


Preface

This post continues https://blog.csdn.net/DaZheng121/article/details/124337239
Following a senior classmate's advice, I improved on the CUDA problems caused by an overly large batch size, and present a way to train a model with a relatively large batch size on a fairly low-end setup.


I. Enlarging the batch size in effect through gradient accumulation

References:
https://zhuanlan.zhihu.com/p/445009191
https://www.cnblogs.com/lart/p/11628696.html

1. Basics

First, make sure you understand loss.backward(), optimizer.step() and optimizer.zero_grad().
optimizer.zero_grad() sets the gradients to zero, i.e. the derivatives of the loss with respect to the weights become 0; loss.backward() computes those derivatives by backpropagation, and optimizer.step() updates the weights using them.
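
To make these three calls concrete, here is a minimal sketch of one ordinary (non-accumulating) update step; the linear model, optimizer settings and dummy data are placeholders for illustration, not part of the original post:

import torch

# Hypothetical toy model and data, only to show the roles of the three calls
model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = torch.nn.CrossEntropyLoss()

x = torch.randn(4, 10)           # a dummy mini-batch of 4 samples
y = torch.randint(0, 2, (4,))    # dummy class labels

optimizer.zero_grad()            # set every parameter's .grad (d loss / d weight) to zero
loss = criterion(model(x), y)    # forward pass and loss
loss.backward()                  # backpropagation: gradients are written into each param.grad
optimizer.step()                 # update the weights using the stored gradients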

2. Additional background

After backward() has run, the computation graph (the intermediate results of the network's forward computation, which backpropagation needs) is released by default. Once one backward pass finishes, the gradients it computed remain stored in the attributes of each layer's parameters while the graph is freed, so the GPU (or CPU) memory becomes available again. You can then run more batches; their gradients are backpropagated in the same way and added into those same per-parameter attributes, and by only updating the weights afterwards you obtain gradient accumulation.
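
A tiny self-contained example (illustrative only, unrelated to the HBP code) showing that successive backward passes add their gradients into the same .grad attribute until it is explicitly zeroed:

import torch

w = torch.tensor(1.0, requires_grad=True)

(2 * w).backward()   # d(2w)/dw = 2
print(w.grad)        # tensor(2.)

(3 * w).backward()   # the new gradient is added to the old one, not overwritten
print(w.grad)        # tensor(5.)

w.grad.zero_()       # this is what optimizer.zero_grad() does for every parameter
print(w.grad)        # tensor(0.)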

3. Implementing gradient accumulation

for i,(images,target) in enumerate(train_loader):
    # 1. input output
    images = images.cuda(non_blocking=True)
    target = torch.from_numpy(np.array(target)).float().cuda(non_blocking=True)
    outputs = model(images)
    loss = criterion(outputs,target)

    # 2.1 loss normalization (divide by the number of accumulation steps)
    loss = loss/accumulation_steps
    # 2.2 back propagation
    loss.backward()

    # 3. update parameters of net
    if((i+1)%accumulation_steps)==0:
        # optimizer the net
        optimizer.step()        # update parameters of net
        optimizer.zero_grad()   # reset gradient

1. Get the loss: feed the images and labels through the network, obtain the predictions, and compute the loss;
2. loss.backward(): backpropagate and compute the current gradients;
3. Repeat steps 1-2 without clearing the gradients, so they accumulate on top of the gradients already stored;
4. After accumulating for a set number of steps, call optimizer.step() to update the network parameters from the accumulated gradients, then optimizer.zero_grad() to clear the old gradients and prepare for the next round of accumulation.

In short: gradient accumulation fetches one batch at a time and computes its gradients once, does not clear them but keeps accumulating, and after a set number of steps updates the network parameters from the accumulated gradients, clears them, and starts the next cycle.

Up to a point, a larger batch size gives better training results, and gradient accumulation enlarges the batch size in effect: with accumulation_steps = 8, the batch size is 'virtually' 8 times larger. When using this, note that the learning rate should also be scaled up appropriately.
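
As a rough illustration (the per-step batch size and step count match the fc-mode settings used later; the linear learning-rate scaling is a common heuristic and an assumption here, not something the code below applies):

per_step_batch_size = 6        # what fits in GPU memory per forward/backward pass
accumulation_steps = 8

effective_batch_size = per_step_batch_size * accumulation_steps
print(effective_batch_size)    # 48

# Common heuristic (assumption): scale the learning rate with the accumulation factor
base_lr = 0.1
scaled_lr = base_lr * accumulation_steps   # 0.8 with these numbers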

II. PyTorch implementation

1. Train2.py

Apart from using gradient accumulation to enlarge the batch size in effect, the only difference from HBP (Part 1) is that patience and end_patient were reduced.

The code is as follows:

import torch
import torch.nn as nn
import torch.optim
import torch.utils.data
import torchvision
import os
import NetModel
import CUB200

# base_lr = 0.1
# batch_size = 24
num_epochs = 200
weight_decay = 1e-8
num_classes = 200
cub200_path = 'E:/DataSets/CUB_200_2011/'
save_model_path = 'model_saved'

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

fc = 1
ft = 2


def train(mode, Model, model_path, base_lr, batch_size, step_num):
    # load the network.
    model = Model
    model = model.to(device)
    param_to_optim = []
    if mode == fc:
        # Load the fc parameter.
        for param in model.parameters():
            if not param.requires_grad:
                continue
            param_to_optim.append(param)
        optimizer = torch.optim.SGD(param_to_optim, lr=base_lr, momentum=0.9, weight_decay=weight_decay)
    elif mode == ft:
        # Load the saved model.
        model.load_state_dict(torch.load(os.path.join(save_model_path,
                                                      model_path),
                                         map_location=lambda storage, loc: storage))
        # Load all parameters.
        # param_to_optim = model.parameters()
        optimizer = torch.optim.SGD(model.parameters(), lr=base_lr, momentum=0.9, weight_decay=weight_decay)
        # for param in model.parameters():
        #     param_to_optim.append(param)
    # optimizer = torch.optim.SGD(model.parameters(), lr=base_lr, momentum=0.9, weight_decay=weight_decay)
    criterion = nn.CrossEntropyLoss()

    # Reduce the learning rate by a factor of 10 when the monitored accuracy has not improved for `patience` epochs
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='max', factor=0.1, patience=2, verbose=True)

    # Calculate the mean and variance of each channel of sample data,
    # run it only once, and record the corresponding value
    # get_statistic()

    # Mean and variance of CUB_200 dataset are [0.4856, 0.4994, 0.4324], [0.1817, 0.1811, 0.1927]

    # Set up the data preprocessing process
    train_transform = torchvision.transforms.Compose([torchvision.transforms.Resize(448),
                                                      torchvision.transforms.CenterCrop(448),
                                                      torchvision.transforms.RandomHorizontalFlip(),
                                                      torchvision.transforms.ToTensor(),
                                                      torchvision.transforms.Normalize([0.4856, 0.4994, 0.4324],
                                                                                       [0.1817, 0.1811, 0.1927])])
    test_transform = torchvision.transforms.Compose([torchvision.transforms.Resize(448),
                                                     torchvision.transforms.CenterCrop(448),
                                                     torchvision.transforms.ToTensor(),
                                                     torchvision.transforms.Normalize([0.4856, 0.4994, 0.4324],
                                                                                      [0.1817, 0.1811, 0.1927])])

    train_data = CUB200.CUB200(cub200_path, train=True, transform=train_transform)
    test_data = CUB200.CUB200(cub200_path, train=False, transform=test_transform)

    train_loader = torch.utils.data.DataLoader(dataset=train_data, batch_size=batch_size, shuffle=True)
    test_loader = torch.utils.data.DataLoader(dataset=test_data, batch_size=batch_size, shuffle=False)

    print('Start training ...')
    best_acc = 0.
    best_epoch = 0
    end_patient = 0
    training_accuracy = []
    testing_accuracy = []
    epochs = []
    size = len(train_loader.dataset)
    for epoch in range(num_epochs):
        correct = 0
        total = 0
        epoch_loss = 0.
        for i, (images, labels) in enumerate(train_loader):
            images = images.to(device)
            labels = labels.to(device)

            outputs = model(images)
            loss = criterion(outputs, labels)
            loss = loss/step_num
            loss.backward()
            if (i+1) % step_num == 0:
                optimizer.step()
                optimizer.zero_grad()

            epoch_loss += loss.item()   # accumulate as a Python float so GPU tensors are not retained
            _, prediction = torch.max(outputs.data, 1)
            correct += (prediction == labels).sum().item()
            total += labels.size(0)
            if (i+1) % 480 == 0:
                print('Epoch %d: Iter %d/%d, Loss %g' % (epoch + 1, (i+1) * batch_size, size, loss))
        train_acc = 100 * correct / total
        print('Testing on test dataset...')
        test_acc = test_accuracy(model, test_loader)
        print('Epoch [{}/{}] Loss: {:.4f} Train_Acc: {:.4f}  Test1_Acc: {:.4f}'
              .format(epoch + 1, num_epochs, epoch_loss, train_acc, test_acc))
        scheduler.step(test_acc)
        training_accuracy.append(train_acc)
        testing_accuracy.append(test_acc)
        epochs.append(epoch)
        if test_acc > best_acc:
            if mode == fc:
                model_file = os.path.join(save_model_path, 'CUB_200_train_fc_epoch_%d_acc_%g.pth' %
                                          (best_epoch, best_acc))
                if os.path.isfile(model_file):
                    os.remove(model_file)
                end_patient = 0
                best_acc = test_acc
                best_epoch = epoch + 1
                print('The accuracy is improved, save model')
                torch.save(model.state_dict(), os.path.join(save_model_path,
                                                            'CUB_200_train_fc_epoch_%d_acc_%g.pth' %
                                                            (best_epoch, best_acc)))
            elif mode == ft:
                model_file = os.path.join(save_model_path, 'CUB_200_train_ft_epoch_%d_acc_%g.pth' %
                                          (best_epoch, best_acc))
                if os.path.isfile(model_file):
                    os.remove(model_file)
                end_patient = 0
                best_acc = test_acc
                best_epoch = epoch + 1
                print('The accuracy is improved, save model')
                torch.save(model.state_dict(), os.path.join(save_model_path,
                                                            'CUB_200_train_ft_epoch_%d_acc_%g.pth' %
                                                            (best_epoch, best_acc)))
        else:
            end_patient += 1
            print('Impatient: ', end_patient)

        # If the accuracy does not improve for 8 consecutive epochs, stop training
        if end_patient >= 8:
            break
    print('After the training, the end of the epoch %d, the accuracy %g is the highest' % (best_epoch, best_acc))
    print('epochs:', epochs)
    print('training accuracy:', training_accuracy)
    print('testing accuracy:', testing_accuracy)


def test_accuracy(model, test_loader):
    model.eval()
    with torch.no_grad():
        correct = 0
        total = 0
        for images, labels in test_loader:
            images = images.to(device)
            labels = labels.to(device)

            outputs = model(images)

            _, prediction = torch.max(outputs.data, 1)
            correct += (prediction == labels).sum().item()
            total += labels.size(0)
        model.train()
        return 100 * correct / total

2. main.py

Because of my machine's configuration, the largest batch size I can train with is 6 in fc mode and 1 in ft mode, so when enlarging the batch size in effect, the corresponding step_num and batch size have to be changed together (the effective batch sizes are worked out after the listing below).

The code is as follows:

import Train
import Train2
import NetModel

step_num1 = 8
step_num2 = 16
model = NetModel.HBP(pretrained=False)
model_path = 'CUB_200_train_fc_epoch_42_acc_79.6859.pth'
base_lr = 0.1
batch_size = 24

fc = 1
fc_base_lr = 0.1
fc_batch_size = int(6*step_num1/step_num1)     # max=6
ft = 2
ft_base_lr = 0.001
ft_batch_size = int(step_num2/step_num2)  # max=1

mode = ft
if mode == fc:
    model = NetModel.HBP(pretrained=True)
    base_lr = fc_base_lr
    batch_size = fc_batch_size
    Train2.train(mode=mode, Model=model, model_path=model_path, base_lr=base_lr,
                 batch_size=batch_size, step_num=step_num1)
elif mode == ft:
    base_lr = ft_base_lr
    batch_size = ft_batch_size
    Train2.train(mode=mode, Model=model, model_path=model_path, base_lr=base_lr,
                 batch_size=batch_size, step_num=step_num2)
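
With these settings, fc mode uses a per-step batch size of 6 with step_num1 = 8, i.e. an effective batch size of 6 × 8 = 48, while ft mode uses a per-step batch size of 1 with step_num2 = 16, i.e. an effective batch size of 16.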

III. Analysis of results

1. Fine-tuning the model trained by Train.py

See https://blog.csdn.net/DaZheng121/article/details/124337239 for Train.py.

The initial learning rate was set to 0.0001.
With gradient accumulation, the effective batch size is 8.

The fine-tuning results are clearly not encouraging. This may be because accuracy on the training set had already reached 100%, or because the initial learning rate was too small to escape a local optimum. Since training takes so long, I decided to simply raise the (effective) batch size to improve the model's generalization.

2. Training results with Train2.py


Compared with the run in section 1, generalization improves and overall accuracy is 2.4% higher.


Summary

This post presents a way to train a model with a large batch size on a device with limited GPU memory; it is also a remedy when CUDA out of memory errors would otherwise force the batch size down.
