Training a Dataset with EfficientNet and Classifying Test Data

Using EfficientNet

EfficientNet, published by Google in 2019, achieves striking speed and accuracy thanks to its distinctive convolutional architecture.
* Pros of EfficientNet and how it compares with ResNet and other networks -----> Zhihu
* Official EfficientNet code on GitHub: EfficientNet-PyTorch
* To download the EfficientNet pre-trained weights, click here (if the GitHub download fails, they are also available on my blog; a third option is to fetch them in code, which I don't recommend, though a sketch is given after the list below)
The pre-trained EfficientNet models used in the example below are the following (pick any one of them):

  • efficientnet-b0-355c32eb.pth
  • efficientnet-b1-f1951068.pth
  • efficientnet-b2-8bb594d6.pth
  • efficientnet-b3-5fb5a3c3.pth
  • efficientnet-b4-6ed6700e.pth
  • efficientnet-b5-b6417697.pth
  • efficientnet-b6-c76e70fd.pth
  • efficientnet-b7-dcc49843.pth
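
If your machine can reach GitHub reliably, the weights can also be fetched in code instead of downloaded by hand (this is the method I advised against above). A minimal sketch, assuming the efficientnet_pytorch package is installed:

from efficientnet_pytorch import EfficientNet

# from_pretrained downloads and caches the matching .pth file on first use
model = EfficientNet.from_pretrained('efficientnet-b6')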

Preparing the Dataset

You can prepare your own image dataset; here an informal public dataset from a domestic competition serves as the example.
* 20-class marine organism dataset, download here (access code: p9qi)

Organizing the Dataset

The script below sorts the images in the data folder into train and test folders according to the information in training.csv and test.csv. The resulting directory layout is:

  • train
    • 0
      • xxxxx.jpg
      • xxxxx.jpg
      • … …
    • 1
    • 2
    • … …
    • 19
  • test
    • xxxxx.jpg
    • xxxxx.jpg
    • … …

Run convert.py before training:

import pandas as pd
import shutil
import os

def convert_dataset(csv_filename, pre_path, root_path):
    # read the CSV and pull out the file IDs and their class labels
    data_file = pd.read_csv(csv_filename)
    id_tuple = tuple(data_file["FileID"].values.tolist())
    classes_tuple = tuple(data_file["SpeciesID"].values.tolist())

    try:
        for i in range(len(id_tuple)):
            # one sub-folder per class, created on demand
            new_path = os.path.join(root_path, str(classes_tuple[i]))
            if not os.path.exists(new_path):
                os.makedirs(new_path)
            shutil.copy(os.path.join(pre_path, id_tuple[i] + ".jpg"),
                        os.path.join(new_path, id_tuple[i] + ".jpg"))
    except Exception as e:
        print("train_convert_match error:", e)


def test_convert(csv_filename, pre_path, new_path):
    data_file = pd.read_csv(csv_filename)
    id_tuple = tuple(data_file["FileID"].values.tolist())
    try:
        if not os.path.exists(new_path):
            os.makedirs(new_path)
        # test images go into a single flat folder (no class sub-folders)
        for i in range(len(id_tuple)):
            shutil.copy(os.path.join(pre_path, id_tuple[i] + ".jpg"),
                        os.path.join(new_path, id_tuple[i] + ".jpg"))
    except Exception as e:
        print("test_convert_match error:", e)
            
pre_path = "/af2020cv-2020-05-09-v5-dev/data"  # folder the raw images start in
train_root_path = "/image/train"  # destination folder for the training images
test_root_path = "/image/test"    # destination folder for the test images used during validation
train_filename = '/af2020cv-2020-05-09-v5-dev/training.csv'   # training CSV to read
test_filename = '/af2020cv-2020-05-09-v5-dev/test.csv'   # test CSV to read

convert_dataset(train_filename, pre_path, train_root_path)
test_convert(test_filename, pre_path, test_root_path)
print("dataset conversion is finished!")

Model Training

Run classify.py to train.
The script covers four tasks: training, saving the model, classifying the test data, and saving the classification results; comments are provided where needed.
Before working through the code below, I suggest first reading this blogger's article to make it easier to digest ---> PyTorch学习之路

from __future__ import print_function, division
import torch
import torch.nn as nn
import torch.optim as optim
from torch.autograd import Variable
from torchvision import datasets, models, transforms
import time
import os
import pandas as pd
from PIL import Image, ImageDraw, ImageFont
from efficientnet_pytorch.model import EfficientNet

use_gpu = torch.cuda.is_available()
os.environ["CUDA_VISIBLE_DEVICES"] = "0"    # restrict the process to GPU device 0 ('/gpu:0')
data_dir = '/image'
batch_size = 64     # batch size (batched by the loaddata function)
lr = 0.01           # learning rate
momentum = 0.9      # momentum
num_epochs = 20     # number of training epochs
input_size = 350    # image size after preprocessing
class_num = 20      # number of classes
net_name = 'efficientnet-b6'    # which EfficientNet pre-trained model to use
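# note: efficientnet-b6 was pre-trained on 528x528 inputs; input_size = 350 here
# presumably trades some accuracy for memory and speed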

def loaddata(data_dir, batch_size, set_name, shuffle):
    # preprocessing: augmentation for training, a deterministic pipeline for test
    data_transforms = {
        'train': transforms.Compose([
            transforms.CenterCrop(input_size),
            transforms.RandomAffine(degrees=0, translate=(0.05, 0.05)),
            transforms.RandomHorizontalFlip(),
            transforms.ToTensor(),
            transforms.Normalize([0.485, 0.456, 0.406],[0.229, 0.224, 0.225])
        ]),
        'test': transforms.Compose([
            transforms.Resize(input_size),
            transforms.CenterCrop(input_size),
            transforms.ToTensor(),
            transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
        ]),
    }

    image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x), data_transforms[x]) for x in [set_name]}
    # num_workers=0 if CPU else =1
    # wrap the dataset in a DataLoader that yields batches for the network
    dataset_loaders = {x: torch.utils.data.DataLoader(image_datasets[x],
                                                      batch_size=batch_size,
                                                      shuffle=shuffle, num_workers=1) for x in [set_name]}
    data_set_sizes = len(image_datasets[set_name])
    return dataset_loaders, data_set_sizes
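
# Optional sanity check (not part of the original script): one batch from
# loaddata should have shape [batch_size, 3, input_size, input_size], e.g.
#   loaders, n = loaddata(data_dir, batch_size, 'train', shuffle=True)
#   images, labels = next(iter(loaders['train']))
#   print(images.shape)   # torch.Size([64, 3, 350, 350]) with the settings above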
    
def train_model(model_ft, criterion, optimizer, lr_scheduler, num_epochs=50):
    train_loss = []
    loss_all = []
    acc_all = []
    since = time.time()
    best_model_wts = model_ft.state_dict()  # holds the best weights and biases seen during training
    best_acc = 0.0
    # put the model in training mode (enables batch-norm updates and dropout)
    model_ft.train(True)
    for epoch in range(num_epochs):
        dset_loaders, dset_sizes = loaddata(data_dir=data_dir, batch_size=batch_size, set_name='train', shuffle=True)
        # print(dset_loaders)
        print('Data Size', dset_sizes)
        print('Epoch {}/{}'.format(epoch, num_epochs - 1))
        print('-' * 10)
        optimizer = lr_scheduler(optimizer, epoch)
        running_loss = 0.0
        running_corrects = 0
        count = 0

        for data in dset_loaders['train']:
            # each batch holds 64 (inputs, labels) pairs; with 1323 images this loop runs ceil(1323/64) = 21 times
            inputs, labels = data
            # cast labels to torch.LongTensor (64-bit int) and squeeze out any size-1 dimensions
            labels = torch.squeeze(labels.type(torch.LongTensor))
            if use_gpu:
                # Variable wraps a tensor together with gradient bookkeeping
                # (a legacy API; plain tensors work in modern PyTorch)
                inputs, labels = Variable(inputs.cuda()), Variable(labels.cuda())
            else:
                inputs, labels = Variable(inputs), Variable(labels)

            outputs = model_ft(inputs)
            loss = criterion(outputs, labels)
            # take the class with the highest score
            _, preds = torch.max(outputs.data, 1)

            #print(_, preds)
            # zero the parameter gradients (clear dLoss/dW from the previous step)
            optimizer.zero_grad()
            # backpropagate the loss
            loss.backward()
            # update the parameters with the gradients computed during backprop
            optimizer.step()

            count += 1
#            if count % 30 == 0 or outputs.size()[0] < batch_size:
#                print('Epoch:{}: loss:{:.3f}'.format(epoch, loss.item()))
#                train_loss.append(loss.item())

            running_loss += loss.item() * inputs.size(0)
            running_corrects += torch.sum(preds == labels.data)

        epoch_loss = running_loss / dset_sizes
        epoch_acc = running_corrects.double() / dset_sizes
        loss_all.append(int(epoch_loss*100))
        acc_all.append(int(epoch_acc*100))
        # print(epoch_loss)

        print('Loss: {:.4f} Acc: {:.4f}'.format(
            epoch_loss, epoch_acc))

        if epoch_acc > best_acc:
            best_acc = epoch_acc
            best_model_wts = model_ft.state_dict()
        if epoch_acc == 1.0:
            break
    time_elapsed = time.time() - since
    print('Training complete in {:.0f}m {:.0f}s'.format(
        time_elapsed // 60, time_elapsed % 60))
    
    # save the best model
    save_dir = data_dir + '/model'
    if not os.path.exists(save_dir):
        os.makedirs(save_dir)
    model_ft.load_state_dict(best_model_wts)
    model_out_path = save_dir + "/" + net_name + '.pth'
    torch.save(best_model_wts, model_out_path)

    return train_loss, best_model_wts, model_ft
    
def exp_lr_scheduler(optimizer, epoch, init_lr=0.01, lr_decay_epoch=10):
    """Decay the learning rate by a factor of 0.8 every lr_decay_epoch epochs."""
    lr = init_lr * (0.8**(epoch // lr_decay_epoch))
    print('LR is set to {}'.format(lr))
    for param_group in optimizer.param_groups:
        param_group['lr'] = lr

    return optimizer
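
# Note: torch.optim.lr_scheduler.StepLR(optimizer, step_size=lr_decay_epoch, gamma=0.8)
# is a built-in equivalent of the hand-rolled decay above.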
# train
pth_map = {
    'efficientnet-b0': 'efficientnet-b0-355c32eb.pth',
    'efficientnet-b1': 'efficientnet-b1-f1951068.pth',
    'efficientnet-b2': 'efficientnet-b2-8bb594d6.pth',
    'efficientnet-b3': 'efficientnet-b3-5fb5a3c3.pth',
    'efficientnet-b4': 'efficientnet-b4-6ed6700e.pth',
    'efficientnet-b5': 'efficientnet-b5-b6417697.pth',
    'efficientnet-b6': 'efficientnet-b6-c76e70fd.pth',
    'efficientnet-b7': 'efficientnet-b7-dcc49843.pth',
}
# load the pre-trained weights offline
model_ft = EfficientNet.from_name(net_name)
net_weight = '<folder containing the EfficientNet pre-trained weights>/' + pth_map[net_name]
state_dict = torch.load(net_weight)
model_ft.load_state_dict(state_dict)

# replace the fully connected layer to match our class count
num_ftrs = model_ft._fc.in_features
model_ft._fc = nn.Linear(num_ftrs, class_num)
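# (optional variant, not in the original script: to fine-tune only the new
# classifier head, freeze the backbone *before* replacing _fc --
#   for p in model_ft.parameters():
#       p.requires_grad = False
# -- and pass only the trainable parameters to the optimizer below)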

criterion = nn.CrossEntropyLoss()   # cross-entropy loss
if use_gpu:
    model_ft = model_ft.cuda()
    criterion = criterion.cuda()
    
optimizer = optim.SGD((model_ft.parameters()), lr=lr,
                      momentum=momentum, weight_decay=0.0002)

train_loss, best_model_wts, model_ft = train_model(model_ft, criterion, optimizer, exp_lr_scheduler, num_epochs=num_epochs)

# test
model_ft.load_state_dict(best_model_wts)

data_transforms = transforms.Compose([
    transforms.Resize(input_size),
    transforms.CenterCrop(input_size),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ])


def get_key(dct, value):
    return [k for (k, v) in dct.items() if v == value]
    
Species_id = []
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

# find the mapping of folder-to-label

data = datasets.ImageFolder('/image/train')
mapping = data.class_to_idx
print(mapping)
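# class_to_idx maps folder name -> label index. ImageFolder sorts folder names
# as strings, so with folders '0'..'19' the mapping is NOT the identity
# (e.g. '10' sorts before '2'), which is why the reverse lookup below is needed.
# Inverting the dict once would also work: idx_to_class = {v: k for k, v in mapping.items()}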
    
# start testing

data_file = pd.read_csv('/af2020cv-2020-05-09-v5-dev/test.csv')
File_id = data_file["FileID"].values.tolist()

for i in range(len(File_id)):
    test_dir = File_id[i] + '.jpg'
    img_dir = '/image/test/'+test_dir
    # load the image and apply the test-time preprocessing
    img = Image.open(img_dir).convert('RGB')  # convert guards against grayscale inputs
    inputs = data_transforms(img)
    inputs.unsqueeze_(0)  # add the batch dimension

    if use_gpu:
        model = model_ft.cuda()  # use GPU
    else:
        model = model_ft
    model.eval()
    if use_gpu:
        inputs = Variable(inputs.cuda())  # use GPU
    else:
        inputs = Variable(inputs)

    # forward
    outputs = model(inputs)
    _, preds = torch.max(outputs.data, 1)
    class_name = get_key(mapping, preds.item())[0]  # reverse-lookup: label index -> folder name
    
    print(img_dir)
    print('prediction_label:', class_name)
    print(30*'--')
    Species_id.append(class_name)

test = pd.DataFrame({'FileID': File_id, 'SpeciesID': Species_id})  # save the predictions to a CSV file
test.to_csv('result.csv', index=None, encoding='utf8')
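
The per-image loop above is easy to follow but slow, since every image gets its own forward pass. A batched alternative sketch (TestSet is a hypothetical helper, not part of the original script; it reuses data_transforms, File_id and model_ft from above):

from torch.utils.data import Dataset, DataLoader

class TestSet(Dataset):
    def __init__(self, file_ids, img_dir, transform):
        self.file_ids, self.img_dir, self.transform = file_ids, img_dir, transform
    def __len__(self):
        return len(self.file_ids)
    def __getitem__(self, idx):
        img = Image.open(os.path.join(self.img_dir, self.file_ids[idx] + '.jpg')).convert('RGB')
        return self.transform(img)

loader = DataLoader(TestSet(File_id, '/image/test', data_transforms), batch_size=32)
model_ft.eval()
all_preds = []
with torch.no_grad():
    for batch in loader:
        if use_gpu:
            batch = batch.cuda()
        all_preds.extend(model_ft(batch).argmax(dim=1).tolist())
# all_preds[i] is the label index for File_id[i]; map back to folder names with get_key as above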

Output

result.csv:

FileID                            SpeciesID
fbc0970b9880747baf4db6f634f15ba0  4
6d798919d67fc50b5e2775c6f3689359  10
ea7e1536f50d813738f06bb56ddd30a8  6
5f607957dcd4eb09c22fc1d99b01f108  18
be96543509ec47e1d193c52fd6e3b868  9
…                                 …

Accuracy


Taking EfficientNet-b6 as an example, after 20 epochs of training the final results were: train accuracy 99.96%, test accuracy 94.68%.

Notes

References
Mingxing Tan, Quoc V. Le. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. ICML, 2019.
Tutorials
PyTorch official tutorial: Transfer Learning for Computer Vision Tutorial
Reference articles
1. EfficientNet 训练测试自己的分类数据集, by 乱觉先森
2. PyTorch学习之路(level1)——训练一个图像分类模型, by AI之路
* Thanks to the bloggers above; their articles gave me a much deeper understanding of image classification with EfficientNet.

With a few values changed, the code above can handle most image-recognition tasks. EfficientNet is both fast and accurate: on a server GPU, one epoch takes roughly half a minute. Just avoid pushing some of the settings (such as the batch size or input size) too high, or memory may run out.
