A Quick Experiment with Transfer Learning for Image Classification (VGG, ResNet)

Preface

Transfer learning means taking a model that already solves one problem and reusing it on a different but related problem. For example, a model trained to recognize cars can be reused to improve the recognition of trucks. In many cases transfer learning simplifies or reduces the difficulty of model building, and it can still achieve good accuracy.
This article demonstrates transfer learning with PyTorch on a small image dataset: how to use pretrained models, and how their results compare with a convolutional neural network built from scratch.

This article is based on the original post linked here. The data comes from that post; the actual code has been modified.

The dataset

The experiment uses an emergency-vehicle dataset. Since VGG16 expects inputs of shape (3, 224, 224), i.e. 224x224 color images, the images will be resized to that shape later. "Emergency vehicles" here include police cars, fire engines, and ambulances. The dataset includes an emergency_train.csv file that stores the labels of the training samples.

Dataset download: Baidu Cloud link, extraction code: quia

Environment

OS: Ubuntu 16.04

PyTorch

GPU: GTX 1660 SUPER, 6 GB

Choosing a pretrained model

A pretrained model is one that some person or team has already designed and trained to solve a particular problem. Pretrained models are very useful in deep learning projects, because not everyone has enough compute. When working on a local machine, a pretrained model saves a lot of time. A pretrained model shares its learned parameters by passing its weight and bias matrices to the new model. So before doing transfer learning, I first need to pick a suitable pretrained model and transfer its weights and biases to the new model.

Many pretrained models may be available for a given deep learning task, so the question is which one fits this task best. Given the dataset described above, I choose VGG16 pretrained on ImageNet rather than on MNIST: our dataset contains vehicle images, and ImageNet is rich in vehicle images, so the former is clearly more appropriate. In short, the main criterion for choosing a pretrained model is not parameter count or raw benchmark performance, but how related the tasks are and how similar the datasets are.

Importing dependencies

# data analysis tools
import pandas as pd
import numpy as np
from tqdm import tqdm
from sklearn.model_selection import train_test_split

# image reading and display tools
from skimage import io
from skimage.transform import resize
import matplotlib.pyplot as plt
from torchvision import transforms

# model building tools
import torch
from torch.utils.data import DataLoader
import torch.nn as nn
import torch.nn.functional as F
from torch.optim import Adam
from sklearn import metrics

# transfer learning tools
from torchvision import models

Data processing

Data and label file

Read the emergency_train.csv file containing the image names and their labels, and inspect its contents:

def read_info():
    df = pd.read_csv("data/emergency/emergency_train.csv")
    return df

df = read_info()

The csv file contains two columns:

  • image_names: the file name of each image in the dataset
  • emergency_or_not: whether the image belongs to the emergency class; 0 means a non-emergency vehicle, 1 means an emergency vehicle
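Before training, it is worth checking how balanced the two classes are. A minimal sketch, using a tiny synthetic stand-in DataFrame (the real emergency_train.csv has 1646 rows):

```python
import pandas as pd

# tiny synthetic stand-in for emergency_train.csv
df_demo = pd.DataFrame({
    "image_names": ["1.jpg", "2.jpg", "3.jpg", "4.jpg"],
    "emergency_or_not": [0, 1, 0, 0],
})

# class distribution: handy for spotting imbalance before training
counts = df_demo["emergency_or_not"].value_counts()
print(int(counts[0]), int(counts[1]))  # 3 1
```

On the real file, the same `value_counts()` call tells you whether a stratified split (used below) is worthwhile.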

Loading the images

# load the training images
def read_images(df):
    images = []
    for img_name in tqdm(df.image_names.values):
        # defining the image path
        img_path = 'data/emergency/images/' + img_name
        # reading the image
        img = io.imread(img_path)
        # appending the image into the list
        images.append(img)
    return images

images = read_images(df)
print("Total images number: ", len(images))

As the output shows, the dataset contains 1646 images in total.

Total images number: 1646

Splitting the dataset

We split the dataset into a training set and a validation set, holding out 10% for validation.

def data_split(images, df):
    x = images
    y = df.emergency_or_not.values
    train_x, val_x, train_y, val_y = train_test_split(x, 
                                                      y, 
                                                      test_size=0.1, 
                                                      random_state= 13,
                                                      stratify=y)
    return train_x, val_x, train_y, val_y

train_x, val_x, train_y, val_y = data_split(images, df)
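The `stratify=y` argument keeps the class ratio identical in both splits, which matters on a small, possibly imbalanced dataset. A minimal sketch with synthetic labels (90 negatives, 10 positives):

```python
from sklearn.model_selection import train_test_split

# synthetic, imbalanced labels: 90 negatives, 10 positives
y_demo = [0] * 90 + [1] * 10
x_demo = list(range(100))

train_x_demo, val_x_demo, train_y_demo, val_y_demo = train_test_split(
    x_demo, y_demo, test_size=0.1, random_state=13, stratify=y_demo)

# stratification preserves the 9:1 ratio exactly in both splits
print(sum(train_y_demo), len(train_y_demo))  # 9 90
print(sum(val_y_demo), len(val_y_demo))      # 1 10
```

Without `stratify`, a random 10% split could easily end up with too few (or zero) positives in the validation set.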

Transforms and format conversion

We apply resizing, random cropping, and other transforms to the image data. Since the VGG and ResNet pretrained models used later expect inputs of size 3x224x224, we convert the data to that size.

# image transforms
data_transforms = {'train': transforms.Compose([
                                transforms.ToTensor(),  # skimage returns images as np.array, so convert to tensor first
                                transforms.RandomResizedCrop(224), # random crop to 224 x 224
                                transforms.RandomHorizontalFlip(),
                                transforms.Normalize([0.485, 0.456, 0.406],
                                                     [0.229, 0.224, 0.225]) # per-channel normalization, matching the VGG pretraining statistics
                            ]),
                   'val': transforms.Compose([
                                transforms.ToTensor(),
                                transforms.Resize(226),
                                transforms.CenterCrop(224),  # crop the central 224x224 region
                                transforms.Normalize([0.485, 0.456, 0.406],
                                                     [0.229, 0.224, 0.225])
                                             ])
                  }

def data_convert(train_x, train_y, val_x, val_y, data_transforms):
    # defining train dataset, test dataset
    train_dataset, val_dataset = [], []
    # obtain train dataset
    for idx, data in enumerate(train_x):
        data = data_transforms['train'](data)
        label = torch.tensor(train_y[idx], dtype=torch.long)
        train_dataset.append((data, label))
    # obtain val dataset
    for idx, data in enumerate(val_x):
        data = data_transforms['val'](data)
        label = torch.tensor(val_y[idx], dtype=torch.long)
        val_dataset.append((data, label))
    # return list-style datasets
    return train_dataset, val_dataset

train_dataset, val_dataset = data_convert(train_x, train_y, val_x, val_y, data_transforms)
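One caveat of `data_convert`: it applies the random train transforms once, up front, so every epoch sees exactly the same crops and flips. Wrapping the raw arrays in a `torch.utils.data.Dataset` applies the transform lazily in `__getitem__`, so the augmentation is re-sampled every epoch. A minimal sketch (the `EmergencyDataset` name is introduced here for illustration):

```python
import torch
from torch.utils.data import Dataset

class EmergencyDataset(Dataset):
    """Applies the transform lazily in __getitem__, so random augmentations
    are re-sampled every epoch instead of being fixed once up front."""
    def __init__(self, images, labels, transform):
        self.images = images      # list of (H, W, C) arrays
        self.labels = labels
        self.transform = transform

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        data = self.transform(self.images[idx])
        label = torch.tensor(self.labels[idx], dtype=torch.long)
        return data, label

# usage sketch with an identity transform, just for illustration
ds = EmergencyDataset([[[0, 0, 0]]], [1], lambda x: x)
print(len(ds))  # 1
```

In the real pipeline this would be `EmergencyDataset(train_x, train_y, data_transforms['train'])`, passed directly to `DataLoader`.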

Pitfall

skimage reads images in (H, W, C) order. transforms.ToTensor() automatically converts them to (C, H, W). In addition, if the data read by skimage has dtype np.uint8, ToTensor() also scales the values to [0, 1] by dividing by 255.
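Both effects can be sketched without torchvision, using plain Python lists as a stand-in for a uint8 image array:

```python
# a 2x2 "RGB image" as nested lists in (H, W, C) order, with uint8-style values
img = [[[255, 0, 0], [0, 255, 0]],
       [[0, 0, 255], [128, 128, 128]]]
H, W, C = 2, 2, 3

# what transforms.ToTensor() does for uint8 input:
# 1) rearrange (H, W, C) -> (C, H, W)
# 2) scale values from [0, 255] to [0.0, 1.0] by dividing by 255
chw = [[[img[h][w][c] / 255 for w in range(W)] for h in range(H)] for c in range(C)]

print(chw[0][0][0])  # red channel, top-left pixel -> 1.0
print(chw[2][1][0])  # blue channel, bottom-left pixel -> 1.0
```

Note that ToTensor() only rescales integer input; images already stored as floats are passed through unchanged, which is why the dtype returned by skimage matters.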

A custom CNN baseline

We define our own convolutional neural network and train it on this dataset. Its results serve as the baseline against which the transfer-learning models are compared.

The model is defined as follows:

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        
        self.cnn_layers = nn.Sequential(
            # defining a 2D convolution layer
            nn.Conv2d(in_channels=3, out_channels=4, kernel_size=3, stride=1, padding=1),
            nn.BatchNorm2d(4),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2, stride=2),
            # define another 2D convolution layer
            nn.Conv2d(in_channels=4, out_channels=8, kernel_size=3, stride=1, padding=1),
            nn.BatchNorm2d(8),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2, stride=2),
            )
        
        self.linear_layer = nn.Sequential(
            nn.Flatten(),
            nn.Linear(8*56*56, 2)
            )
        
    def forward(self, x):
        x = self.cnn_layers(x)
        x = self.linear_layer(x)
        return x
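The `8*56*56` input size of the linear layer can be sanity-checked with the standard output-size formulas: each 3x3 convolution with stride 1 and padding 1 preserves the spatial size, and each 2x2 max-pool with stride 2 halves it.

```python
def conv_out(size, kernel=3, stride=1, padding=1):
    # standard convolution output-size formula
    return (size + 2 * padding - kernel) // stride + 1

def pool_out(size, kernel=2, stride=2):
    return (size - kernel) // stride + 1

size = 224
for _ in range(2):          # two conv + max-pool stages
    size = conv_out(size)   # 3x3 conv, stride 1, padding 1: size unchanged
    size = pool_out(size)   # 2x2 max-pool, stride 2: size halved

print(size)                 # 56
print(8 * size * size)      # 25088 = in_features of the linear layer
```

So a 224x224 input reaches the flatten step as an 8x56x56 feature map, matching `nn.Linear(8*56*56, 2)`.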

The training code:

def train(epochs=50, batch_size=32):
    # obtain dataloader
    train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
    val_loader = DataLoader(val_dataset, batch_size=batch_size, shuffle=True)
    
    # defining the model
    model = Net()
    
    # defining the optimizer
    optimizer = Adam(model.parameters(), lr=0.0001)
    
    # defining the loss function
    criterion = nn.CrossEntropyLoss()
    
    # checking if GPU is available
    if torch.cuda.is_available():
        model = model.to("cuda")
        criterion = criterion.to("cuda")
        
    # training
    for epoch in range(1, epochs + 1):
        model.train()
        training_loss = []
        real_labels, pred_labels = [], []
        for data, label in train_loader:
            real_labels += label.detach().tolist()
            data = data.to('cuda')
            label = label.to('cuda')
            pred = model(data)
            loss = criterion(pred, label)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            # record batch loss
            training_loss.append(loss.item())
            # for evaluation
            out = F.softmax(pred, dim=1).cpu().detach().numpy()
            pred_label = np.argmax(out, 1)
            pred_labels += list(pred_label)
            
        # evaluation for train    
        train_acc = metrics.accuracy_score(real_labels, pred_labels)
        train_loss = np.mean(training_loss)
        
        # evaluation for val
        val_loss = []
        real_labels, pred_labels = [], []
        model.eval()
        with torch.no_grad():
            for data, label in val_loader:
                real_labels += label.detach().tolist()
                data = data.to('cuda')
                label = label.to('cuda')
                pred = model(data)
                loss = criterion(pred, label)
                val_loss.append(loss.item())
                out = F.softmax(pred, dim=1).cpu().detach().numpy()
                pred_label = np.argmax(out, 1)
                pred_labels += list(pred_label)
        val_acc = metrics.accuracy_score(real_labels, pred_labels)
        val_loss = np.mean(val_loss)
        
        # print log
        print("Epoch {:3d} | training loss {:5f} | training acc {:5f} | val loss {:5f} | val acc {:5f}".format(
            epoch, train_loss, train_acc, val_loss, val_acc))
    
    return model

model = train()

The last training epochs:

Epoch 48 | training loss 0.072396 | training acc 0.997299 | val loss 0.833294 | val acc 0.678788
Epoch 49 | training loss 0.069730 | training acc 0.998650 | val loss 0.634375 | val acc 0.666667
Epoch 50 | training loss 0.069672 | training acc 0.997974 | val loss 0.728107 | val acc 0.666667

Transfer learning with VGG

Model construction:

# loading the pretrained model
model = models.vgg16_bn(pretrained=True)

# Freeze model weights
for param in model.parameters():
    param.requires_grad = False

# checking if GPU is available
if torch.cuda.is_available():
    model = model.cuda()

# replace the last layer with a new layer
model.classifier[6] = nn.Sequential(
                        nn.Linear(4096, 2)
                        )

# make sure the new last layer is trainable
for param in model.classifier[6].parameters():
    param.requires_grad = True
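The freeze-then-replace pattern can be verified by counting trainable parameters. A minimal sketch using a small stand-in network instead of the real VGG16, to keep it light:

```python
import torch.nn as nn

# a small stand-in for a pretrained backbone plus classifier head
model_demo = nn.Sequential(
    nn.Linear(10, 20),   # pretend these are pretrained layers
    nn.ReLU(),
    nn.Linear(20, 5),    # original head
)

# freeze everything, as done for VGG above
for param in model_demo.parameters():
    param.requires_grad = False

# replace the head; newly created modules default to requires_grad=True
model_demo[2] = nn.Linear(20, 2)

trainable = sum(p.numel() for p in model_demo.parameters() if p.requires_grad)
total = sum(p.numel() for p in model_demo.parameters())
print(trainable, total)  # 42 262  (only the new head: 20*2 + 2 = 42)
```

Because new modules already have `requires_grad=True`, the explicit re-enable loop above is mainly for clarity; the count confirms that only the replacement head will receive gradients.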

The training code:

def train(model, epochs=50, batch_size=32):
    # obtain dataloader
    train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
    val_loader = DataLoader(val_dataset, batch_size=batch_size, shuffle=True)
        
    # defining the optimizer
    optimizer = Adam(model.parameters(), lr=0.0001)
    
    # defining the loss function
    criterion = nn.CrossEntropyLoss()
    
    # checking if GPU is available
    if torch.cuda.is_available():
        model = model.to("cuda")
        criterion = criterion.to("cuda")
        
    # training
    for epoch in range(1, epochs + 1):
        model.train()  # switch back to train mode (model.eval() is called at the end of each epoch)
        training_loss = []
        real_labels, pred_labels = [], []
        for data, label in train_loader:
            real_labels += label.detach().tolist()
            data = data.to('cuda')
            label = label.to('cuda')
            pred = model(data)
            loss = criterion(pred, label)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            # record batch loss
            training_loss.append(loss.item())
            # for evaluation
            out = F.softmax(pred, dim=1).cpu().detach().numpy()
            pred_label = np.argmax(out, 1)
            pred_labels += list(pred_label)
            
        # evaluation for train    
        train_acc = metrics.accuracy_score(real_labels, pred_labels)
        train_loss = np.mean(training_loss)
        
        # evaluation for val
        val_loss = []
        real_labels, pred_labels = [], []
        model.eval()
        with torch.no_grad():
            for data, label in val_loader:
                real_labels += label.detach().tolist()
                data = data.to('cuda')
                label = label.to('cuda')
                pred = model(data)
                loss = criterion(pred, label)
                val_loss.append(loss.item())
                out = F.softmax(pred, dim=1).cpu().detach().numpy()
                pred_label = np.argmax(out, 1)
                pred_labels += list(pred_label)
        val_acc = metrics.accuracy_score(real_labels, pred_labels)
        val_loss = np.mean(val_loss)
        
        # print log
        print("Epoch {:3d} | training loss {:5f} | training acc {:5f} | val loss {:5f} | val acc {:5f}".format(
            epoch, train_loss, train_acc, val_loss, val_acc))
    
    return model

model = train(model=model)

The last training epochs:

Epoch 48 | training loss 0.136659 | training acc 0.958136 | val loss 0.160370 | val acc 0.951515
Epoch 49 | training loss 0.134608 | training acc 0.958136 | val loss 0.233730 | val acc 0.951515
Epoch 50 | training loss 0.135072 | training acc 0.954760 | val loss 0.159174 | val acc 0.951515

Transfer learning with ResNet

Model construction:

model = models.resnet18(pretrained=True)

# Freeze model weights
for param in model.parameters():
    param.requires_grad = False

# checking if GPU is available
if torch.cuda.is_available():
    model = model.cuda()

# replace the last layer with a new layer
model.fc = nn.Sequential(
                        nn.Linear(model.fc.in_features, 2)  # in_features is 512 for resnet18
                        )

# make sure the new last layer is trainable
for param in model.fc.parameters():
    param.requires_grad = True

The training code:

def train(model, epochs=50, batch_size=32):
    # obtain dataloader
    train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
    val_loader = DataLoader(val_dataset, batch_size=batch_size, shuffle=True)
        
    # defining the optimizer
    optimizer = Adam(model.parameters(), lr=0.0001)
    
    # defining the loss function
    criterion = nn.CrossEntropyLoss()
    
    # checking if GPU is available
    if torch.cuda.is_available():
        model = model.to("cuda")
        criterion = criterion.to("cuda")
        
    # training
    for epoch in range(1, epochs + 1):
        model.train()
        training_loss = []
        real_labels, pred_labels = [], []
        for data, label in train_loader:
            real_labels += label.detach().tolist()
            data = data.to('cuda')
            label = label.to('cuda')
            pred = model(data)
            loss = criterion(pred, label)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            # record batch loss
            training_loss.append(loss.item())
            # for evaluation
            out = F.softmax(pred, dim=1).cpu().detach().numpy()
            pred_label = np.argmax(out, 1)
            pred_labels += list(pred_label)
            
        # evaluation for train    
        train_acc = metrics.accuracy_score(real_labels, pred_labels)
        train_loss = np.mean(training_loss)
        
        # evaluation for val
        val_loss = []
        real_labels, pred_labels = [], []
        model.eval()
        with torch.no_grad():
            for data, label in val_loader:
                real_labels += label.detach().tolist()
                data = data.to('cuda')
                label = label.to('cuda')
                pred = model(data)
                loss = criterion(pred, label)
                val_loss.append(loss.item())
                out = F.softmax(pred, dim=1).cpu().detach().numpy()
                pred_label = np.argmax(out, 1)
                pred_labels += list(pred_label)
        val_acc = metrics.accuracy_score(real_labels, pred_labels)
        val_loss = np.mean(val_loss)
        
        # print log
        print("Epoch {:3d} | training loss {:5f} | training acc {:5f} | val loss {:5f} | val acc {:5f}".format(
            epoch, train_loss, train_acc, val_loss, val_acc))
    
    return model

model = train(model=model)

The last training epochs:

Epoch 48 | training loss 0.218535 | training acc 0.916948 | val loss 0.210073 | val acc 0.921212
Epoch 49 | training loss 0.216172 | training acc 0.916948 | val loss 0.217664 | val acc 0.927273
Epoch 50 | training loss 0.215871 | training acc 0.914247 | val loss 0.210817 | val acc 0.921212

Summary

In this case, transfer learning clearly improves model performance. The custom CNN baseline overfits badly (training accuracy near 1.0 but validation accuracy around 0.67), while with only the final layer trained the frozen VGG16 backbone reaches about 0.95 validation accuracy and ResNet18 about 0.92.
