Week P6: Hollywood Celebrity Recognition

🍺 Requirements:

  1. Save the best model weights during training
  2. Use the official VGG-16 network from torchvision

🍻 Stretch goals (optional):

  1. Reach 60% accuracy on the test set (fairly difficult, but you can learn a lot in the process)
  2. Build the VGG-16 network by hand

🏡 My environment:

  • Language: Python 3.8
  • IDE: Jupyter Lab
  • Deep learning framework: PyTorch
    • torchvision==0.13.1+cu113
    • torch==1.12.1+cu113

I. Preparation

1. Set up the GPU

Use the GPU if one is available on this device; otherwise fall back to the CPU.

import os
import pathlib
import PIL
import random

import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms
from torchvision import datasets, transforms

import warnings
warnings.filterwarnings("ignore")

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
device

Output: (no GPU on this machine, so making do with the CPU for now...)

device(type='cpu')

2. Import the data

Get and inspect the class (label) names in the dataset

data_dir = './好莱坞明星识别/'
# Convert the string path into a pathlib.Path object
data_dir = pathlib.Path(data_dir)
# Use glob() to collect all paths under data_dir into the list data_paths
data_paths = list(data_dir.glob('*'))
# Split each path to extract the class name of each folder and store them in classeNames
classeNames = [str(path).split('\\')[1] for path in data_paths]
# Print classeNames to show the class names
classeNames

Output: the dataset contains the following classes

['Angelina Jolie',
 'Brad Pitt',
 'Denzel Washington',
 'Hugh Jackman',
 'Jennifer Lawrence',
 'Johnny Depp',
 'Kate Winslet',
 'Leonardo DiCaprio',
 'Megan Fox',
 'Natalie Portman',
 'Nicole Kidman',
 'Robert Downey Jr',
 'Sandra Bullock',
 'Scarlett Johansson',
 'Tom Cruise',
 'Tom Hanks',
 'Will Smith']
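
Note that str(path).split('\\')[1] above assumes Windows-style path separators. A more portable sketch using pathlib, which should produce the same list:

# Portable alternative: Path.name works on both Windows and Linux
classeNames = [path.name for path in data_dir.glob('*') if path.is_dir()]
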
# For more on transforms.Compose, see: https://blog.csdn.net/qq_38251616/article/details/124878863
train_transforms = transforms.Compose([
    transforms.Resize([224, 224]),  # resize the input images to a uniform size
    # transforms.RandomHorizontalFlip(), # random horizontal flip
    transforms.ToTensor(),          # convert a PIL Image or numpy.ndarray to a tensor and scale values to [0, 1]
    transforms.Normalize(           # normalize --> shift towards a standard normal (Gaussian) distribution so the model converges more easily
        mean=[0.485, 0.456, 0.406], 
        std=[0.229, 0.224, 0.225])  # these mean/std values are the standard ImageNet statistics, matching the pretrained VGG-16 weights used later
])
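
The mean/std above should normally be kept as-is when using ImageNet-pretrained weights. If you instead wanted to estimate per-channel statistics from this dataset, here is a minimal sketch (it averages per-image channel statistics, which only approximates the true global std; stat_data and stat_dl are hypothetical names):

stat_data = datasets.ImageFolder(data_dir, transform=transforms.Compose([
    transforms.Resize([224, 224]),
    transforms.ToTensor()]))
stat_dl   = torch.utils.data.DataLoader(stat_data, batch_size=64)

mean, std, n_images = torch.zeros(3), torch.zeros(3), 0
for X, _ in stat_dl:
    X = X.view(X.size(0), 3, -1)          # [N, 3, H*W]
    mean     += X.mean(dim=2).sum(dim=0)  # sum of per-image channel means
    std      += X.std(dim=2).sum(dim=0)   # sum of per-image channel stds
    n_images += X.size(0)
print(mean / n_images, std / n_images)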

total_data = datasets.ImageFolder(data_dir, transform=train_transforms)
total_data

Output:

Dataset ImageFolder
    Number of datapoints: 1800
    Root location: 好莱坞明星识别
    StandardTransform
Transform: Compose(
               Resize(size=[224, 224], interpolation=bilinear, max_size=None, antialias=None)
               ToTensor()
               Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
           )
total_data.class_to_idx

Output:

{'Angelina Jolie': 0,
 'Brad Pitt': 1,
 'Denzel Washington': 2,
 'Hugh Jackman': 3,
 'Jennifer Lawrence': 4,
 'Johnny Depp': 5,
 'Kate Winslet': 6,
 'Leonardo DiCaprio': 7,
 'Megan Fox': 8,
 'Natalie Portman': 9,
 'Nicole Kidman': 10,
 'Robert Downey Jr': 11,
 'Sandra Bullock': 12,
 'Scarlett Johansson': 13,
 'Tom Cruise': 14,
 'Tom Hanks': 15,
 'Will Smith': 16}

3. Data visualization

import matplotlib.pyplot as plt
from PIL import Image

image_folder = './好莱坞明星识别/Angelina Jolie/'
image_files = [f for f in os.listdir(image_folder) if f.endswith((".jpg", ".png", ".jpeg"))]
fig, axes = plt.subplots(3, 8, figsize=(16, 6))
for ax, image_file in zip(axes.flat, image_files):
    img_path = os.path.join(image_folder, image_file)
    img = Image.open(img_path)
    ax.imshow(img)
    ax.axis('off')
plt.tight_layout()
plt.show()

 

4. Split the dataset

Split the data into training and test sets at an 8:2 ratio

train_size = int(0.8 * len(total_data))
test_size = len(total_data) - train_size
train_dataset, test_dataset = torch.utils.data.random_split(total_data, [train_size, test_size])
print('Training set size', len(train_dataset))
print('Test set size', len(test_dataset))
train_dataset, test_dataset

Output:

Training set size 1440
Test set size 360

(<torch.utils.data.dataset.Subset at 0x1b55ac27fd0>,
 <torch.utils.data.dataset.Subset at 0x1b55f43bb50>)

5. Load the training and test data

batch_size = 32

train_dl = torch.utils.data.DataLoader(train_dataset,
                                           batch_size=batch_size,
                                           shuffle=True,
                                           num_workers=1)
test_dl = torch.utils.data.DataLoader(test_dataset,
                                          batch_size=batch_size,
                                          shuffle=True,
                                          num_workers=1)
for X, y in test_dl:
    print("Shape of X [N, C, H, W]: ", X.shape)
    print("Shape of y: ", y.shape, y.dtype)
    break

Output:

Shape of X:  torch.Size([32, 3, 224, 224])
Shape of y:  torch.Size([32]) torch.int64

II. Using the official VGG-16 model

VGG-16 structure overview:

  • 13 convolutional layers, denoted blockX_convX;
  • 3 fully connected layers, grouped under classifier;
  • 5 pooling layers.

VGG-16 has 16 layers with learnable weights (13 convolutional + 3 fully connected), hence the name VGG-16.
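
For the optional goal of building VGG-16 by hand, here is a minimal sketch following the 13-conv / 5-pool / 3-FC layout described above (an illustrative re-implementation, not torchvision's code; the cfg list and class name are my own choices):

import torch
import torch.nn as nn

class VGG16(nn.Module):
    def __init__(self, num_classes=1000):
        super().__init__()
        # 'M' marks a 2x2 max-pool; numbers are the output channels of 3x3 convs
        cfg = [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M',
               512, 512, 512, 'M', 512, 512, 512, 'M']
        layers, in_ch = [], 3
        for v in cfg:
            if v == 'M':
                layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
            else:
                layers += [nn.Conv2d(in_ch, v, kernel_size=3, padding=1),
                           nn.ReLU(inplace=True)]
                in_ch = v
        self.features = nn.Sequential(*layers)      # the 13 conv + 5 pool layers
        self.classifier = nn.Sequential(            # the 3 fully connected layers
            nn.Linear(512 * 7 * 7, 4096), nn.ReLU(inplace=True), nn.Dropout(0.5),
            nn.Linear(4096, 4096),        nn.ReLU(inplace=True), nn.Dropout(0.5),
            nn.Linear(4096, num_classes),
        )

    def forward(self, x):
        x = self.features(x)        # [N, 512, 7, 7] for a 224x224 input
        x = torch.flatten(x, 1)
        return self.classifier(x)

# Example: model = VGG16(num_classes=len(classeNames)).to(device)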

 

VGG (2014) key innovations

Stack multiple 3×3 convolution kernels in place of a single larger kernel: the stack keeps the same receptive field while needing fewer parameters.

  • Two stacked 3×3 kernels can replace one 5×5 kernel
  • Three stacked 3×3 kernels can replace one 7×7 kernel

  By default in VGG, conv layers use stride 1 and padding 1; maxpool layers use size 2 and stride 2.

Parameters of one 7×7 kernel

  • 7×7×C = 49C (C is the kernel depth)

Parameters of three 3×3 kernels

  • 3×3×C + 3×3×C + 3×3×C = 27C (C is the kernel depth); three 3×3 kernels need far fewer parameters than one 7×7 kernel, as checked in the snippet below.
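
A quick way to check the savings in PyTorch; the channel count 64 below is an arbitrary example, and with C input and C output channels the totals become 49·C² versus 27·C², the same 49:27 ratio as above:

import torch.nn as nn

C = 64  # arbitrary example channel count
conv7x7 = nn.Conv2d(C, C, kernel_size=7, padding=3, bias=False)
conv3x3_stack = nn.Sequential(
    nn.Conv2d(C, C, kernel_size=3, padding=1, bias=False),
    nn.Conv2d(C, C, kernel_size=3, padding=1, bias=False),
    nn.Conv2d(C, C, kernel_size=3, padding=1, bias=False),
)
count_params = lambda m: sum(p.numel() for p in m.parameters())
print(count_params(conv7x7))        # 200704 = 49 * 64 * 64
print(count_params(conv3x3_stack))  # 110592 = 27 * 64 * 64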

CNN receptive field: the size of the input-layer region that a single element of a layer's output corresponds to, i.e. how large a region of the input one unit of the output feature map "sees".

Receptive field calculation formula:
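
A commonly used recursive form, computed backwards from the network output (F(i) is the receptive field of layer i, Ksize(i) and Stride(i) are the kernel size and stride of layer i, and F = 1 at the output):

F(i) = (F(i+1) - 1) × Stride(i) + Ksize(i)

For three stacked 3×3, stride-1 convolutions this gives 1 → 3 → 5 → 7, the same receptive field as a single 7×7 convolution.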

from torchvision.models import vgg16

device = "cuda" if torch.cuda.is_available() else "cpu"
print("Using {} device".format(device))
    
# Load the pretrained model and fine-tune it
model = vgg16(pretrained = True).to(device) # load the pretrained vgg16 model

for param in model.parameters():
    param.requires_grad = False # freeze the model's parameters so that only the final layer is trained

# Replace layer 6 of the classifier module (i.e. (6): Linear(in_features=4096, out_features=1000, bias=True))
# Check against the model summary printed below
model.classifier._modules['6'] = nn.Linear(4096,len(classeNames)) # replace the last fully connected layer so it outputs the number of target classes
model.to(device)  

        pretrained = True: when pretrained=True is passed to vgg16(), it loads VGG16 weights pretrained on ImageNet from the torchvision model zoo, so the model can be used for image classification directly without being trained from scratch. Note that if you want to fine-tune the pretrained model, you can put it in training mode (e.g. model.train()) and continue training it to adapt it to the specific task.
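
For example, a minimal fine-tuning sketch: the assumption here is that we unfreeze the last convolutional block (model.features[24:] in torchvision's VGG-16, i.e. the conv5 block) together with the classifier, instead of training only the final Linear layer; optimizer_ft is a hypothetical name:

for param in model.parameters():
    param.requires_grad = False                  # freeze everything first
for param in model.features[24:].parameters():
    param.requires_grad = True                   # unfreeze the last conv block (conv5)
for param in model.classifier.parameters():
    param.requires_grad = True                   # unfreeze the fully connected head

# Only pass the trainable parameters to the optimizer
optimizer_ft = torch.optim.SGD((p for p in model.parameters() if p.requires_grad), lr=1e-4)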

Inspect the VGG16 network structure

from torchinfo import summary
summary(model)

Output: VGG16 has a large number of parameters, so it is relatively slow to train

=================================================================
Layer (type:depth-idx)                   Param #
=================================================================
VGG                                      --
├─Sequential: 1-1                        --
│    └─Conv2d: 2-1                       (1,792)
│    └─ReLU: 2-2                         --
│    └─Conv2d: 2-3                       (36,928)
│    └─ReLU: 2-4                         --
│    └─MaxPool2d: 2-5                    --
│    └─Conv2d: 2-6                       (73,856)
│    └─ReLU: 2-7                         --
│    └─Conv2d: 2-8                       (147,584)
│    └─ReLU: 2-9                         --
│    └─MaxPool2d: 2-10                   --
│    └─Conv2d: 2-11                      (295,168)
│    └─ReLU: 2-12                        --
│    └─Conv2d: 2-13                      (590,080)
│    └─ReLU: 2-14                        --
│    └─Conv2d: 2-15                      (590,080)
│    └─ReLU: 2-16                        --
│    └─MaxPool2d: 2-17                   --
│    └─Conv2d: 2-18                      (1,180,160)
│    └─ReLU: 2-19                        --
│    └─Conv2d: 2-20                      (2,359,808)
│    └─ReLU: 2-21                        --
│    └─Conv2d: 2-22                      (2,359,808)
│    └─ReLU: 2-23                        --
│    └─MaxPool2d: 2-24                   --
│    └─Conv2d: 2-25                      (2,359,808)
│    └─ReLU: 2-26                        --
│    └─Conv2d: 2-27                      (2,359,808)
│    └─ReLU: 2-28                        --
│    └─Conv2d: 2-29                      (2,359,808)
│    └─ReLU: 2-30                        --
│    └─MaxPool2d: 2-31                   --
├─AdaptiveAvgPool2d: 1-2                 --
├─Sequential: 1-3                        --
│    └─Linear: 2-32                      (102,764,544)
│    └─ReLU: 2-33                        --
│    └─Dropout: 2-34                     --
│    └─Linear: 2-35                      (16,781,312)
│    └─ReLU: 2-36                        --
│    └─Dropout: 2-37                     --
│    └─Linear: 2-38                      69,649
=================================================================
Total params: 134,330,193
Trainable params: 69,649
Non-trainable params: 134,260,544
=================================================================

III. Training the model

1. Write the training function

# Training loop
def train(dataloader, model, loss_fn, optimizer):
    size = len(dataloader.dataset)  # size of the training set
    num_batches = len(dataloader)   # number of batches (size/batch_size, rounded up)

    train_loss, train_acc = 0, 0  # initialize training loss and accuracy
    
    for X, y in dataloader:  # fetch the images and their labels
        X, y = X.to(device), y.to(device)
        
        # Compute the prediction error
        pred = model(X)          # network output
        loss = loss_fn(pred, y)  # the gap between the network output and the ground-truth labels is the loss
        
        # Backpropagation
        optimizer.zero_grad()  # zero out the gradients
        loss.backward()        # backward pass
        optimizer.step()       # update the parameters
        
        # Accumulate accuracy and loss
        train_acc  += (pred.argmax(1) == y).type(torch.float).sum().item()
        train_loss += loss.item()
            
    train_acc  /= size
    train_loss /= num_batches

    return train_acc, train_loss

2. Write the test function

The test function is largely the same as the training function, but since gradient descent is not performed to update the network weights, there is no need to pass in an optimizer.

def test(dataloader, model, loss_fn):
    size        = len(dataloader.dataset)  # size of the test set
    num_batches = len(dataloader)          # number of batches (size/batch_size, rounded up)
    test_loss, test_acc = 0, 0
    
    # When not training, disable gradient tracking to save memory and computation
    with torch.no_grad():
        for imgs, target in dataloader:
            imgs, target = imgs.to(device), target.to(device)
            
            # Compute the loss
            target_pred = model(imgs)
            loss        = loss_fn(target_pred, target)
            
            test_loss += loss.item()
            test_acc  += (target_pred.argmax(1) == target).type(torch.float).sum().item()

    test_acc  /= size
    test_loss /= num_batches

    return test_acc, test_loss

3. Set up a dynamic learning rate

learn_rate = 1e-4
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=learn_rate)

Use the official dynamic learning rate API

# Decay the learning rate to 0.98 of its previous value every 2 epochs
lambda1 = lambda epoch: 0.98 ** (epoch // 2)
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lambda1) # choose the scheduling method

👉 Example of calling the official API (illustrative pattern; dataset, loss_fn, etc. are placeholders):

model = [Parameter(torch.randn(2, 2, requires_grad=True))]
optimizer = SGD(model, 0.1)
scheduler = ExponentialLR(optimizer, gamma=0.9)

for epoch in range(20):
    for input, target in dataset:
        optimizer.zero_grad()
        output = model(input)
        loss = loss_fn(output, target)
        loss.backward()
        optimizer.step()
    scheduler.step()

For more official dynamic learning rate options, see: torch.optim — PyTorch 2.1 documentation

Official APIs

1. torch.optim.lr_scheduler.StepLR

Adjusts the learning rate at equal intervals: every step_size epochs the learning rate is decayed by the factor gamma.

Function prototype:

torch.optim.lr_scheduler.StepLR(optimizer, step_size, gamma=0.1, last_epoch=-1)

Key parameters

  • optimizer (Optimizer): the optimizer instance defined earlier
  • step_size (int): the decay period; the learning rate is decayed once every step_size epochs
  • gamma (float): multiplicative factor of learning rate decay. Default: 0.1

Usage example:

optimizer = torch.optim.SGD(net.parameters(), lr=0.001)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.1)
2. lr_scheduler.LambdaLR

Updates the learning rate according to a user-defined function.

Function prototype

torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda, last_epoch=-1, verbose=False)

Key parameters

  • optimizer (Optimizer): the optimizer instance defined earlier
  • lr_lambda (function): the function that updates the learning rate

Usage example:

lambda1 = lambda epoch: 0.92 ** (epoch // 2) # learning rate adjustment rule
optimizer = torch.optim.SGD(model.parameters(), lr=learn_rate)
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lambda1) # choose the scheduling method

3. lr_scheduler.MultiStepLR

Adjusts the learning rate at specified epochs.

Function prototype:

torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones, gamma=0.1, last_epoch=-1, verbose=False)

Key parameters:

  • optimizer (Optimizer): the optimizer instance defined earlier
  • milestones (list): a list of epoch indices at which the learning rate changes; must be in increasing order
  • gamma (float): multiplicative factor of learning rate decay. Default: 0.1

Usage example:

optimizer = torch.optim.SGD(net.parameters(), lr=0.001)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, 
                                                 milestones=[2,6,15], # epochs at which to adjust the learning rate
                                                 gamma=0.1)

4. Train the model

import copy

epoches = 40
train_acc = []
train_loss = []
test_acc = []
test_loss = []

best_acc = 0 # track the best accuracy so far, used to pick out the best model

for epoch in range(epoches):
    model.train()
    epoch_train_acc, epoch_train_loss = train(train_dl, model, loss_fn, optimizer)
    scheduler.step() # update the learning rate (when using the official scheduler API)
 
    model.eval()
    epoch_test_acc, epoch_test_loss = test(test_dl, model, loss_fn)
    
    # Save the best model into best_model
    if epoch_test_acc > best_acc:
        best_acc = epoch_test_acc
        best_model = copy.deepcopy(model)

    train_acc.append(epoch_train_acc)
    train_loss.append(epoch_train_loss)
    test_acc.append(epoch_test_acc)
    test_loss.append(epoch_test_loss)

    # Get the current learning rate
    lr = optimizer.state_dict()['param_groups'][0]['lr']

    template = ('Epoches: {:2d}, Train_acc: {:.1f}%, Train_loss: {:.3f}, Test_acc: {:.1f}%, Test_loss: {:.3f}, Lr: {:.2E}')
    print(template.format(epoch+1, epoch_train_acc*100, epoch_train_loss, epoch_test_acc*100, epoch_test_loss, lr))

# Save the best model weights to a file
PATH = './best_model.pth' # filename for the saved weights
torch.save(best_model.state_dict(), PATH)
print('Done')

# Load the model
# model.load_state_dict(torch.load(PATH, map_location=device))

Output:

Epoches:  1, Train_acc: 7.7%, Train_loss: 2.918, Test_acc: 8.3%, Test_loss: 2.807, Lr: 1.00E-04
Epoches:  2, Train_acc: 7.4%, Train_loss: 2.876, Test_acc: 11.4%, Test_loss: 2.773, Lr: 9.80E-05
Epoches:  3, Train_acc: 8.4%, Train_loss: 2.840, Test_acc: 11.9%, Test_loss: 2.753, Lr: 9.80E-05
Epoches:  4, Train_acc: 11.5%, Train_loss: 2.813, Test_acc: 13.3%, Test_loss: 2.736, Lr: 9.60E-05
Epoches:  5, Train_acc: 11.0%, Train_loss: 2.793, Test_acc: 15.0%, Test_loss: 2.711, Lr: 9.60E-05
Epoches:  6, Train_acc: 12.0%, Train_loss: 2.766, Test_acc: 15.6%, Test_loss: 2.693, Lr: 9.41E-05
Epoches:  7, Train_acc: 11.9%, Train_loss: 2.751, Test_acc: 16.7%, Test_loss: 2.667, Lr: 9.41E-05
Epoches:  8, Train_acc: 12.3%, Train_loss: 2.715, Test_acc: 16.9%, Test_loss: 2.646, Lr: 9.22E-05
Epoches:  9, Train_acc: 12.7%, Train_loss: 2.711, Test_acc: 17.5%, Test_loss: 2.646, Lr: 9.22E-05
Epoches: 10, Train_acc: 14.3%, Train_loss: 2.689, Test_acc: 18.1%, Test_loss: 2.619, Lr: 9.04E-05
Epoches: 11, Train_acc: 15.0%, Train_loss: 2.668, Test_acc: 18.3%, Test_loss: 2.624, Lr: 9.04E-05
Epoches: 12, Train_acc: 15.1%, Train_loss: 2.665, Test_acc: 18.9%, Test_loss: 2.604, Lr: 8.86E-05
Epoches: 13, Train_acc: 16.3%, Train_loss: 2.627, Test_acc: 18.3%, Test_loss: 2.596, Lr: 8.86E-05
Epoches: 14, Train_acc: 16.3%, Train_loss: 2.632, Test_acc: 18.3%, Test_loss: 2.576, Lr: 8.68E-05
Epoches: 15, Train_acc: 17.3%, Train_loss: 2.596, Test_acc: 18.6%, Test_loss: 2.552, Lr: 8.68E-05
Epoches: 16, Train_acc: 18.1%, Train_loss: 2.579, Test_acc: 18.3%, Test_loss: 2.568, Lr: 8.51E-05
Epoches: 17, Train_acc: 18.4%, Train_loss: 2.576, Test_acc: 18.3%, Test_loss: 2.520, Lr: 8.51E-05
Epoches: 18, Train_acc: 18.4%, Train_loss: 2.566, Test_acc: 18.9%, Test_loss: 2.525, Lr: 8.34E-05
Epoches: 19, Train_acc: 16.7%, Train_loss: 2.563, Test_acc: 18.9%, Test_loss: 2.521, Lr: 8.34E-05
Epoches: 20, Train_acc: 17.2%, Train_loss: 2.544, Test_acc: 18.9%, Test_loss: 2.507, Lr: 8.17E-05
Epoches: 21, Train_acc: 19.3%, Train_loss: 2.537, Test_acc: 19.2%, Test_loss: 2.521, Lr: 8.17E-05
Epoches: 22, Train_acc: 18.8%, Train_loss: 2.523, Test_acc: 19.2%, Test_loss: 2.510, Lr: 8.01E-05
Epoches: 23, Train_acc: 19.0%, Train_loss: 2.525, Test_acc: 19.2%, Test_loss: 2.511, Lr: 8.01E-05
Epoches: 24, Train_acc: 18.7%, Train_loss: 2.506, Test_acc: 19.4%, Test_loss: 2.478, Lr: 7.85E-05
Epoches: 25, Train_acc: 19.6%, Train_loss: 2.495, Test_acc: 19.4%, Test_loss: 2.481, Lr: 7.85E-05
Epoches: 26, Train_acc: 19.9%, Train_loss: 2.497, Test_acc: 19.4%, Test_loss: 2.448, Lr: 7.69E-05
Epoches: 27, Train_acc: 18.7%, Train_loss: 2.486, Test_acc: 19.4%, Test_loss: 2.470, Lr: 7.69E-05
Epoches: 28, Train_acc: 21.3%, Train_loss: 2.456, Test_acc: 19.7%, Test_loss: 2.461, Lr: 7.54E-05
Epoches: 29, Train_acc: 19.3%, Train_loss: 2.463, Test_acc: 20.0%, Test_loss: 2.442, Lr: 7.54E-05
Epoches: 30, Train_acc: 20.6%, Train_loss: 2.443, Test_acc: 20.0%, Test_loss: 2.449, Lr: 7.39E-05
Epoches: 31, Train_acc: 19.7%, Train_loss: 2.455, Test_acc: 20.0%, Test_loss: 2.443, Lr: 7.39E-05
Epoches: 32, Train_acc: 20.3%, Train_loss: 2.430, Test_acc: 20.3%, Test_loss: 2.421, Lr: 7.24E-05
Epoches: 33, Train_acc: 21.9%, Train_loss: 2.427, Test_acc: 20.0%, Test_loss: 2.424, Lr: 7.24E-05
Epoches: 34, Train_acc: 20.8%, Train_loss: 2.413, Test_acc: 20.3%, Test_loss: 2.440, Lr: 7.09E-05
Epoches: 35, Train_acc: 21.2%, Train_loss: 2.435, Test_acc: 20.6%, Test_loss: 2.405, Lr: 7.09E-05
Epoches: 36, Train_acc: 23.0%, Train_loss: 2.392, Test_acc: 20.6%, Test_loss: 2.409, Lr: 6.95E-05
Epoches: 37, Train_acc: 20.7%, Train_loss: 2.417, Test_acc: 20.8%, Test_loss: 2.409, Lr: 6.95E-05
Epoches: 38, Train_acc: 21.9%, Train_loss: 2.410, Test_acc: 21.9%, Test_loss: 2.407, Lr: 6.81E-05
Epoches: 39, Train_acc: 21.0%, Train_loss: 2.396, Test_acc: 22.2%, Test_loss: 2.403, Lr: 6.81E-05
Epoches: 40, Train_acc: 20.3%, Train_loss: 2.399, Test_acc: 22.2%, Test_loss: 2.385, Lr: 6.68E-05
Done

Accuracy is low on both the training and test sets, which shows the model's fitting capacity is weak; this is addressed in the optimization section below.

IV. Visualizing the results

1. Loss and Accuracy curves

import matplotlib.pyplot as plt
# Hide warnings
import warnings
warnings.filterwarnings("ignore")               # ignore warning messages
plt.rcParams['font.sans-serif']    = ['SimHei'] # display Chinese labels correctly
plt.rcParams['axes.unicode_minus'] = False      # display minus signs correctly
plt.rcParams['figure.dpi']         = 100        # figure resolution

epochs_range = range(epoches)

plt.figure(figsize=(12, 3))
plt.subplot(1, 2, 1)

plt.plot(epochs_range, train_acc, label='Training Accuracy')
plt.plot(epochs_range, test_acc, label='Test Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, train_loss, label='Training Loss')
plt.plot(epochs_range, test_loss, label='Test Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()

2. Predict specified images

from PIL import Image
classes = list(total_data.class_to_idx)

def predict_one_image(image_path, model, transform, classes):
    test_img = Image.open(image_path)
    plt.imshow(test_img) # display the image being predicted
    test_img = transform(test_img)
    test_img = test_img.to(device).unsqueeze(0)

    model.eval()
    output = model(test_img)
    _, pred = torch.max(output, 1)
    pred_class = classes[pred]
    img_name = str(image_path).split('\\')[-1]
    print(f'Predicted class for {img_name}: {pred_class}')

Predict a single image

# Predict one photo from the training set
predict_one_image(image_path='./好莱坞明星识别/Johnny Depp/005_9406f32d.jpg', 
                  model=model, 
                  transform=train_transforms, 
                  classes=classes)

Output:

Predicted class for ./好莱坞明星识别/Johnny Depp/005_9406f32d.jpg: Brad Pitt

Predict multiple images

data_paths = './好莱坞明星识别/Johnny Depp/'
data_paths = pathlib.Path(data_paths)
test_imgs = [path for path in data_paths.glob('*')]
for i in range(10):
    predict_one_image(test_imgs[i], best_model, train_transforms, classes)

Output:

Predicted class for 001_2288a4f6.jpg: Johnny Depp
Predicted class for 002_331d0423.jpg: Brad Pitt
Predicted class for 003_64926b97.jpg: Brad Pitt
Predicted class for 004_18e08ab4.jpg: Johnny Depp
Predicted class for 005_9406f32d.jpg: Brad Pitt
Predicted class for 006_8fc31fd7.jpg: Robert Downey Jr
Predicted class for 007_1bc0bcd6.jpg: Leonardo DiCaprio
Predicted class for 008_35d1be70.jpg: Robert Downey Jr
Predicted class for 009_f4a38fec.jpg: Tom Cruise
Predicted class for 010_610eea60.jpg: Leonardo DiCaprio

3. Model evaluation

best_model.eval()
epoch_test_acc, epoch_test_loss = test(test_dl, best_model, loss_fn)

epoch_test_acc, epoch_test_loss

Output:

 (0.2222222222222222, 2.3882540265719094)
# Check whether this matches the highest accuracy we recorded
epoch_test_acc

Output:

0.2222222222222222

V. Model optimization

Goal: bring test-set accuracy to 60%

        Analysis: with the original code the training accuracy stays low (the model underfits), and although test accuracy rises slowly, the improvement per epoch is very small. I therefore tried raising the initial learning rate to 0.005 (trial-and-error tuning), which brought test accuracy to around 45%, though it converges within a dozen or so epochs.

        To then tackle the resulting overfitting, I made the following adjustments:

        1. Add a BN layer after each fully connected layer (works well)

        2. Set dropout to 0.4 (a slight improvement, but not much)

        3. Reduce the number of fully connected parameters (effective)

        4. Adjust the learning rate to 1e-3

        With these adjustments the test accuracy reaches 58%, still a little short of the 60% goal; I will keep trying new approaches.

        Overall, tuning the learning rate mattered more than changing the network structure, probably because the pretrained weights are not an ideal fit for this task.

from torchvision.models import vgg16

device = "cuda" if torch.cuda.is_available() else "cpu"
print("Using {} device".format(device))

# Load the pretrained model and fine-tune it
model = vgg16(pretrained=True).to(device)  # load the pretrained vgg16 model

for param in model.parameters():
    param.requires_grad = False  # freeze the backbone so that only the newly added classifier layers are trained


model.classifier = nn.Sequential(
    nn.Linear(512 * 7 * 7, 1024),
    nn.BatchNorm1d(1024),
    nn.Dropout(0.4),
    nn.Linear(1024, 128),
    nn.BatchNorm1d(128),
    nn.Dropout(0.4),
    nn.Linear(128, len(classeNames)),
    nn.Softmax(dim=1)  # specify dim explicitly (avoids the implicit-dim warning)
)
model.to(device)
model
learn_rate = 1e-3 # initial learning rate
lambda1 = lambda epoch: 0.92 ** (epoch // 4)
optimizer = torch.optim.Adam(model.parameters(), lr=learn_rate)
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lambda1) # choose the scheduling method

Output: (VGG16 has a large number of parameters and trains slowly; consider renting GPU compute if needed)

Epoch: 1, Train_acc:24.7%, Train_loss:2.694, Test_acc:30.6%, Test_loss:2.630, Lr:1.00E-03
Epoch: 2, Train_acc:57.8%, Train_loss:2.445, Test_acc:46.1%, Test_loss:2.562, Lr:1.00E-03
Epoch: 3, Train_acc:80.0%, Train_loss:2.224, Test_acc:50.8%, Test_loss:2.510, Lr:1.00E-03
Epoch: 4, Train_acc:91.7%, Train_loss:2.073, Test_acc:53.3%, Test_loss:2.455, Lr:9.20E-04
Epoch: 5, Train_acc:97.1%, Train_loss:1.995, Test_acc:56.7%, Test_loss:2.471, Lr:9.20E-04
Epoch: 6, Train_acc:98.8%, Train_loss:1.964, Test_acc:55.6%, Test_loss:2.454, Lr:9.20E-04
Epoch: 7, Train_acc:99.5%, Train_loss:1.947, Test_acc:53.3%, Test_loss:2.456, Lr:9.20E-04
Epoch: 8, Train_acc:99.8%, Train_loss:1.941, Test_acc:56.4%, Test_loss:2.445, Lr:8.46E-04
Epoch: 9, Train_acc:99.9%, Train_loss:1.937, Test_acc:56.7%, Test_loss:2.454, Lr:8.46E-04
Epoch:10, Train_acc:99.9%, Train_loss:1.935, Test_acc:56.4%, Test_loss:2.417, Lr:8.46E-04
Epoch:11, Train_acc:99.9%, Train_loss:1.933, Test_acc:57.5%, Test_loss:2.418, Lr:8.46E-04
Epoch:12, Train_acc:100.0%, Train_loss:1.933, Test_acc:56.1%, Test_loss:2.433, Lr:7.79E-04
Epoch:13, Train_acc:100.0%, Train_loss:1.932, Test_acc:55.8%, Test_loss:2.420, Lr:7.79E-04
Epoch:14, Train_acc:100.0%, Train_loss:1.931, Test_acc:56.7%, Test_loss:2.434, Lr:7.79E-04
Epoch:15, Train_acc:100.0%, Train_loss:1.931, Test_acc:57.5%, Test_loss:2.405, Lr:7.79E-04
Epoch:16, Train_acc:100.0%, Train_loss:1.931, Test_acc:57.5%, Test_loss:2.396, Lr:7.16E-04
Epoch:17, Train_acc:100.0%, Train_loss:1.930, Test_acc:56.9%, Test_loss:2.415, Lr:7.16E-04
Epoch:18, Train_acc:100.0%, Train_loss:1.930, Test_acc:57.5%, Test_loss:2.411, Lr:7.16E-04
Epoch:19, Train_acc:100.0%, Train_loss:1.931, Test_acc:55.6%, Test_loss:2.407, Lr:7.16E-04
Epoch:20, Train_acc:100.0%, Train_loss:1.930, Test_acc:56.9%, Test_loss:2.421, Lr:6.59E-04
Epoch:21, Train_acc:100.0%, Train_loss:1.930, Test_acc:57.5%, Test_loss:2.405, Lr:6.59E-04
Epoch:22, Train_acc:100.0%, Train_loss:1.930, Test_acc:56.9%, Test_loss:2.413, Lr:6.59E-04
Epoch:23, Train_acc:100.0%, Train_loss:1.930, Test_acc:59.2%, Test_loss:2.385, Lr:6.59E-04
Epoch:24, Train_acc:100.0%, Train_loss:1.930, Test_acc:57.2%, Test_loss:2.415, Lr:6.06E-04
Epoch:25, Train_acc:100.0%, Train_loss:1.930, Test_acc:58.3%, Test_loss:2.410, Lr:6.06E-04
Epoch:26, Train_acc:100.0%, Train_loss:1.930, Test_acc:57.8%, Test_loss:2.412, Lr:6.06E-04
Epoch:27, Train_acc:100.0%, Train_loss:1.930, Test_acc:58.1%, Test_loss:2.407, Lr:6.06E-04
Epoch:28, Train_acc:100.0%, Train_loss:1.930, Test_acc:58.1%, Test_loss:2.410, Lr:5.58E-04
