Deep Learning Training Camp, Week P4: Monkeypox Recognition

  • Language environment: Python 3.12
  • Editor: Jupyter Lab
  • Deep learning framework: PyTorch

I. Preliminary Preparation

1. Setting up the GPU

import torch
import torch.nn as nn
import torchvision
from torchvision import transforms, datasets

import os,PIL,pathlib

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

device

Output:

device(type='cuda')

2. Importing the data

The dataset for this week lives in the 第四周 subfolder of the data directory next to this notebook, as shown here:
[Image: location of the data folder next to the notebook]
Inside data it looks like this:
[Image: contents of the data folder, containing 第四周]
So we load it with the following code:

import os,PIL,random,pathlib

data_dir = './data/第四周/'
data_dir = pathlib.Path(data_dir) # convert the string path into a Path object

data_paths = list(data_dir.glob('*'))
classeNames = [path.name for path in data_paths]
classeNames

There are two classes this time: "Monkeypox" and "Others".
Output:

['Monkeypox', 'Others']

transforms.Compose was covered last week, so today we use it directly to standardize the data:

total_datadir = './data/第四周/'

train_transforms = transforms.Compose([
    transforms.Resize([224,224]),   # resize all input images to a uniform size
    transforms.ToTensor(),          # convert a PIL Image or numpy.ndarray to a tensor
    transforms.Normalize(           # normalize towards a standard normal (Gaussian) distribution, which helps the model converge
        mean = [0.485, 0.456, 0.406],
        std = [0.229, 0.224, 0.225]  # these mean/std values are the standard ImageNet statistics, commonly reused for natural-image datasets
    )
])

total_data = datasets.ImageFolder(total_datadir, transform = train_transforms)
total_data

Output:

Dataset ImageFolder
    Number of datapoints: 2142
    Root location: ./data/第四周/
    StandardTransform
Transform: Compose(
               Resize(size=[224, 224], interpolation=bilinear, max_size=None, antialias=True)
               ToTensor()
               Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
           )
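
Incidentally, if you wanted normalization statistics computed from this dataset rather than the ImageNet defaults, a minimal sketch looks like the following (plain_data and this loop are illustrative, not part of the original notebook):

plain_data = datasets.ImageFolder(
    './data/第四周/',
    transform=transforms.Compose([transforms.Resize([224, 224]),
                                  transforms.ToTensor()]))  # no Normalize here
loader = torch.utils.data.DataLoader(plain_data, batch_size=64)

channel_sum, channel_sq_sum, n_pixels = torch.zeros(3), torch.zeros(3), 0
for imgs, _ in loader:
    channel_sum += imgs.sum(dim=[0, 2, 3])           # sum over batch, H, W
    channel_sq_sum += (imgs ** 2).sum(dim=[0, 2, 3])
    n_pixels += imgs.numel() // 3                    # pixels per channel

mean = channel_sum / n_pixels
std = (channel_sq_sum / n_pixels - mean ** 2).sqrt()
print(mean, std)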

Next, let's look at the mapping between classes and indices:

total_data.class_to_idx

Output:

{'Monkeypox': 0, 'Others': 1}

total_data.class_to_idx is a dictionary that stores the dataset's class names and their corresponding indices. With PyTorch's ImageFolder loader, each subfolder of the dataset directory represents one class, and class_to_idx maps each class name to an integer index.
Concretely, since our dataset folder contains the two subfolders Monkeypox and Others, class_to_idx returns the mapping {'Monkeypox': 0, 'Others': 1}.
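
Since the model's predictions come back as indices, it is handy to invert this dictionary; a one-line sketch:

idx_to_class = {v: k for k, v in total_data.class_to_idx.items()}  # invert the mapping: index -> class name
print(idx_to_class[0])  # 'Monkeypox'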

3. Splitting the dataset

Next we split total_data into training and test sets, with 80% of the data used for training and 20% for testing.

train_size = int(0.8 * len(total_data))
test_size  = len(total_data) - train_size
train_dataset, test_dataset = torch.utils.data.random_split(total_data, [train_size, test_size])
train_dataset, test_dataset

Output:

(<torch.utils.data.dataset.Subset at 0x25e30516c30>,
 <torch.utils.data.dataset.Subset at 0x25e30516540>)
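
Note that random_split shuffles differently on every run. If you want the same split across notebook restarts, you can pass a seeded generator (the seed value 42 below is arbitrary):

train_dataset, test_dataset = torch.utils.data.random_split(
    total_data, [train_size, test_size],
    generator=torch.Generator().manual_seed(42))  # fixed seed, so the split is reproducible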

Let's check the sizes of the training and test sets:

train_size,test_size

Output:

(1713, 429)

Next we set the basic parameters and build the DataLoaders:

batch_size = 32

train_dl = torch.utils.data.DataLoader(train_dataset,
                                           batch_size=batch_size,
                                           shuffle=True,
                                           num_workers=4)
test_dl = torch.utils.data.DataLoader(test_dataset,
                                          batch_size=batch_size,
                                          shuffle=True,
                                          num_workers=4)

for X, y in test_dl:
    print("Shape of X [N, C, H, W]: ", X.shape)
    print("Shape of y: ", y.shape, y.dtype)
    break

Output:

Shape of X [N, C, H, W]:  torch.Size([32, 3, 224, 224])
Shape of y:  torch.Size([32]) torch.int64

torch.Size([32]) means each batch carries 32 labels; the labels have dtype torch.int64.

II. Building a Simple CNN

First, the network structure:
[Image: diagram of the CNN architecture]
From it we can write out the full model:

import torch.nn.functional as F

class Network_bn(nn.Module):
    def __init__(self):
        super(Network_bn, self).__init__()
        self.conv1 = nn.Conv2d(in_channels = 3, out_channels = 12, kernel_size = 5, stride = 1, padding = 0)
        self.bn1 = nn.BatchNorm2d(12) # batch normalization: stabilizes and speeds up training by normalizing each mini-batch, reducing shifts in the input distribution
        self.conv2 = nn.Conv2d(in_channels = 12, out_channels = 12, kernel_size = 5, stride = 1, padding = 0)
        self.bn2 = nn.BatchNorm2d(12)
        self.pool = nn.MaxPool2d(2,2)
        self.conv3 = nn.Conv2d(in_channels = 12, out_channels = 24, kernel_size = 5, stride = 1, padding = 0)
        self.bn3 = nn.BatchNorm2d(24)
        self.conv4 = nn.Conv2d(in_channels = 24, out_channels = 24, kernel_size = 5, stride = 1, padding = 0)
        self.bn4 = nn.BatchNorm2d(24)
        self.fc1 = nn.Linear(24*50*50,len(classeNames))

    def forward(self,x):
        x = F.relu(self.bn1(self.conv1(x)))
        x = F.relu(self.bn2(self.conv2(x)))
        x = self.pool(x)
        x = F.relu(self.bn3(self.conv3(x)))
        x = F.relu(self.bn4(self.conv4(x)))
        x = self.pool(x)
        x = x.view(-1, 24*50*50)
        x = self.fc1(x)

        return x

device = "cuda" if torch.cuda.is_available() else "cpu"
print("Using {} device".format(device))

model = Network_bn().to(device)
model

Output:

Using cuda device
Network_bn(
  (conv1): Conv2d(3, 12, kernel_size=(5, 5), stride=(1, 1))
  (bn1): BatchNorm2d(12, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (conv2): Conv2d(12, 12, kernel_size=(5, 5), stride=(1, 1))
  (bn2): BatchNorm2d(12, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  (conv3): Conv2d(12, 24, kernel_size=(5, 5), stride=(1, 1))
  (bn3): BatchNorm2d(24, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (conv4): Conv2d(24, 24, kernel_size=(5, 5), stride=(1, 1))
  (bn4): BatchNorm2d(24, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (fc1): Linear(in_features=60000, out_features=2, bias=True)
)
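
The in_features=60000 of fc1 follows from the shape arithmetic: each 5x5 convolution without padding shrinks the height and width by 4, and each 2x2 max-pool halves them, so 224 -> 220 -> 216 -> 108 -> 104 -> 100 -> 50, leaving 24 channels of 50x50 features (24*50*50 = 60000). A quick sanity check with a dummy input:

with torch.no_grad():
    out = model(torch.randn(1, 3, 224, 224).to(device))
print(out.shape)  # torch.Size([1, 2]); the view() inside forward() would fail if the size were wrong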

III. Training the Model

1. Setting the hyperparameters

loss_fn    = nn.CrossEntropyLoss() # loss function
learn_rate = 1e-4 # learning rate
opt        = torch.optim.SGD(model.parameters(),lr=learn_rate) # SGD optimizer
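
SGD with a learning rate of 1e-4 is a deliberately conservative choice. As a point of comparison (not used in the runs below), Adam adapts its step size per parameter and often converges faster on small CNNs like this one:

opt_adam = torch.optim.Adam(model.parameters(), lr=learn_rate)  # alternative optimizer: pass opt_adam to train() instead of opt to compare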

2. Writing the training function

def train(dataloader, model, loss_fn, optimizer):
    size = len(dataloader.dataset)
    num_batches = len(dataloader)

    train_loss, train_acc = 0, 0

    for X, y in dataloader:
        X, y = X.to(device), y.to(device)

        # compute the prediction error
        pred = model(X)
        loss = loss_fn(pred, y)

        # backpropagation
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        # record accuracy and loss
        train_acc += (pred.argmax(1) == y).type(torch.float).sum().item()
        train_loss += loss.item()
    train_acc /= size
    train_loss /= num_batches

    return train_acc, train_loss

Let's walk through this training function.

First, the function definition and its parameters:

train: the training function, which runs one complete pass over the data (one epoch). It takes four arguments:
dataloader: a PyTorch DataLoader that splits the dataset into mini-batches and feeds them to the model during training.
model: the neural network, which defines the forward-pass structure and maps inputs to predictions.
loss_fn: the loss function, which measures the error between the model's predictions and the true labels.
optimizer: the optimizer, which updates the model's parameters to minimize the loss.

Then the dataset size and number of batches are computed:

size: the total number of samples in the dataset; dataloader.dataset is the underlying dataset object, and len(dataloader.dataset) gives its size.
num_batches: the number of batches; len(dataloader) returns the batch count, i.e. the dataset size divided by the batch size, rounded up. Here that is ceil(1713/32) = 54 for the training set.

Next, the loss and accuracy are initialized:

train_loss and train_acc: running totals of the loss and the count of correct predictions for this epoch.

Then we iterate over the dataloader:

for X, y in dataloader: loops over each batch in the DataLoader.
X is the input data (here, image tensors).
y is the corresponding labels (the class indices for a classification task).
Each iteration yields one mini-batch rather than the whole dataset.
The data is then moved to the GPU:
X.to(device) and y.to(device): move the inputs X and labels y onto the chosen device (CPU or GPU); device was set earlier to "cuda" or "cpu". This ensures the data and the model run their computations on the same device.

Next come the forward pass and the loss computation:

pred = model(X): runs a forward pass on the current batch X to get the model's predictions pred. model(X) invokes the model's forward method (which defines the forward-pass logic).

loss = loss_fn(pred, y): computes the error (loss) between the predictions pred and the true labels y. loss_fn is the loss function (here cross-entropy), pred is the model output, and y is the target labels.

Then backpropagation and the parameter update:

optimizer.zero_grad(): clears the previously accumulated gradients before each update. PyTorch accumulates gradients by default, so they must be zeroed before each backward pass.

loss.backward(): computes the gradient of the current batch's loss with respect to every model parameter, i.e. runs backpropagation.

optimizer.step(): updates the model parameters using the gradients just computed (here via stochastic gradient descent; other optimizers such as Adam work the same way).

Then the accuracy and loss are recorded:

pred.argmax(1): pred holds the model's raw class scores; argmax(1) returns, for each sample, the class with the highest score (the predicted class). The 1 means the argmax is taken along dimension 1, the class dimension, so we get one prediction per sample.

pred.argmax(1) == y: compares the predicted classes with the true labels y, giving a boolean tensor that marks which samples were classified correctly.

.type(torch.float): converts the boolean tensor to floats (True becomes 1.0, False becomes 0.0).

.sum().item(): sum() counts the correctly predicted samples in the batch, and .item() converts the result to a Python scalar before it is added to train_acc.

train_loss += loss.item(): adds the current batch's loss (as a scalar) to train_loss; loss.item() converts the PyTorch tensor to a Python number.

Finally, the averages are computed:

train_acc /= size: divides the accumulated count of correct predictions by the dataset size, giving the epoch's average accuracy.

train_loss /= num_batches: divides the accumulated loss by the number of batches, giving the epoch's average loss.
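
A toy illustration of this bookkeeping, with made-up logits for two samples:

logits = torch.tensor([[2.0, 0.5],   # sample 1: class 0 scores higher
                       [0.1, 1.5]])  # sample 2: class 1 scores higher
labels = torch.tensor([0, 0])
correct = (logits.argmax(1) == labels).type(torch.float).sum().item()
print(correct)  # 1.0: only sample 1 matches its label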

3. Writing the test function

The test function largely mirrors the training function, but it performs no backpropagation or weight updates, so no optimizer is passed in:

def test(dataloader, model, loss_fn):
    size = len(dataloader.dataset)
    num_batches = len(dataloader)
    test_loss, test_acc = 0, 0

    with torch.no_grad():
        for imgs, target in dataloader:
            imgs, target = imgs.to(device), target.to(device)

            target_pred = model(imgs)
            loss = loss_fn(target_pred, target)

            test_loss += loss.item()
            test_acc +=(target_pred.argmax(1) == target).type(torch.float).sum().item()

        test_acc /= size
        test_loss /= num_batches

        return test_acc, test_loss
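
torch.no_grad() disables gradient tracking, saving memory and compute since evaluation never calls backward(). On PyTorch 1.9 and later, torch.inference_mode() is a stricter, slightly faster alternative; a small sketch of the same idea:

with torch.inference_mode():  # like no_grad(), but the outputs can never re-enter autograd
    imgs, target = next(iter(test_dl))
    preds = model(imgs.to(device)).argmax(1)
print(preds[:8])  # predicted class indices for the first few samples of one batch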

4. Running the training

With those pieces in place, the training loop itself is relatively simple:

epochs = 20
train_loss = []
train_acc = []
test_loss = []
test_acc = []

for epoch in range(epochs):
    model.train()
    epoch_train_acc, epoch_train_loss = train(train_dl, model, loss_fn, opt)

    model.eval()
    epoch_test_acc, epoch_test_loss = test(test_dl, model, loss_fn)

    train_acc.append(epoch_train_acc)
    train_loss.append(epoch_train_loss)
    test_acc.append(epoch_test_acc)
    test_loss.append(epoch_test_loss)

    template = ('Epoch:{:2d}, Train_acc:{:.1f}%, Train_loss:{:.3f}, Test_acc:{:.1f}%,Test_loss:{:.3f}')
    print(template.format(epoch+1, epoch_train_acc*100, epoch_train_loss, epoch_test_acc*100, epoch_test_loss))
print('Done') 

Output:

Epoch: 1, Train_acc:60.0%, Train_loss:0.699, Test_acc:64.8%,Test_loss:0.619
Epoch: 2, Train_acc:69.4%, Train_loss:0.578, Test_acc:66.0%,Test_loss:0.640
Epoch: 3, Train_acc:74.8%, Train_loss:0.523, Test_acc:70.9%,Test_loss:0.605
Epoch: 4, Train_acc:75.5%, Train_loss:0.494, Test_acc:73.7%,Test_loss:0.565
Epoch: 5, Train_acc:78.6%, Train_loss:0.461, Test_acc:74.8%,Test_loss:0.507
Epoch: 6, Train_acc:82.4%, Train_loss:0.424, Test_acc:76.5%,Test_loss:0.486
Epoch: 7, Train_acc:82.8%, Train_loss:0.411, Test_acc:78.8%,Test_loss:0.476
Epoch: 8, Train_acc:85.3%, Train_loss:0.387, Test_acc:73.0%,Test_loss:0.515
Epoch: 9, Train_acc:85.7%, Train_loss:0.367, Test_acc:80.0%,Test_loss:0.460
Epoch:10, Train_acc:87.0%, Train_loss:0.350, Test_acc:76.2%,Test_loss:0.477
Epoch:11, Train_acc:88.8%, Train_loss:0.333, Test_acc:80.7%,Test_loss:0.459
Epoch:12, Train_acc:88.6%, Train_loss:0.327, Test_acc:80.4%,Test_loss:0.432
Epoch:13, Train_acc:89.7%, Train_loss:0.320, Test_acc:81.8%,Test_loss:0.425
Epoch:14, Train_acc:90.8%, Train_loss:0.297, Test_acc:83.0%,Test_loss:0.412
Epoch:15, Train_acc:90.4%, Train_loss:0.289, Test_acc:83.7%,Test_loss:0.407
Epoch:16, Train_acc:91.0%, Train_loss:0.284, Test_acc:82.3%,Test_loss:0.405
Epoch:17, Train_acc:91.3%, Train_loss:0.273, Test_acc:80.2%,Test_loss:0.419
Epoch:18, Train_acc:91.6%, Train_loss:0.268, Test_acc:82.8%,Test_loss:0.408
Epoch:19, Train_acc:93.3%, Train_loss:0.256, Test_acc:81.1%,Test_loss:0.411
Epoch:20, Train_acc:93.2%, Train_loss:0.244, Test_acc:83.9%,Test_loss:0.384
Done

Training accuracy reaches 93.2% and test accuracy 83.9%, which is a decent result.

IV. Visualizing the Results

1. Loss and accuracy curves

import matplotlib.pyplot as plt
import warnings

warnings.filterwarnings("ignore")               # suppress warning messages
plt.rcParams['font.sans-serif']    = ['SimHei'] # render Chinese labels correctly
plt.rcParams['axes.unicode_minus'] = False      # render minus signs correctly
plt.rcParams['figure.dpi']         = 300        # figure resolution

epochs_range = range(epochs)

plt.figure(figsize=(12, 3))
plt.subplot(1, 2, 1)

plt.plot(epochs_range, train_acc, label='Training Accuracy')
plt.plot(epochs_range, test_acc, label='Test Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, train_loss, label='Training Loss')
plt.plot(epochs_range, test_loss, label='Test Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()

Output:
[Image: training/test accuracy and loss curves over 20 epochs]

2. Predicting a specified image

from PIL import Image 

classes = list(total_data.class_to_idx) # turn the class-name-to-index mapping from total_data.class_to_idx into a list, so class names can be looked up by index

def predict_one_image(image_path, model, transform, classes):
    
    test_img = Image.open(image_path).convert('RGB') # open the image with PIL and convert it to RGB, guaranteeing three channels (even for grayscale input)
    #plt.imshow(test_img)  # display the image being predicted

    test_img = transform(test_img)
    img = test_img.to(device).unsqueeze(0)
    
    model.eval()
    output = model(img)

    _,pred = torch.max(output,1) # take the max along dimension 1 (the class dimension); returns both the max values and their indices
    pred_class = classes[pred]
    print(f'Predicted class: {pred_class}')


# predict one image from the training set
predict_one_image(image_path='./data/第四周/Monkeypox/M01_01_00.jpg', 
                  model=model, 
                  transform=train_transforms, 
                  classes=classes)

Predicting an image from the training set gives:

Predicted class: Monkeypox
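
predict_one_image only prints the hard label. If you also want a confidence score, applying softmax to the logits is a small extension; a sketch reusing the objects defined above:

img = train_transforms(Image.open('./data/第四周/Monkeypox/M01_01_00.jpg').convert('RGB'))
model.eval()
with torch.no_grad():
    probs = torch.softmax(model(img.to(device).unsqueeze(0)), dim=1)  # logits -> probabilities
conf, pred = torch.max(probs, 1)
print(f'Predicted: {classes[pred]}, confidence: {conf.item():.2%}')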

V. Saving and Loading the Model

# save the model weights
PATH = './model.pth'  # file name for the saved parameters
torch.save(model.state_dict(), PATH)

# load the parameters back into the model
model.load_state_dict(torch.load(PATH, map_location=device))

Output:

<All keys matched successfully>
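
torch.save(model.state_dict(), PATH) stores only the weights. If you plan to resume training later, it is common to checkpoint the optimizer state and epoch count as well; a sketch (the checkpoint.pth file name is arbitrary):

# save a resumable checkpoint
torch.save({'model_state_dict': model.state_dict(),
            'optimizer_state_dict': opt.state_dict(),
            'epoch': epochs}, './checkpoint.pth')

# restore it later
ckpt = torch.load('./checkpoint.pth', map_location=device)
model.load_state_dict(ckpt['model_state_dict'])
opt.load_state_dict(ckpt['optimizer_state_dict'])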

VI. Summary

This week we used a simple CNN to recognize monkeypox. The workflow matches earlier weeks: standardize the data, build the network, train it, and run predictions.
To finish, we try optimizing the model with a dynamic learning rate:

scheduler = torch.optim.lr_scheduler.ExponentialLR(opt, gamma=0.2)  # multiply the learning rate by 0.2 after every epoch


for epoch in range(epochs):
    model.train()
    epoch_train_acc, epoch_train_loss = train(train_dl, model, loss_fn, opt)
    scheduler.step()

    model.eval()
    epoch_test_acc, epoch_test_loss = test(test_dl, model, loss_fn)

    train_acc.append(epoch_train_acc)
    train_loss.append(epoch_train_loss)
    test_acc.append(epoch_test_acc)
    test_loss.append(epoch_test_loss)

    template = ('Epoch:{:2d}, Train_acc:{:.1f}%, Train_loss:{:.3f}, Test_acc:{:.1f}%,Test_loss:{:.3f}')
    print(template.format(epoch+1, epoch_train_acc*100, epoch_train_loss, epoch_test_acc*100, epoch_test_loss))
print('Done')   

But the output:

Epoch: 1, Train_acc:64.7%, Train_loss:0.654, Test_acc:69.5%,Test_loss:0.592
Epoch: 2, Train_acc:70.6%, Train_loss:0.578, Test_acc:72.7%,Test_loss:0.556
Epoch: 3, Train_acc:74.2%, Train_loss:0.524, Test_acc:73.2%,Test_loss:0.506
Epoch: 4, Train_acc:75.2%, Train_loss:0.493, Test_acc:76.0%,Test_loss:0.490
Epoch: 5, Train_acc:78.6%, Train_loss:0.463, Test_acc:76.9%,Test_loss:0.485
Epoch: 6, Train_acc:79.6%, Train_loss:0.450, Test_acc:77.9%,Test_loss:0.455
Epoch: 7, Train_acc:81.7%, Train_loss:0.430, Test_acc:78.1%,Test_loss:0.450
Epoch: 8, Train_acc:82.1%, Train_loss:0.415, Test_acc:76.9%,Test_loss:0.461
Epoch: 9, Train_acc:82.7%, Train_loss:0.405, Test_acc:79.3%,Test_loss:0.443
Epoch:10, Train_acc:84.2%, Train_loss:0.396, Test_acc:78.8%,Test_loss:0.443
Epoch:11, Train_acc:84.4%, Train_loss:0.388, Test_acc:80.4%,Test_loss:0.426
Epoch:12, Train_acc:84.3%, Train_loss:0.387, Test_acc:80.9%,Test_loss:0.431
Epoch:13, Train_acc:86.5%, Train_loss:0.375, Test_acc:80.4%,Test_loss:0.430
Epoch:14, Train_acc:85.3%, Train_loss:0.376, Test_acc:80.2%,Test_loss:0.425
Epoch:15, Train_acc:85.8%, Train_loss:0.373, Test_acc:80.4%,Test_loss:0.420
Epoch:16, Train_acc:87.0%, Train_loss:0.364, Test_acc:80.9%,Test_loss:0.412
Epoch:17, Train_acc:86.9%, Train_loss:0.361, Test_acc:80.4%,Test_loss:0.418
Epoch:18, Train_acc:86.9%, Train_loss:0.357, Test_acc:80.4%,Test_loss:0.417
Epoch:19, Train_acc:87.0%, Train_loss:0.356, Test_acc:80.9%,Test_loss:0.430
Epoch:20, Train_acc:87.1%, Train_loss:0.354, Test_acc:80.7%,Test_loss:0.415
Done

This is actually worse than the fixed learning rate. Why? Part of it is the schedule itself: ExponentialLR with gamma=0.2 multiplies the learning rate by 0.2 after every epoch, so by epoch 5 it has already collapsed from 1e-4 to roughly 3e-8, and the later epochs barely move the weights. Let's try a gentler schedule:

scheduler = torch.optim.lr_scheduler.StepLR(opt, step_size=5, gamma=0.1)  # multiply the learning rate by 0.1 every 5 epochs

Output:

Epoch: 1, Train_acc:56.1%, Train_loss:2.119, Test_acc:72.7%,Test_loss:0.686
Epoch: 2, Train_acc:67.3%, Train_loss:0.988, Test_acc:58.0%,Test_loss:1.922
Epoch: 3, Train_acc:77.9%, Train_loss:0.550, Test_acc:82.3%,Test_loss:0.519
Epoch: 4, Train_acc:80.4%, Train_loss:0.473, Test_acc:80.0%,Test_loss:0.486
Epoch: 5, Train_acc:81.1%, Train_loss:0.487, Test_acc:79.7%,Test_loss:0.493
Epoch: 6, Train_acc:89.7%, Train_loss:0.265, Test_acc:86.5%,Test_loss:0.377
Epoch: 7, Train_acc:91.4%, Train_loss:0.240, Test_acc:86.5%,Test_loss:0.368
Epoch: 8, Train_acc:92.1%, Train_loss:0.226, Test_acc:86.5%,Test_loss:0.353
Epoch: 9, Train_acc:91.6%, Train_loss:0.231, Test_acc:87.4%,Test_loss:0.369
Epoch:10, Train_acc:92.3%, Train_loss:0.224, Test_acc:87.6%,Test_loss:0.359
Epoch:11, Train_acc:92.1%, Train_loss:0.210, Test_acc:87.6%,Test_loss:0.352
Epoch:12, Train_acc:92.4%, Train_loss:0.220, Test_acc:87.6%,Test_loss:0.373
Epoch:13, Train_acc:92.5%, Train_loss:0.210, Test_acc:86.7%,Test_loss:0.349
Epoch:14, Train_acc:92.3%, Train_loss:0.215, Test_acc:87.2%,Test_loss:0.361
Epoch:15, Train_acc:92.3%, Train_loss:0.212, Test_acc:87.4%,Test_loss:0.355
Epoch:16, Train_acc:91.7%, Train_loss:0.217, Test_acc:86.7%,Test_loss:0.349
Epoch:17, Train_acc:92.6%, Train_loss:0.211, Test_acc:87.9%,Test_loss:0.379
Epoch:18, Train_acc:92.5%, Train_loss:0.214, Test_acc:87.6%,Test_loss:0.363
Epoch:19, Train_acc:92.2%, Train_loss:0.217, Test_acc:88.1%,Test_loss:0.361
Epoch:20, Train_acc:92.5%, Train_loss:0.214, Test_acc:88.1%,Test_loss:0.358
Done

This works much better, though test accuracy still falls short of 90%. More to try next time!
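
When comparing schedules like this, it helps to print the learning rate each epoch; scheduler.get_last_lr() exposes it. A sketch of the instrumented loop (the per-epoch training call is elided):

scheduler = torch.optim.lr_scheduler.StepLR(opt, step_size=5, gamma=0.1)
for epoch in range(epochs):
    # ... run train() and test() for this epoch, as above ...
    scheduler.step()
    print(f'epoch {epoch+1}: lr = {scheduler.get_last_lr()[0]:.2e}')

Printed out this way, the difference is plain: ExponentialLR with gamma=0.2 shrinks the rate fivefold every epoch, while StepLR holds it steady for five epochs at a time before each drop.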
