Saving and Loading a PyTorch Model and Hyperparameters: Based on the Kaggle Leaf Classification Task

​ Nowadays, deep learning training runs ("alchemy", as we like to call them) routinely take hours or more. If you do not save the best weights obtained so far, a sudden interruption can waste a great deal of time. This post shows how to save the best parameters from a PyTorch training run, and how to start the next run from that checkpoint.

Saving Model Parameters

​ In PyTorch, we can call torch.save(checkpoint, model_path) to save a checkpoint dictionary. The checkpoint dict holds whatever you want to persist, e.g. the model weights model.state_dict(), the optimizer state optimizer.state_dict(), the best accuracy best_acc, and the epoch at which the last run was interrupted.

model_path = './pre_res_model.ckpt'  # where the checkpoint file is saved

# Inside the training loop: if this epoch produced a better model, save a checkpoint
if valid_acc > best_acc:
    best_acc = valid_acc
    # Only the best result is saved; otherwise the file is left untouched
    # Put whatever you want to persist into the checkpoint dict
    checkpoint = {
        'model_state_dict': model.state_dict(),          # model weights
        'optimizer_state_dict': optimizer.state_dict(),  # optimizer state
        'best_acc': best_acc,                            # best validation accuracy
        'epoch': epoch                                   # current epoch, for resuming
    }
    # Save the checkpoint; here model_path = './pre_res_model.ckpt'
    torch.save(checkpoint, model_path)
    print(colored('saving model with acc {:.3f}'.format(best_acc), 'red'))

​ This snippet belongs inside the training loop. With it, the best model parameters found during training are saved to pre_res_model.ckpt. The next time you continue a run or modify the model, you can load this file and avoid the time cost of retraining from scratch.
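
If you want to double-check what was written, a minimal sketch (assuming the checkpoint dict and model_path defined above) is to load the file back and list its keys:

import torch

# Load the saved checkpoint on CPU just to inspect its contents
checkpoint = torch.load('./pre_res_model.ckpt', map_location='cpu')
print(checkpoint.keys())  # dict_keys(['model_state_dict', 'optimizer_state_dict', 'best_acc', 'epoch'])
print(checkpoint['best_acc'], checkpoint['epoch'])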

[Screenshot: training log at the point the checkpoint is saved]
​ Here we interrupted training after epoch 4, having saved the best model so far with acc = 0.799; the file was written to the current working directory. Next, let's load this file so it can be used later.

Loading Model Parameters

​ We will implement a function, load_best_model(), that loads the saved .ckpt file and retrieves the stored information; it returns the saved best accuracy best_acc and the epoch recorded in the checkpoint.

[Screenshot: output printed by load_best_model]

In PyTorch, torch.load() reads the saved checkpoint file back into a dictionary whose layout matches the checkpoint dict we defined when saving. We then call model.load_state_dict() and optimizer.load_state_dict() to restore the corresponding states from that dictionary, as implemented in the code below:

# Load the best saved model state
def load_best_model(model, optimizer, model_path):
    # Check whether the checkpoint file exists
    if not os.path.isfile(model_path):
        print(f"No checkpoint found at {model_path}; training from scratch!")
        # Fall back to default values
        best_acc = 0.0
        start_epoch = 0
        return best_acc, start_epoch

    # Load the checkpoint file
    checkpoint = torch.load(model_path, map_location = torch.device('cuda' if torch.cuda.is_available() else 'cpu'))

    # Restore the model weights.
    # Make sure the checkpoint dict contains the 'model_state_dict' key
    if 'model_state_dict' in checkpoint:
        model.load_state_dict(checkpoint['model_state_dict'])
        print(f"Loaded model from {model_path}")
    else:
        print(f"No 'model_state_dict' found in checkpoint at {model_path}")

    # If the checkpoint contains the optimizer state, restore it as well
    if 'optimizer_state_dict' in checkpoint:
        optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
        print(f"Loaded optimizer state from {model_path}")
    else:
        print(f"No optimizer state found in checkpoint at {model_path}")

    # If the checkpoint contains best_acc, restore it as well
    if 'best_acc' in checkpoint:
        best_acc = checkpoint['best_acc']  # best validation accuracy from previous runs
        print(f"best_acc = {best_acc}")
    else:
        best_acc = 0.0
        print("No best_acc in checkpoint. Training will start from best_acc = 0.0")

    # If the checkpoint contains the epoch, restore it as well
    if 'epoch' in checkpoint:
        start_epoch = checkpoint['epoch'] + 1  # +1 because we resume from the next epoch
        print(f"Resuming from epoch {start_epoch}")
    else:
        start_epoch = 0
        print("No epoch found in checkpoint. Training will start from epoch 0")

    # Return the restored values
    return best_acc, start_epoch
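
A typical call site (mirroring the full script further below) creates the model and optimizer first, then resumes from wherever the checkpoint left off:

best_acc, start_epoch = load_best_model(model = model, optimizer = optimizer, model_path = model_path)

for epoch in range(start_epoch, n_epochs):
    ...  # training and validation as usual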

​ Now that saving and loading both work, let's see what the next training run looks like!

[Screenshot: training log of the resumed run]
​ As you can see, the next run starts directly from epoch 5, and its validation accuracy, acc = 0.76884, is below the best accuracy saved by the previous run, best_acc = 0.7992, so no checkpoint is written for epoch 5; only at epoch 6, when the validation accuracy exceeds best_acc, is the model saved again.

Complete Runnable Code

​ This project is based on the Classify Leaves | Kaggle competition (the dataset can be downloaded there) and builds on the baseline code generously shared in simple resnet baseline (kaggle.com). I added the save/load functionality because my laptop trains slowly, I repeatedly had to adjust the code after unsatisfying results, and I often need to move the machine, so it cannot run continuously. With this feature in place, things became much more comfortable. I hope it helps; questions and corrections are welcome.

# %%
# Import the packages first
import torch
import torch.nn as nn
import pandas as pd
import numpy as np
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms
from PIL import Image
import os
import matplotlib.pyplot as plt
import torchvision.models as models
# This is for the progress bar.
from tqdm import tqdm
import seaborn as sns

# %%
labels_dataframe = pd.read_csv('./classify-leaves/train.csv')
labels_dataframe.head(3)

# %%
labels_dataframe.describe()

# %%
# Sort the set of labels
leaves_labels = sorted(list(set(labels_dataframe['label'])))
n_classes = len(leaves_labels)  # 176 classes
print(n_classes)
leaves_labels[:10]

# %%
# Map each label to a number (a one-to-one mapping from class name to index)
class_to_num = dict(zip(leaves_labels, range(n_classes)))
class_to_num

# %%
# And the reverse mapping, for converting predictions back to class names
num_to_class = {v : k for k, v in class_to_num.items()}

# %%
# Subclass PyTorch's Dataset to build our own dataset
class LeavesData(Dataset):
    def __init__(self, csv_path, file_path, mode='train', valid_ratio=0.2, resize_height=256, resize_width=256):
        """
        Args:
            csv_path (string): csv 文件路径
            img_path (string): 图像文件所在路径
            mode (string): 训练模式还是测试模式
            valid_ratio (float): 验证集比例
        """
        # 需要调整后的照片尺寸,我这里每张图片的大小尺寸不一致#
        self.resize_height = resize_height
        self.resize_width = resize_width

        self.file_path = file_path
        self.mode = mode

        # Read the csv file with pandas.
        # header=None keeps the header row as data, so real samples start at row 1
        self.data_info = pd.read_csv(csv_path, header=None)
        # Number of samples (minus the header row)
        self.data_len = len(self.data_info.index) - 1
        self.train_len = int(self.data_len * (1 - valid_ratio))
        
        if mode == 'train':
            # The first column holds the image file names
            self.train_image = np.asarray(self.data_info.iloc[1:self.train_len, 0])
            # iloc[1:self.train_len, 0] reads column 0 from row 1 (after the header) up to train_len
            # The second column holds the labels
            self.train_label = np.asarray(self.data_info.iloc[1:self.train_len, 1])
            self.image_arr = self.train_image 
            self.label_arr = self.train_label
        elif mode == 'valid':
            self.valid_image = np.asarray(self.data_info.iloc[self.train_len:, 0])  
            self.valid_label = np.asarray(self.data_info.iloc[self.train_len:, 1])
            self.image_arr = self.valid_image
            self.label_arr = self.valid_label
        elif mode == 'test':
            self.test_image = np.asarray(self.data_info.iloc[1:, 0])
            self.image_arr = self.test_image
            
        self.real_len = len(self.image_arr)

        print('Finished reading the {} set of Leaves Dataset ({} samples found)'
              .format(mode, self.real_len))

    def __getitem__(self, index):
        # Get the file name for this index from image_arr
        single_image_name = self.image_arr[index]

        # Open the image file
        img_as_img = Image.open(self.file_path + single_image_name)

        # To convert 3-channel RGB images to grayscale, uncomment the two lines below
#         if img_as_img.mode != 'L':
#             img_as_img = img_as_img.convert('L')

        # Set up the transforms; you can also add normalization and other operations here
        if self.mode == 'train':
            transform = transforms.Compose([
                transforms.Resize((224, 224)),
                transforms.RandomHorizontalFlip(p=0.5),   # random horizontal flip with the given probability
                transforms.ToTensor()
            ])
        else:
            # No data augmentation for valid and test
            transform = transforms.Compose([
                transforms.Resize((224, 224)),
                transforms.ToTensor()
            ])
        
        img_as_img = transform(img_as_img)
        
        if self.mode == 'test':
            return img_as_img
        else:
            # Get the string label for this image
            label = self.label_arr[index]
            # Convert it to a numeric label
            number_label = class_to_num[label]

            return img_as_img, number_label  # return the image tensor and its numeric label

    def __len__(self):
        return self.real_len


# %%
train_path = './classify-leaves/train.csv'
test_path = './classify-leaves/test.csv'
# The paths in the csv already include the images/ prefix, so only the parent directory is given here
img_path = './classify-leaves/'

train_dataset = LeavesData(train_path, img_path, mode='train')
val_dataset = LeavesData(train_path, img_path, mode='valid')
test_dataset = LeavesData(test_path, img_path, mode='test')
print(train_dataset)
print(val_dataset)
print(test_dataset)

# %%
batch_size = 32

# %%
# Define the data loaders.
# Note: the baseline keeps shuffle=False even for training; shuffle=True is more typical.
train_loader = torch.utils.data.DataLoader(
        dataset=train_dataset,
        batch_size=batch_size, 
        shuffle=False
    )

val_loader = torch.utils.data.DataLoader(
        dataset=val_dataset,
        batch_size=batch_size, 
        shuffle=False
    )
test_loader = torch.utils.data.DataLoader(
        dataset=test_dataset,
        batch_size=batch_size, 
        shuffle=False
    )

# %%
# Check whether we are running on CPU or GPU
def get_device():
    return 'cuda' if torch.cuda.is_available() else 'cpu'

device = get_device()
print(device)

# %%
# Optionally freeze the earlier layers of the model
def set_parameter_requires_grad(model, feature_extracting):
    if feature_extracting:
        for param in model.parameters():
            param.requires_grad = False
            
# ResNet-34 model
def res_model(num_classes, feature_extract = False, use_pretrained=True):
    model_ft = models.resnet34(pretrained=use_pretrained)
    set_parameter_requires_grad(model_ft, feature_extract)
    num_ftrs = model_ft.fc.in_features
    model_ft.fc = nn.Sequential(nn.Linear(num_ftrs, num_classes))
    return model_ft

# %%
# Hyperparameters
learning_rate = 3e-4
weight_decay = 1e-3
num_epoch = 66
model_path = './pre_res_model.ckpt'

# %%
from termcolor import colored

# %%
# Initialize a model, and put it on the device specified.
model = res_model(176)
model = model.to(device)
model.device = device
# For the classification task, we use cross-entropy as the measurement of performance.
criterion = nn.CrossEntropyLoss()

# Initialize optimizer, you may fine-tune some hyperparameters such as learning rate on your own.
optimizer = torch.optim.Adam(model.parameters(), lr = learning_rate, weight_decay=weight_decay)

# The number of training epochs.
n_epochs = num_epoch

# %%
# Load the best saved model state
def load_best_model(model, optimizer, model_path):
    # Check whether the checkpoint file exists
    if not os.path.isfile(model_path):
        print(f"No checkpoint found at {model_path}; training from scratch!")
        # Fall back to default values
        best_acc = 0.0
        start_epoch = 0
        return best_acc, start_epoch

    # Load the checkpoint file
    checkpoint = torch.load(model_path, map_location = torch.device('cuda' if torch.cuda.is_available() else 'cpu'))

    # Restore the model weights.
    # Make sure the checkpoint dict contains the 'model_state_dict' key
    if 'model_state_dict' in checkpoint:
        model.load_state_dict(checkpoint['model_state_dict'])
        print(f"Loaded model from {model_path}")
    else:
        print(f"No 'model_state_dict' found in checkpoint at {model_path}")

    # If the checkpoint contains the optimizer state, restore it as well
    if 'optimizer_state_dict' in checkpoint:
        optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
        print(f"Loaded optimizer state from {model_path}")
    else:
        print(f"No optimizer state found in checkpoint at {model_path}")

    # If the checkpoint contains best_acc, restore it as well
    if 'best_acc' in checkpoint:
        best_acc = checkpoint['best_acc']  # best validation accuracy from previous runs
        print(f"best_acc = {best_acc}")
    else:
        best_acc = 0.0
        print("No best_acc in checkpoint. Training will start from best_acc = 0.0")

    # If the checkpoint contains the epoch, restore it as well
    if 'epoch' in checkpoint:
        start_epoch = checkpoint['epoch'] + 1  # +1 because we resume from the next epoch
        print(f"Resuming from epoch {start_epoch}")
    else:
        start_epoch = 0
        print("No epoch found in checkpoint. Training will start from epoch 0")

    # Return the restored values
    return best_acc, start_epoch
    
# %%
best_acc, start_epoch = load_best_model(model = model, optimizer = optimizer, model_path = model_path)

# %%
for epoch in range(start_epoch, n_epochs):
    # ---------- Training ----------
    # Make sure the model is in train mode before training.
    # Print the epoch number
    print(f"epoch = {epoch + 1}")
    
    model.train() 
    # These are used to record information in training.
    train_loss = []
    train_accs = []
    # Iterate the training set by batches.
    for batch in tqdm(train_loader):
        # A batch consists of image data and corresponding labels.
        imgs, labels = batch
        imgs = imgs.to(device)
        labels = labels.to(device)
        # Forward the data. (Make sure data and model are on the same device.)
        logits = model(imgs)
        # Calculate the cross-entropy loss.
        # We don't need to apply softmax before computing cross-entropy as it is done automatically.
        loss = criterion(logits, labels)
        
        # Gradients stored in the parameters in the previous step should be cleared out first.
        optimizer.zero_grad()
        # Compute the gradients for parameters.
        loss.backward()
        # Update the parameters with computed gradients.
        optimizer.step()
        
        # Compute the accuracy for current batch.
        acc = (logits.argmax(dim=-1) == labels).float().mean()

        # Record the loss and accuracy.
        train_loss.append(loss.item())
        train_accs.append(acc)
        
    # The average loss and accuracy of the training set is the average of the recorded values.
    train_loss = sum(train_loss) / len(train_loss)
    train_acc = sum(train_accs) / len(train_accs)

    # Print the information.
    print(f"[ Train | {epoch + 1:03d}/{n_epochs:03d} ] loss = {train_loss:.5f}, acc = {train_acc:.5f}")  
    
    # ---------- Validation ----------
    # Make sure the model is in eval mode so that some modules like dropout are disabled and work normally.
    model.eval()
    # These are used to record information in validation.
    valid_loss = []
    valid_accs = []
    # Iterate the validation set by batches.
    for batch in tqdm(val_loader):
        imgs, labels = batch
        # We don't need gradient in validation.
        # Using torch.no_grad() accelerates the forward process.
        with torch.no_grad():
            logits = model(imgs.to(device))
            
        # We can still compute the loss (but not the gradient).
        loss = criterion(logits, labels.to(device))

        # Compute the accuracy for current batch.
        acc = (logits.argmax(dim=-1) == labels.to(device)).float().mean()

        # Record the loss and accuracy.
        valid_loss.append(loss.item())
        valid_accs.append(acc)
        
    # The average loss and accuracy for entire validation set is the average of the recorded values.
    valid_loss = sum(valid_loss) / len(valid_loss)
    valid_acc = sum(valid_accs) / len(valid_accs)

    # Print the information.
    print(f"[ Valid | {epoch + 1:03d}/{n_epochs:03d} ] loss = {valid_loss:.5f}, acc = {valid_acc:.5f}")
    
    
    # If this epoch produced a better model, save a checkpoint
    if valid_acc > best_acc:
        best_acc = valid_acc
        # Only the best result is saved; otherwise the file is left untouched
        # Put whatever you want to persist into the checkpoint dict
        checkpoint = {
            'model_state_dict': model.state_dict(),          # model weights
            'optimizer_state_dict': optimizer.state_dict(),  # optimizer state
            'best_acc': best_acc,                            # best validation accuracy
            'epoch': epoch                                   # current epoch, for resuming
        }
        torch.save(checkpoint, model_path)
        print(colored('saving model with acc {:.3f}'.format(best_acc), 'red'))


# %%
saveFileName = './classify-leaves/submission.csv'

# %%
## predict
model = res_model(176)

# Create the model and load weights from the checkpoint.
# The file holds a checkpoint dict, so extract 'model_state_dict'
# instead of passing the whole dict to load_state_dict.
model = model.to(device)
checkpoint = torch.load(model_path, map_location=device)
model.load_state_dict(checkpoint['model_state_dict'])

# Make sure the model is in eval mode.
# Some modules like Dropout or BatchNorm affect if the model is in training mode.
model.eval()

# Initialize a list to store the predictions.
predictions = []
# Iterate the testing set by batches.
for batch in tqdm(test_loader):
    
    imgs = batch
    with torch.no_grad():
        logits = model(imgs.to(device))
    
    # Take the class with greatest logit as prediction and record it.
    predictions.extend(logits.argmax(dim=-1).cpu().numpy().tolist())

preds = []
for i in predictions:
    preds.append(num_to_class[i])

test_data = pd.read_csv(test_path)
test_data['label'] = pd.Series(preds)
submission = pd.concat([test_data['image'], test_data['label']], axis=1)
submission.to_csv(saveFileName, index=False)
print("Done!!!!!!!!!!!!!!!!!!!!!!!!!!!")

References

https://www.kaggle.com/c/classify-leaves

https://www.kaggle.com/nekokiku/simple-resnet-baseline
