Deep Learning - Dataset Construction (Part 1)

This article is intended only as personal notes.

I. Dataset Construction

1. Dataset Construction

Reference: Pytorch迁移学习训练自己的图像分类模型【两天搞定AI毕设】_哔哩哔哩_bilibili

1.1. Loading the Image Classification Dataset

Dataset layout: the train folder contains one sub-folder per fruit class, and each sub-folder holds that class's numbered images.

This is exactly the directory layout that datasets.ImageFolder expects.

【深度学习】datasets.ImageFolder 使用方法-CSDN博客

import os
from torchvision import transforms

# Data preprocessing
# Training-set preprocessing: random resized crop, augmentation, ToTensor, normalization
train_transform = transforms.Compose([transforms.RandomResizedCrop(224),
                                      transforms.RandomHorizontalFlip(),
                                      transforms.ToTensor(),
                                      transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])

# Test-set preprocessing (RCTN): resize, center crop, ToTensor, normalization
test_transform = transforms.Compose([transforms.Resize(256),
                                     transforms.CenterCrop(224),
                                     transforms.ToTensor(),
                                     transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])

# Dataset file paths
dataset_dir = 'fruit30_split'
train_path = os.path.join(dataset_dir, 'train')
test_path = os.path.join(dataset_dir, 'val')

# Load the datasets
from torchvision import datasets

# Load the training set
train_dataset = datasets.ImageFolder(train_path, train_transform)

# Load the test set
test_dataset = datasets.ImageFolder(test_path, test_transform)

# The dict items() method returns a view object over the (key, value) pairs
dictionary.items()
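idx_to_labels, which is saved below, is not defined anywhere in this snippet; presumably it is the inverse of train_dataset.class_to_idx, built by iterating its items() view. A minimal sketch under that assumption:

# assumption: idx_to_labels maps class index -> class name, i.e. the inverse of class_to_idx
idx_to_labels = {idx: name for name, idx in train_dataset.class_to_idx.items()}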

# Save the mappings locally as .npy files

np.save('idx_to_labels.npy',idx_to_labels)
np.save('labels_to_idx.npy',train_dataset.class_to_idx)

1.2. Defining the DataLoader

Reference: Pytorch迁移学习训练自己的图像分类模型【两天搞定AI毕设】_哔哩哔哩_bilibili

Define a DataLoader to build a batch iterator over each dataset.

from torch.utils.data import DataLoader

Batch_size = 32

# DataLoader for the training set
train_loader = DataLoader(train_dataset,
                          batch_size=Batch_size,
                          shuffle=True,
                          num_workers=4)

# DataLoader for the test set
test_loader = DataLoader(test_dataset,
                         batch_size=Batch_size,
                         shuffle=False,
                         num_workers=4)



Inspect one batch of images and labels:

images,labels=next(iter(train_loader))
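As a quick sanity check, the tensor shapes should match the settings above (batch size 32, 224×224 crops):

print(images.shape)   # expected: torch.Size([32, 3, 224, 224])
print(labels.shape)   # expected: torch.Size([32])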

1.3. Training and Testing

# Read the data with datasets.ImageFolder, load it with DataLoader
from tqdm import tqdm

for epoch in tqdm(range(EPOCHS)):
    model.train()

    for images, labels in train_loader:
        images = images.to(device)
        labels = labels.to(device)
        outputs = model(images)
        loss = criterion(outputs, labels)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()





Preliminary evaluation on the test set:

model.eval()
with torch.no_grad():
    correct = 0
    total = 0
    for images, labels in tqdm(test_loader):
        images = images.to(device)
        labels = labels.to(device)
        outputs = model(images)
        _, preds = torch.max(outputs, 1)
        total += labels.size(0)
        correct += (preds == labels).sum()

    print('Accuracy on the test set: {:.3f}'.format(100 * correct / total))

1.4. Saving the Model

torch.save(model,'checkpoints/fruit30_pytorch_20220814.pth')

Reference: Pytorch模型保存与加载,并在加载的模型基础上继续训练 - 简书 (jianshu.com)

1.5. Loading the Model

Saving a model in PyTorch is very simple; there are two main ways:

  1. save only the parameters (officially recommended);
  2. save the whole model (structure + parameters).
    Because saving the whole model costs much more storage, the official recommendation is to save only the parameters and then load them into a freshly rebuilt model. Both methods are introduced below, but only the first one is worked through in detail.

1.6. References: Saving and Loading Models

Pytorch模型保存与加载,并在加载的模型基础上继续训练 - 简书 (jianshu.com)

PyTorch模型保存深入理解 - 简书 (jianshu.com)

Reference: [ pytorch ] 基本使用丨2. 训练好的模型参数的保存以及调用丨_保存训练好的模型并调用torch-CSDN博客

Pytorch:模型的保存与加载 torch.save()、torch.load()、torch.nn.Module.load_state_dict()-CSDN博客

Code:

class modelfunc(nn.Module):
    # The model defined earlier
    def __init__(self, class_num=3):
        super(modelfunc, self).__init__()
        ...
    def forward(self, x):
        ...
        return x

# Since PyTorch has no API like Keras for saving the model structure,
# the model class must be available before every load.

model_object = modelfunc(class_num=3)  # build the model structure

# Save and load the whole model
torch.save(model_object, 'model.pth')
model = torch.load('model.pth')

# Save and load only the parameters
torch.save(model_object.state_dict(), 'params.pth')
model_object.load_state_dict(torch.load('params.pth'))


1.6.1. Saving Only the Parameters
1. Saving

In general, a single statement is enough to save the parameters:

torch.save(model.state_dict(), path)

Here model is the model instance (e.g. model = vgg16()) and path is the file path for the parameters, e.g. path='./model.pth', path='./model.tar' or path='./model.pkl'; the file name must include an extension.

In particular, if you also want to record which optimizer, epoch, etc. a training run used, pack this information into a dict and save the dict:

state = {'model': model.state_dict(), 'optimizer': optimizer.state_dict(), 'epoch': epoch}
torch.save(state, path)
2. Loading

For the first case above, a single statement loads the parameters:

model.load_state_dict(torch.load(path))

For the second case, where everything was saved as a dict, load it as follows:

checkpoint = torch.load(path)
model.load_state_dict(checkpoint['model'])
optimizer.load_state_dict(checkpoint['optimizer'])
epoch = checkpoint['epoch']

Note that when only the parameters are saved, loading requires first defining a model identical in structure to the original and instantiating it (say model = Net()); only then can the load statement above be applied to that instance.

Also, to save the parameters every epoch (or every n epochs), just use a different path each time, e.g. path='./model' + str(epoch) + '.pth', so each epoch's parameters go to a separate file. Keeping only the parameters with the best accuracy works the same way: wrap the save statement in an if check, as sketched below.
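A minimal sketch of keeping only the best checkpoint; it assumes a test() function that returns the accuracy it computes (e.g. a slightly modified version of the test() in the example program below):

best_acc = 0.0
for epoch in range(epochs):
    train(model, train_load, epoch)
    acc = test(model, test_load)          # assumption: test() returns the accuracy
    if acc > best_acc:                    # keep only the best-performing parameters
        best_acc = acc
        torch.save(model.state_dict(), './model_best.pth')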

Below is a complete example program; it only keeps the most recent parameters:

# -*- coding:utf-8 -*-

'''Example script demonstrating how to save and load models in PyTorch.'''

__author__ = 'puxitong from UESTC'


import torch as torch
import torchvision as tv
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import torchvision.transforms as transforms
from torchvision.transforms import ToPILImage
import torch.backends.cudnn as cudnn
import datetime
import argparse

# Parameters
batch_size = 32
epochs = 10
WORKERS = 0   # number of DataLoader worker processes
test_flag = True  # if True, load the saved model and only run the test
ROOT = '/home/pxt/pytorch/cifar'  # CIFAR-10 dataset path
log_dir = '/home/pxt/pytorch/logs/cifar_model.pth'  # model checkpoint path

# Load the CIFAR-10 dataset
transform = tv.transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])

train_data = tv.datasets.CIFAR10(root=ROOT, train=True, download=True, transform=transform)
test_data = tv.datasets.CIFAR10(root=ROOT, train=False, download=False, transform=transform)

train_load = torch.utils.data.DataLoader(train_data, batch_size=batch_size, shuffle=True, num_workers=WORKERS)
test_load = torch.utils.data.DataLoader(test_data, batch_size=batch_size, shuffle=False, num_workers=WORKERS)


# Build the model
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 64, 3, padding=1)
        self.conv2 = nn.Conv2d(64, 128, 3, padding=1)
        self.conv3 = nn.Conv2d(128, 256, 3, padding=1)
        self.conv4 = nn.Conv2d(256, 256, 3, padding=1)
        self.pool = nn.MaxPool2d(2, 2)
        self.fc1 = nn.Linear(256 * 8 * 8, 1024)
        self.fc2 = nn.Linear(1024, 256)
        self.fc3 = nn.Linear(256, 10)
    
    
    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = self.pool(F.relu(self.conv2(x)))
        x = F.relu(self.conv3(x))
        x = self.pool(F.relu(self.conv4(x)))
        x = x.view(-1, x.size()[1] * x.size()[2] * x.size()[3])
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x


model = Net().cuda()
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01)


# Training
def train(model, train_loader, epoch):
    model.train()
    train_loss = 0
    for i, data in enumerate(train_loader, 0):
        x, y = data
        x = x.cuda()
        y = y.cuda()
        optimizer.zero_grad()
        y_hat = model(x)
        loss = criterion(y_hat, y)
        loss.backward()
        optimizer.step()
        train_loss += loss
    loss_mean = train_loss / (i+1)
    print('Train Epoch: {}\t Loss: {:.6f}'.format(epoch, loss_mean.item()))

# Testing
def test(model, test_loader):
    model.eval()
    test_loss = 0
    correct = 0
    with torch.no_grad():
        for i, data in enumerate(test_loader, 0):
            x, y = data
            x = x.cuda()
            y = y.cuda()
            optimizer.zero_grad()
            y_hat = model(x)
            test_loss += criterion(y_hat, y).item()
            pred = y_hat.max(1, keepdim=True)[1]
            correct += pred.eq(y.view_as(pred)).sum().item()
        test_loss /= (i+1)
        print('Test set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
            test_loss, correct, len(test_data), 100. * correct / len(test_data)))


def main():

    # If test_flag is True, load the saved model
    if test_flag:
        # Load the saved checkpoint and run evaluation only; skip the rest of this function
        checkpoint = torch.load(log_dir)
        model.load_state_dict(checkpoint['model'])
        optimizer.load_state_dict(checkpoint['optimizer'])
        epochs = checkpoint['epoch']
        test(model, test_load)
        return

    for epoch in range(0, epochs):
        train(model, train_load, epoch)
        test(model, test_load)
        # Save the checkpoint (model, optimizer and epoch)
        state = {'model':model.state_dict(), 'optimizer':optimizer.state_dict(), 'epoch':epoch}
        torch.save(state, log_dir)

if __name__ == '__main__':
    main()
3. Resuming Training from a Loaded Checkpoint

Training can be interrupted by problems, or you may want to watch how training evolves and adjust parameters such as the learning rate. In those cases you load the checkpoint saved before the interruption and continue training from it. Only the main() function from the example above needs to change; the modified main() is:

def main():

    # If test_flag is True, load the saved model
    if test_flag:
        # Load the saved checkpoint and run evaluation only; skip the rest of this function
        checkpoint = torch.load(log_dir)
        model.load_state_dict(checkpoint['model'])
        optimizer.load_state_dict(checkpoint['optimizer'])
        start_epoch = checkpoint['epoch']
        test(model, test_load)
        return

    # If a checkpoint exists, load it and continue training from it (requires `import os` at the top of the script)
    if os.path.exists(log_dir):
        checkpoint = torch.load(log_dir)
        model.load_state_dict(checkpoint['model'])
        optimizer.load_state_dict(checkpoint['optimizer'])
        start_epoch = checkpoint['epoch']
        print('Loaded checkpoint at epoch {}.'.format(start_epoch))
    else:
        start_epoch = 0
        print('No saved model found, training from scratch.')

    for epoch in range(start_epoch+1, epochs):
        train(model, train_load, epoch)
        test(model, test_load)
        # Save the checkpoint
        state = {'model':model.state_dict(), 'optimizer':optimizer.state_dict(), 'epoch':epoch}
        torch.save(state, log_dir)

To drive any of the above from the command line, just add argparse arguments; see my other blog posts for details.
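A minimal argparse sketch for driving the script above from the command line (the flag names here are made up for illustration):

import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--test', action='store_true', help='load the saved checkpoint and only run evaluation')
parser.add_argument('--epochs', type=int, default=10)
parser.add_argument('--checkpoint', type=str, default='./cifar_model.pth')
args = parser.parse_args()

test_flag = args.test        # replaces the hard-coded test_flag above
epochs = args.epochs
log_dir = args.checkpoint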

1.6.2. Saving the Whole Model
1. Saving

torch.save(model, path)
2. Loading

model = torch.load(path)


Source of the above: https://www.jianshu.com/p/1cd6333128a1

1.7. Saving Predictions and Ground Truth

import numpy as np
a = np.array([1, 2, 3])
b = np.array([4, 5, 6])
np.savetxt('1.txt', (a, b))
# A file named 1.txt now appears in the working directory

c = np.loadtxt('1.txt')
print(c.shape)  # (2, 3): two rows, three columns

# c[0] is array([1., 2., 3.]) (the first saved array), c[1] is array([4., 5., 6.]), and so on
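Combining this with the test loop from section 1.3, a sketch of dumping predictions and ground-truth labels to a text file (the file name is arbitrary):

all_preds, all_labels = [], []
model.eval()
with torch.no_grad():
    for images, labels in test_loader:
        outputs = model(images.to(device))
        _, preds = torch.max(outputs, 1)
        all_preds.append(preds.cpu().numpy())
        all_labels.append(labels.numpy())

# first row: predictions, second row: ground-truth labels
np.savetxt('preds_and_labels.txt', (np.concatenate(all_preds), np.concatenate(all_labels)), fmt='%d')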

2. Another Way to Load a Dataset

Subclassing Dataset: use this approach when each data file name comes paired with its label (e.g. listed in an annotation text file).

Reference: 深度学习-自定义数据集 - 知乎 (zhihu.com)

pytorch 构建自己的数据集,用来训练_pytorch如何构建数据集-CSDN博客

Pytorch学习(三)定义自己的数据集及加载训练_pytorch如何自定义数据集训练使用-CSDN博客

Overall skeleton of a Dataset subclass, with commentary:

import torch.utils.data as data

class FirstDataset(data.Dataset):  # must inherit from data.Dataset
    def __init__(self):
        # TODO
        # 1. Initialize file paths or a list of file names.
        # In this method we simply initialize the basic parameters of the class.
        pass
    def __getitem__(self, index):
        # TODO
        # 1. Read ONE sample from file (e.g. with numpy.fromfile or PIL.Image.open).
        # 2. Preprocess the data (e.g. with torchvision transforms).
        # 3. Return a data pair (e.g. image and label).
        # Note that step 1 reads a single sample, not the whole dataset.
        pass
    def __len__(self):
        # Return the total size of the dataset.
        return 0  # replace 0 with the dataset size

A customized implementation:

# *************************** Required imports ********************************
import os

import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torchvision.models as models
from PIL import Image
from torch.autograd import Variable
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms, utils

# *************************** Basic settings ********************************
# torch.cuda.set_device(gpu_id)  # select the GPU
learning_rate = 0.0001  # learning rate

# *************************** Dataset settings ********************************
root = os.getcwd() + '/data1/'  # dataset directory

# Define how a single file is read
def default_loader(path):
    return Image.open(path).convert('RGB')

class MyDataset(Dataset):
    # Custom dataset class inheriting from torch.utils.data.Dataset.
    # __init__ receives the annotation txt file and stores the basic parameters.
    def __init__(self, txt, transform=None, target_transform=None, loader=default_loader):
        super(MyDataset, self).__init__()  # initialize the parent class
        imgs = []
        fh = open(txt, 'r')  # open the annotation txt file read-only
        for line in fh:  # iterate over the lines of the txt file
            line = line.strip('\n')
            line = line.rstrip('\n')  # strip the trailing newline character
            words = line.split()  # split the line on whitespace (split's default separator)
            imgs.append((words[0], int(words[1])))
            # Store the txt contents in the imgs list; which column holds what depends on the txt file.
            # Here words[0] is the image path and words[1] is the label.
        self.imgs = imgs
        self.transform = transform
        self.target_transform = target_transform
        self.loader = loader

    # __getitem__ preprocesses one sample and returns it; this method is mandatory.
    def __getitem__(self, index):  # reads the element at the given index
        fn, label = self.imgs[index]
        # fn is the image path; fn and label come from words[0] and words[1] of that line
        img = self.loader(fn)  # read the image from its path
        if self.transform is not None:
            img = self.transform(img)  # apply the transforms (e.g. convert the image to a Tensor)
        return img, label
        # Whatever is returned here is what each batch yields during training

    # __len__ is also mandatory; it returns the dataset size (number of images),
    # which is different from the length of the DataLoader.
    def __len__(self):
        return len(self.imgs)

train_data = MyDataset(txt=root + 'train.txt', transform=transforms.ToTensor())
test_data = MyDataset(txt=root + 'text.txt', transform=transforms.ToTensor())

2.1. Reading the Dataset

import glob
from torchvision import transforms
from torch.utils.data import Dataset
from torch.utils.data import DataLoader
import numpy as np
import os
from PIL import Image

# This is my code for reading the labels:
class ImageDataset(Dataset):
    def __init__(self, root_dir, hp_dir):
        super(ImageDataset,self).__init__()
        self.root_dir = root_dir
        self.transform = transforms.Compose([
            # transforms.Resize((self.resize, self.resize)),
            transforms.ToTensor(),])   # ToTensor scales pixel values from 0-255 to 0.0-1.0 and converts the image to a tensor


        self.samples = []
        # self.class_labels = ['normal', 'B014', 'B021', 'IR007', 'IR014', 'IR021', 'Normal', 'OR007', 'OR014', 'OR021']
        # self.class_labels = ['Normal', 'Ba10mm', 'Ba15mm','Ba20mm','IR10mm', 'IR15mm', 'IR20mm', 'OR10mm', 'OR15mm', 'OR20mm']
        self.class_labels = ['Ba10mm', 'Ba15mm']
        hp_dir_path = os.path.join(root_dir, hp_dir)
        if os.path.isdir(hp_dir_path):
            for class_label in self.class_labels:
                class_dir_path = os.path.join(hp_dir_path, class_label)
                if os.path.isdir(class_dir_path):
                    image_files = glob.glob(os.path.join(class_dir_path, "*.png"))
                    self.samples += [(image_file, class_label) for image_file in image_files]  # += concatenates the lists; see the Python list docs
        print(len(self.samples))


    def __len__(self):
        return len(self.samples)


    def __getitem__(self, index):
        image_file, class_label = self.samples[index]
        image = Image.open(image_file).convert('RGB')  # convert to 3-channel RGB; otherwise the image may come back with 4 channels (RGBA)
        if self.transform:
            image = self.transform(image)
        # Convert the class label to an integer index
        class_index = self.class_labels.index(class_label)
        return image, class_index

加载数据集的时候经常用到def __getitem__(self, index):具体怎么理解它呢? - 知乎 (zhihu.com)

Pytorch中Tensor与各种图像格式的相互转换、读取和展示_python jpg图像 的tensor保存以后再读取,转换出来的tensor最后一列值变了-CSDN博客

In the code above, the index argument of __getitem__(self, index) corresponds to the number of samples in the dataset: since __len__ returns len(self.samples), valid indices run from 0 to len(self.samples) - 1.
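A sketch of what this looks like in use (the constructor arguments are placeholders):

dataset = ImageDataset(root_dir='data', hp_dir='0HP')   # hypothetical paths
print(len(dataset))               # calls __len__, i.e. len(self.samples)
image, class_index = dataset[0]   # calls __getitem__(0); valid indices are 0 .. len(dataset) - 1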

2.2. Splitting the Dataset

Function: train_test_split

# How to split plain (tabular) data into training and test sets
# sklearn's train_test_split splits a dataset into train and test parts   # https://zhuanlan.zhihu.com/p/248634166
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X.numpy(), y.numpy(), test_size=0.8)
# The inputs to train_test_split must be NumPy arrays, hence the .numpy() calls on the tensors


import torch
from torch.utils.data import random_split
dataset = range(10)
train_dataset, test_dataset = random_split(
    dataset=dataset,
    lengths=[7, 3],
    generator=torch.Generator().manual_seed(0)
)
print(list(train_dataset))
print(list(test_dataset))
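random_split works the same way on an image dataset; a sketch assuming the ImageDataset from section 2.1 has been instantiated as full_dataset:

n_total = len(full_dataset)
n_train = int(0.8 * n_total)   # 80/20 split, adjust as needed
train_set, test_set = random_split(
    full_dataset,
    lengths=[n_train, n_total - n_train],
    generator=torch.Generator().manual_seed(0))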

2.3. Loading the Dataset

Source: pytorch 构建自己的数据集,用来训练_pytorch如何构建数据集-CSDN博客

(The ImageDataset class here is identical to the one defined in section 2.1 above, so it is not repeated.)
# Build and load the data with the standard classes
import torch.utils.data as Data

myDataSet = myData()  # instantiate the custom dataset (myData is the Dataset class from the source blog; with the class above it would be ImageDataset(...))
train_loader = Data.DataLoader(dataset=myDataSet, batch_size=BATCH_SIZE, shuffle=False)

Rebuilding the dataset: pytorch 构建自己的数据集,用来训练_pytorch如何构建数据集-CSDN博客

3. Building Meta-Learning / Few-Shot Datasets

1. Subclassing the Dataset Class

References: PyTorch源码解析与实践(1):数据加载Dataset,Sampler与DataLoader - 知乎 (zhihu.com); meta-transfer-learning/pytorch/README.md at main · yaoyao-liu/meta-transfer-learning (github.com)

##+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
## Created by: Yaoyao Liu
## Modified from: https://github.com/Sha-Lab/FEAT
## Tianjin University
## liuyaoyao@tju.edu.cn
## Copyright (c) 2019
##
## This source code is licensed under the MIT-style license found in the
## LICENSE file in the root directory of this source tree
##+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
""" Dataloader for all datasets. """
import os.path as osp
import os
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms
import numpy as np

#  The code below is simply a Dataset subclass used to build the dataset

class DatasetLoader(Dataset):
    """The class to load the dataset"""
    def __init__(self, setname, args, train_aug=False):
        # Set the path according to train, val and test        
        if setname=='train':
            THE_PATH = osp.join(args.dataset_dir, 'train')
            label_list = os.listdir(THE_PATH)
        elif setname=='test':
            THE_PATH = osp.join(args.dataset_dir, 'test')
            label_list = os.listdir(THE_PATH)
        elif setname=='val':
            THE_PATH = osp.join(args.dataset_dir, 'val')
            label_list = os.listdir(THE_PATH)
        else:
            raise ValueError('Wrong setname.') 

        # Generate empty list for data and label           
        data = []
        label = []

        # Get the class folders (os.path.join concatenates path components)
        folders = [osp.join(THE_PATH, the_label) for the_label in label_list if os.path.isdir(osp.join(THE_PATH, the_label))]

        # Get the image paths and labels
        for idx, this_folder in enumerate(folders):
            this_folder_images = os.listdir(this_folder)
            for image_path in this_folder_images:
                data.append(osp.join(this_folder, image_path))
                label.append(idx)

        # Set data, label and class number to be accessible from outside
        self.data = data
        self.label = label
        self.num_class = len(set(label))

        # Transformation
        if train_aug:
            image_size = 80
            self.transform = transforms.Compose([
                transforms.Resize(92),
                transforms.RandomResizedCrop(88),
                transforms.CenterCrop(image_size),
                transforms.RandomHorizontalFlip(),
                transforms.ToTensor(),
                transforms.Normalize(np.array([x / 255.0 for x in [125.3, 123.0, 113.9]]), 
                                     np.array([x / 255.0 for x in [63.0, 62.1, 66.7]]))])
        else:
            image_size = 80
            self.transform = transforms.Compose([
                transforms.Resize(92),
                transforms.CenterCrop(image_size),
                transforms.ToTensor(),
                transforms.Normalize(np.array([x / 255.0 for x in [125.3, 123.0, 113.9]]),
                                     np.array([x / 255.0 for x in [63.0, 62.1, 66.7]]))])


    def __len__(self):
        return len(self.data)

    def __getitem__(self, i):
        path, label = self.data[i], self.label[i]
        image = self.transform(Image.open(path).convert('RGB'))
        return image, label

Pytorch中Tensor与各种图像格式的相互转换、读取和展示_python jpg图像 的tensor保存以后再读取,转换出来的tensor最后一列值变了-CSDN博客

1.1. torch.utils.data.Dataset / DataLoader

1.2. DataLoader

Reference: pytorch Dataloader Sampler参数深入理解_batch_sampler-CSDN博客

 def __init__(self, dataset, batch_size=1, shuffle=False, sampler=None,
                 batch_sampler=None, num_workers=0, collate_fn=None,
                 pin_memory=False, drop_last=False, timeout=0,
                 worker_init_fn=None, multiprocessing_context=None):

1. RandomSampler

PyTorch学习笔记:data.RandomSampler——数据随机采样_pytorch 随机采样_视觉萌新、的博客-CSDN博客

from torch.utils.data import RandomSampler

sampler = RandomSampler(range(20))
print([i for i in sampler])

# Example output (a random permutation; varies from run to run)
[7, 17, 8, 1, 13, 9, 6, 4, 12, 18, 19, 14, 10, 3, 2, 16, 5, 15, 0, 11]

2. batch_sampler

Reference: pytorch Dataloader Sampler参数深入理解_batch_sampler-CSDN博客

Below are two snippets taken from DataLoader.__init__ that help clarify how several commonly used parameters relate to each other.

# data.RandomSampler: random sampling, yields indices into the dataset

	if sampler is None:  # give default samplers
	    if self._dataset_kind == _DatasetKind.Iterable:
	        # See NOTE [ Custom Samplers and IterableDataset ]
	        sampler = _InfiniteConstantSampler()
	    else:  # map-style
	        if shuffle:  # RandomSampler: random sampling, yields dataset indices
	            sampler = RandomSampler(dataset)
	        else:
	            sampler = SequentialSampler(dataset)

So when a sampler is passed in explicitly, the shuffle argument no longer plays a role; how to define a sampler is covered later.
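For example, passing a RandomSampler explicitly reproduces shuffle=True (recent PyTorch versions actually raise an error if both a sampler and shuffle=True are given, so shuffle is left at its default here); train_dataset is the dataset from section 1.1:

from torch.utils.data import DataLoader, RandomSampler

loader = DataLoader(train_dataset,
                    batch_size=32,
                    sampler=RandomSampler(train_dataset))   # equivalent to shuffle=True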

Another snippet from the initialization code:

    if batch_size is not None and batch_sampler is None:
        # auto_collation without custom batch_sampler
        batch_sampler = BatchSampler(sampler, batch_size, drop_last)
        
    self.sampler = sampler
    self.batch_sampler = batch_sampler

Now look at how BatchSampler generates batches: yield returns one batch of batch_size indices at a time, and if the remaining indices cannot fill a whole batch, the leftover ones are returned as a final, smaller batch (unless drop_last is set).



# class initialization omitted
    def __iter__(self):
        batch = []
        for idx in self.sampler:
            batch.append(idx)
            if len(batch) == self.batch_size:
                yield batch
                batch = []
        if len(batch) > 0 and not self.drop_last:  # handle the leftover indices that do not fill a batch
            yield batch

yield: python中yield的用法详解——最简单,最清晰的解释_python yield-CSDN博客

Finally, note how each batch is produced by this iterator.
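A quick way to see this is to run a BatchSampler directly over a toy index range:

from torch.utils.data import BatchSampler, SequentialSampler

print(list(BatchSampler(SequentialSampler(range(10)), batch_size=3, drop_last=False)))
# [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]]   the last, incomplete batch is kept
print(list(BatchSampler(SequentialSampler(range(10)), batch_size=3, drop_last=True)))
# [[0, 1, 2], [3, 4, 5], [6, 7, 8]]        drop_last=True discards it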

4. Another Way to Build a Meta-Dataset

Reference: PyTorch源码解析与实践(1):数据加载Dataset,Sampler与DataLoader - 知乎 (zhihu.com)

[Pytorch] Sampler, DataLoader和数据batch的形成_batchsampler_一步徐龙的浪的博客-CSDN博客

Briefly: in PyTorch the Sampler decides the order in which samples are read, while the DataLoader loads the data and arranges it according to the order the Sampler provides. The process is described step by step below.

When constructing a DataLoader you must pass the Dataset (data plus labels). The Sampler is optional; if none is given, either a sequential sampler or a random sampler is used, depending on whether shuffle is set.

Step 1: the Sampler builds an iterable list of indices [0, n-1] from the dataset size n.

Step 2: BatchSampler splits the index sequence from the Sampler into groups of batch_size (the DataLoader's batch_size parameter); drop_last decides whether the last, possibly incomplete group is kept.

Step 3: the two paths merge again: when iterating over the DataLoader, the indices of one batch produced by the BatchSampler are used to look up the corresponding samples and labels in the Dataset, yielding one batch of data.
Source: CSDN blogger 一步徐龙的浪, original post: https://blog.csdn.net/longroad1216/article/details/114328618

4.1. BatchSampler

DataLoader initialization (data.RandomSampler: random sampling, yields dataset indices):

if sampler is None:  # give default samplers
	    if self._dataset_kind == _DatasetKind.Iterable:
	        # See NOTE [ Custom Samplers and IterableDataset ]
	        sampler = _InfiniteConstantSampler()
	    else:  # map-style
	        if shuffle:  # RandomSampler: random sampling, yields dataset indices
	            sampler = RandomSampler(dataset)
 
	        else:
	            sampler = SequentialSampler(dataset)   # SequentialSampler: sequential sampling, yields dataset indices in order

__init__ stores the parameters:

def __init__(self, sampler, batch_size, drop_last):
        # ...
        self.sampler = sampler
        self.batch_size = batch_size
        self.drop_last = drop_last

__iter__ loops over the index sequence produced by the sampler; once batch_size indices have been collected it yields the batch, then clears it and keeps collecting. (This differs from the sampler used by the MAML meta-learning model.)

def __iter__(self):
        batch = []
        for idx in self.sampler:
            batch.append(idx)
            if len(batch) == self.batch_size:
                # yield the batch; on the next iteration the batch list is cleared and sampling continues
                yield batch
                batch = []
        # if drop_last is not set, return the final, incomplete batch
        if len(batch) > 0 and not self.drop_last:
            yield batch

__len__ returns the number of batches: with drop_last the sampler length is floor-divided by batch_size, otherwise the result is rounded up (hence the + batch_size - 1 term).

def __len__(self):
        if self.drop_last:
            return len(self.sampler) // self.batch_size
        else:
            return (len(self.sampler) + self.batch_size - 1) // self.batch_size

A custom batch sampler for meta-learning (episodic batches):

class CategoriesSampler():
    """The class to generate episodic data"""
    def __init__(self, label, n_batch, n_cls, n_per):
        self.n_batch = n_batch
        self.n_cls = n_cls
        self.n_per = n_per

        label = np.array(label)    # convert the label list to an array
        self.m_ind = []
        for i in range(max(label) + 1):
            ind = np.argwhere(label == i).reshape(-1)  # indices of the samples whose label equals i (label must be an array)
            ind = torch.from_numpy(ind)    # convert to a Tensor
            self.m_ind.append(ind)

    def __len__(self):
        return self.n_batch

    def __iter__(self):
        for i_batch in range(self.n_batch):
            batch = []
            classes = torch.randperm(len(self.m_ind))[:self.n_cls]   # randomly permute the classes and keep n_cls of them
            for c in classes:
                l = self.m_ind[c]
                pos = torch.randperm(len(l))[:self.n_per]   # randomly pick n_per samples of this class
                batch.append(l[pos])
            batch = torch.stack(batch).t().reshape(-1)
            # torch.stack(batch) stacks the per-class index tensors along a new dimension (position 0 by default);
            # the transpose and reshape then flatten them into one batch of way x shot indices
            yield batch

Here a for loop runs n_batch times, i.e. once per episode; by contrast, the default BatchSampler produces (number of samples + batch_size - 1) // batch_size batches.

# Read samples from the train loader with tqdm
            tqdm_gen = tqdm.tqdm(self.train_loader)   # data loader wrapped in a progress bar
            for i, batch in enumerate(tqdm_gen, 1):
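A sketch of how such a sampler is typically plugged into a DataLoader via the batch_sampler argument (the numeric values are illustrative; in the original repositories they come from the command-line args):

from torch.utils.data import DataLoader

trainset = DatasetLoader('train', args)              # the Dataset subclass from section 3
train_sampler = CategoriesSampler(trainset.label,
                                  n_batch=100,       # number of episodes per epoch
                                  n_cls=5,           # N-way
                                  n_per=1 + 15)      # K-shot support + query samples per class
train_loader = DataLoader(dataset=trainset,
                          batch_sampler=train_sampler,
                          num_workers=4,
                          pin_memory=True)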

4.2. DataLoader

Important parameters:

dataset (Dataset): the input data, of type Dataset (data plus labels)

batch_size (int): same as in BatchSampler

shuffle (bool): whether to shuffle the data order

sampler (Sampler): same as in BatchSampler

batch_sampler (BatchSampler)

drop_last (bool): same as in BatchSampler

Important functions:

4.3. _DataLoaderIter

__init__ initializes the iterator and sets sample_iter from the batch_sampler:

def __init__(self, loader):
        self.dataset = loader.dataset
        self.collate_fn = loader.collate_fn
        self.batch_sampler = loader.batch_sampler
        self.num_workers = loader.num_workers
        self.pin_memory = loader.pin_memory and torch.cuda.is_available()
        self.timeout = loader.timeout
        self.done_event = threading.Event()
 
        self.sample_iter = iter(self.batch_sampler)
        # ...

_get_batch reads the data and adds a check for worker connection timeout.

5. Another Approach to Meta-Learning Dataset Construction and Training

5.1. Another Way to Build the Dataset

Code: dragen1860/MAML-Pytorch: Elegant PyTorch implementation of paper Model-Agnostic Meta-Learning (MAML) (github.com)

Image.open(x).convert('RGB')-CSDN博客

If .convert('RGB') is not used, the image read from disk may be RGBA, i.e. four channels where A is the transparency channel. That channel is not needed for training the model here, so convert('RGB') is applied to drop it.
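A quick check of the effect (the file path is a placeholder):

from PIL import Image

img = Image.open('some_image.png')     # e.g. a PNG with an alpha channel
print(img.mode)                        # may print 'RGBA' (4 channels)
print(img.convert('RGB').mode)         # 'RGB' (3 channels), the alpha channel is dropped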

import os
import torch
from torch.utils.data import Dataset
from torchvision.transforms import transforms
import numpy as np
import collections
from PIL import Image
import csv
import random

# MiniImagenet: a few-shot learning dataset, e.g. 5-way 5-shot


class MiniImagenet(Dataset):
    """
    put mini-imagenet files as :
    root :
    |- images/*.jpg includes all images
        |- train.csv
        |- test.csv
        |- val.csv
    NOTICE: meta-learning is different from general supervised learning, especially the concept of batch and set.
    # batch: contains several sets
    # sets: contains n_way * k_shot for the meta-train set, n_way * k_query for the meta-test set.
    """

    def __init__(self, root, mode, batchsz, n_way, k_shot, k_query, resize, startidx=0):
        """

        :param root: root path of mini-imagenet
        :param mode: train, val or test
        :param batchsz: batch size of sets, not batch of imgs
        :param n_way:
        :param k_shot:
        :param k_query: num of query imgs per class
        :param resize: resize to
        :param startidx: start to index label from startidx
        """

        self.batchsz = batchsz  # batch of set, not batch of imgs
        self.n_way = n_way  # n-way
        self.k_shot = k_shot  # k-shot
        self.k_query = k_query  # for evaluation
        self.setsz = self.n_way * self.k_shot  # num of samples per set
        self.querysz = self.n_way * self.k_query  # number of samples per set for evaluation
        self.resize = resize  # resize to
        self.startidx = startidx  # index label not from 0, but from startidx
        print('shuffle DB :%s, b:%d, %d-way, %d-shot, %d-query, resize:%d' % (
        mode, batchsz, n_way, k_shot, k_query, resize))

        if mode == 'train':
            self.transform = transforms.Compose([lambda x: Image.open(x).convert('RGB'),
                                                 transforms.Resize((self.resize, self.resize)),
                                                 # transforms.RandomHorizontalFlip(),
                                                 # transforms.RandomRotation(5),
                                                 transforms.ToTensor(),
                                                 transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225))
                                                 ])
        else:
            self.transform = transforms.Compose([lambda x: Image.open(x).convert('RGB'),
                                                 transforms.Resize((self.resize, self.resize)),
                                                 transforms.ToTensor(),
                                                 transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225))
                                                 ])

        self.path = os.path.join(root, 'images')  # image path
        csvdata = self.loadCSV(os.path.join(root, mode + '.csv'))  # csv path
        self.data = []
        self.img2label = {}
        for i, (k, v) in enumerate(csvdata.items()):
            self.data.append(v)  # [[img1, img2, ...], [img111, ...]]  # image file names; very similar in spirit to the previous meta-learning dataset construction
            self.img2label[k] = i + self.startidx  # {"img_name[:9]":label}
        self.cls_num = len(self.data)  # number of classes

        self.create_batch(self.batchsz)

    def loadCSV(self, csvf):
        """
        return a dict saving the information of csv
        # :param splitFile: csv file name
        # :return: {label:[file1, file2 ...]}
        """
        dictLabels = {}
        with open(csvf) as csvfile:
            csvreader = csv.reader(csvfile, delimiter=',')
            next(csvreader, None)  # skip (filename, label)
            for i, row in enumerate(csvreader):
                filename = row[0]
                label = row[1]
                # append filename to current label
                if label in dictLabels.keys():
                    dictLabels[label].append(filename)  # dict of key/value pairs: append to the existing label's list
                else:
                    dictLabels[label] = [filename]  # create a new key for this label
        return dictLabels

    def create_batch(self, batchsz):
        """
        create batch for meta-learning.
        # 'episode' here means batch, i.e. how many sets we want to retain.
        # :param episodes: batch size
        :return:
        """
        self.support_x_batch = []  # support set batch
        self.query_x_batch = []  # query set batch
        for b in range(batchsz):  # for each batch
            # 1.select n_way classes randomly
            selected_cls = np.random.choice(self.cls_num, self.n_way, False)  # no duplicate
            np.random.shuffle(selected_cls)
            support_x = []
            query_x = []
            for cls in selected_cls:
                # 2. select k_shot + k_query for each class
                selected_imgs_idx = np.random.choice(len(self.data[cls]), self.k_shot + self.k_query, False)
                np.random.shuffle(selected_imgs_idx)
                indexDtrain = np.array(selected_imgs_idx[:self.k_shot])  # idx for Dtrain
                indexDtest = np.array(selected_imgs_idx[self.k_shot:])  # idx for Dtest
                support_x.append(
                    np.array(self.data[cls])[indexDtrain].tolist())  # get all images filename for current Dtrain
                query_x.append(np.array(self.data[cls])[indexDtest].tolist())

            # shuffle the corresponding relation between support set and query set
            random.shuffle(support_x)
            random.shuffle(query_x)

            self.support_x_batch.append(support_x)  # append set to current sets
            self.query_x_batch.append(query_x)  # append sets to current sets

    def __getitem__(self, index):
        """
        index means index of sets, 0<= index <= batchsz-1
        # :param index:
        # :return:
        """
        # [setsz, 3, resize, resize]
        support_x = torch.FloatTensor(self.setsz, 3, self.resize, self.resize)
        # [setsz]
        support_y = np.zeros((self.setsz), dtype=int)  # np.int is deprecated in recent NumPy
        # [querysz, 3, resize, resize]
        query_x = torch.FloatTensor(self.querysz, 3, self.resize, self.resize)
        # [querysz]
        query_y = np.zeros((self.querysz), dtype=int)

        flatten_support_x = [os.path.join(self.path, item)
                             for sublist in self.support_x_batch[index] for item in sublist]
        support_y = np.array(
            [self.img2label[item[:9]]  # filename:n0153282900000005.jpg, the first 9 characters treated as label
             for sublist in self.support_x_batch[index] for item in sublist]).astype(np.int32)

        flatten_query_x = [os.path.join(self.path, item)
                           for sublist in self.query_x_batch[index] for item in sublist]
        query_y = np.array([self.img2label[item[:9]]
                            for sublist in self.query_x_batch[index] for item in sublist]).astype(np.int32)

        # print('global:', support_y, query_y)
        # support_y: [setsz]
        # query_y: [querysz]
        # unique: [n-way], sorted
        unique = np.unique(support_y)
        random.shuffle(unique)  # np.unique returned the sorted unique labels; shuffle them in place
        # relative means the label ranges from 0 to n-way
        support_y_relative = np.zeros(self.setsz)
        query_y_relative = np.zeros(self.querysz)
        for idx, l in enumerate(unique):
            support_y_relative[support_y == l] = idx  # relabel: map the global label to an index in 0 .. n_way-1
            query_y_relative[query_y == l] = idx   # relabel the query set the same way

        # print('relative:', support_y_relative, query_y_relative)

        for i, path in enumerate(flatten_support_x):
            support_x[i] = self.transform(path)

        for i, path in enumerate(flatten_query_x):
            query_x[i] = self.transform(path)
        # print(support_set_y)
        # return support_x, torch.LongTensor(support_y), query_x, torch.LongTensor(query_y)

        return support_x, torch.LongTensor(support_y_relative), query_x, torch.LongTensor(query_y_relative)

    def __len__(self):
        # as we have built up to batchsz of sets, you can sample some small batch size of sets.
        return self.batchsz


if __name__ == '__main__':
    # the following episode is to view one set of images via tensorboard.
    from torchvision.utils import make_grid
    from matplotlib import pyplot as plt
    from tensorboardX import SummaryWriter
    import time

    plt.ion()

    tb = SummaryWriter('runs', 'mini-imagenet')
    mini = MiniImagenet('../mini-imagenet/', mode='train', n_way=5, k_shot=1, k_query=1, batchsz=1000, resize=168)

    for i, set_ in enumerate(mini):
        # support_x: [k_shot*n_way, 3, 84, 84]
        support_x, support_y, query_x, query_y = set_

        support_x = make_grid(support_x, nrow=2)
        query_x = make_grid(query_x, nrow=2)

        plt.figure(1)
        plt.imshow(support_x.transpose(2, 0).numpy())
        plt.pause(0.5)
        plt.figure(2)
        plt.imshow(query_x.transpose(2, 0).numpy())
        plt.pause(0.5)

        tb.add_image('support_x', support_x)
        tb.add_image('query_x', query_x)

        time.sleep(5)

    tb.close()

In the code above, the index argument of __getitem__(self, index) indexes episodes (sets) rather than single images: since __len__ returns self.batchsz, valid indices run from 0 to batchsz - 1.

The Mini-ImageNet label CSV files map image file names to class labels. The dataset directory structure is:

├── mini-imagenet: dataset root directory
     ├── images: all images are stored in this folder
     ├── train.csv: label file for the training split
     ├── val.csv: label file for the validation split
     └── test.csv: label file for the test split

Mini-ImageNet also ships with the three files train.csv, val.csv and test.csv. Note that the dataset was created for few-shot learning, so the label files are not per-class samples of the same label set. Analysing each label file with pandas gives:

train.csv contains 38,400 images across 64 classes.
val.csv contains 9,600 images across 16 classes.
test.csv contains 12,000 images across 20 classes.
The images and classes in the three CSV files are mutually disjoint, i.e. 60,000 images and 100 classes in total.

Read with pandas, the CSV data looks as follows; each row gives an image file name and its class:

                filename      label
0  n0153282900000005.jpg  n01532829
1  n0153282900000006.jpg  n01532829
2  n0153282900000007.jpg  n01532829
3  n0153282900000010.jpg  n01532829
4  n0153282900000014.jpg  n01532829
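A sketch of the pandas analysis mentioned above (the path assumes the mini-imagenet layout shown earlier):

import pandas as pd

for split in ['train', 'val', 'test']:
    df = pd.read_csv(f'mini-imagenet/{split}.csv')
    print(split, len(df), 'images,', df['label'].nunique(), 'classes')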

5.2. The Method in 5.1 Is the Data Pipeline Actually Used by MAML Meta-Learning
