黄金时代 —— Pytorch学习记录(一)

Tensor

Tensor操作

  • 100多个函数
  • randn, add_, view, numel

桥接 NumPy

  • Torch张量和NumPy数组将共享它们的底层内存位置,因此当一个改变时,另外也会改变:b = a.numpy(); a = torch.from_numpy(b)

Cuda张量

import numpy as np
import torch, os
os.environ["CUDA_VISIBLE_DEVICES"] ="1"
if __name__ == '__main__':
    x = torch.randn(5,3)
    if torch.cuda.is_available():
        device = torch.device('cuda')
        y = torch.ones_like(x, device=device)
        x = x.to(device)
        z = x + y
        print(z)
        print(z.to('cpu', torch.double))

Autograd:自动求导

张量

  • 核心类:torch.Tensor
  • 属性:.requires_grad: True/False backward() BP过程 .grad 这个张量的所有梯度将会自动累加于此
  • detach() 将张量与计算历史分离,并阻止它未来的计算记录被跟踪。
  • PS:为了防止跟踪历史记录(和使用内存),可以将代码块包装在 with torch.no_grad(): 中。在评估模型时特别有用,因为模型可能具有 requires_grad = True 的可训练的参数,但是我们不需要在此过程中对他们进行梯度计算。
  • 关键类:Function:指的是在计算图中某个节点(node)所进行的运算,比如加减乘除卷积等等之类的,Function 内部有 forward() 和 backward() 两个方法,分别应用于正向、反向传播.
    a = torch.tensor(2.0, requires_grad=True)
    b = a.exp()
    print(b)
    # tensor(7.3891, grad_fn=<ExpBackward>)
    
  • 每个张量都有一个 .grad_fn 属性,该属性引用了创建 Tensor 自身的Function(除非这个张量是用户手动创建的,即这个张量的 grad_fn 是 None )
  • Pytorch中的计算图(acyclic graph):Tensor + Function

梯度

x = torch.ones(2, 2, requires_grad=True)
y = x + 2
z = y * y * 3
out = z.mean()
out.backward()
print(x.grad)
>>> tensor([[4.5000, 4.5000],
        [4.5000, 4.5000]])

就可以得到 o = 1 4 ∑ i z i , z i = 3 ( x i + 2 ) 2 o=\frac{1}{4} \sum_{i} z_{i},z_{i}=3\left(x_{i}+2\right)^{2} o=41izi,zi=3(xi+2)2因此, ∂ o ∂ x i = 3 2 ( x i + 2 ) , ∂ o ∂ x i ∣ x i = 1 = 9 2 = 4.5 \frac{\partial o}{\partial x_{i}}=\frac{3}{2}\left(x_{i}+2\right), \left.\frac{\partial o}{\partial x_{i}}\right|_{x_{i}=1}=\frac{9}{2}=4.5 xio=23(xi+2),xioxi=1=29=4.5

  • torch.autograd 是计算雅可比向量积的一个“引擎”:
    • y ⃗ = f ( x ⃗ ) \vec{y}=f(\vec{x}) y =f(x )
    • J = ( ∂ y 1 ∂ x 1 ⋯ ∂ y m ∂ x 1 ⋮ ⋱ ⋮ ∂ y 1 ∂ x n ⋯ ∂ y m ∂ x n ) J=\left(\begin{array}{ccc}\frac{\partial y_{1}}{\partial x_{1}} & \cdots & \frac{\partial y_{m}}{\partial x_{1}} \\ \vdots & \ddots & \vdots \\ \frac{\partial y_{1}}{\partial x_{n}} & \cdots & \frac{\partial y_{m}}{\partial x_{n}}\end{array}\right) J=x1y1xny1x1ymxnym
    • l = g ( y ⃗ ) l=g(\vec{y}) l=g(y )
    • 雅可比向量积应该是 l l l x ⃗ \vec{x} x 的导数:
      J T ⋅ v = ( ∂ y 1 ∂ x 1 ⋯ ∂ y m ∂ x 1 ⋮ ⋱ ⋮ ∂ y 1 ∂ x n ⋯ ∂ y m ∂ x n ) ( ∂ l ∂ y 1 ⋮ ∂ l ∂ y m ) = ( ∂ l ∂ x 1 ⋮ ∂ l ∂ x n ) J^{T} \cdot v=\left(\begin{array}{ccc}\frac{\partial y_{1}}{\partial x_{1}} & \cdots & \frac{\partial y_{m}}{\partial x_{1}} \\ \vdots & \ddots & \vdots \\ \frac{\partial y_{1}}{\partial x_{n}} & \cdots & \frac{\partial y_{m}}{\partial x_{n}}\end{array}\right)\left(\begin{array}{c}\frac{\partial l}{\partial y_{1}} \\ \vdots \\ \frac{\partial l}{\partial y_{m}}\end{array}\right)=\left(\begin{array}{c}\frac{\partial l}{\partial x_{1}} \\ \vdots \\ \frac{\partial l}{\partial x_{n}}\end{array}\right) JTv=x1y1xny1x1ymxnymy1lyml=x1lxnl
  • 当out也是矩阵的时候,就不能直接out.backward(),会报错!需要将out变成标量,再求导!
  • 所以有:
    x = torch.ones(2,requires_grad=True)
    z = x + 2
    z.backward(torch.ones_like(z)) # grad_tensors需要与输入tensor大小一致
    print(x.grad)
    
  • 上面的backward函数里面,即参数grad_tensors,就是对z进行点乘,其实就是取了个sum!
  • grad_tensors的作用其实可以简单地理解成在求梯度时的权重,因为可能不同值的梯度对结果影响程度不同,所以pytorch弄了个这种接口,而没有固定为全是1 简单来说,就是加权和
  • 其他参数:
    • retain_graph: True 则保留计算图, False则释放计算图。这个在有多个loss需要反传的时候会有用,因为backward/autograd.grad()会默认retain_graph=False以提高效率。所以反传非最后一个loss之前都要设置retain_graph=True,否则会报错!
    In[3]: import torch
      ...: from torch.autograd import Variable
      ...: x = torch.randn((1,4),dtype=torch.float32,requires_grad=True)
      ...: y = x ** 2
      ...: z = y * 4
      ...: output1 = z.mean()
      ...: output2 = z.sum()
      ...: output1.backward(retain_graph=True)   # 这里参数表明保留backward后的中间参数。
      ...: output2.backward()
    
    • create_graph: 若要计算高阶导数,则必须选为True:通过设置 create_graph=True 来计算二阶导数。本质意义是 在正向建立原计算图的时候的同时,建立一个其一阶导数的计算图.
    • allow_unused: 允许输入变量不进入计算
  • autograd.grad类似:
    from torch import autograd
    x = torch.rand(3, 4, requires_grad=True)
    y = x*2
    grads = autograd.grad(outputs=y, inputs=x, grad_outputs = torch.ones(3,4))
    

定义网络

  • torch.nn 构建网络
  • 依赖autograd计算梯度,依赖继承nn.Module
  • 自定义网络中包含各层的定义,加上一个forward()方法 BP过程隐含在Module中!
  • Pytorch中的网络训练流程:
1 定义包含一些可学习参数(或者叫权重)的神经网络
2 在输入数据集上迭代
3 通过网络处理输入
4 计算损失(输出和正确答案的距离)
5 将梯度反向传播给网络的参数
6 更新网络的权重,一般使用一个简单的规则(梯度下降):
	weight = weight - learning_rate * gradient

关于nn和nn.Module模块

  • torch.nn 和 torch.nn.functional 的区别:一个是类封装,一个是函数接口,前者定义的模块名首字母大写,后者都是小写
  • 至于为什么保留两个如此相似的模块?如果我们只保留nn.functional下的函数的话,在训练或者使用时,我们就要手动去维护weight, bias, stride这些中间量的值,这显然是给用户带来了不便。而如果我们只保留nn下的类的话,其实就牺牲了一部分灵活性,因为做一些简单的计算都需要创造一个类,这也与PyTorch的风格不符。
  • 具体区别:1 两者调用方法不同,前者是类,要实例化,主要参数是网络的尺寸之类 2 类封装的可以和nn.Sequential结合使用,而functional不能 3 类不需要自己定义管理weight???,而函数接口要!每次调用的时候都需要手动传入weight, 不利于代码复用。PS:看起来weight更加关注于尺寸问题:
  • 类接口维护层,函数接口维护层参数param
    class CNN(nn.Module):
    
        def __init__(self):
            super(CNN, self).__init__()
            
            self.cnn1 = nn.Conv2d(in_channels=1,  out_channels=16, kernel_size=5,padding=0)
            self.relu1 = nn.ReLU()
            self.maxpool1 = nn.MaxPool2d(kernel_size=2)
            
            self.cnn2 = nn.Conv2d(in_channels=16, out_channels=32, kernel_size=5,  padding=0)
            self.relu2 = nn.ReLU()
            self.maxpool2 = nn.MaxPool2d(kernel_size=2)
            
            self.linear1 = nn.Linear(4 * 4 * 32, 10)
            
        def forward(self, x):
            x = x.view(x.size(0), -1)
            out = self.maxpool1(self.relu1(self.cnn1(x)))
            out = self.maxpool2(self.relu2(self.cnn2(out)))
            out = self.linear1(out.view(x.size(0), -1))
            return out
    
    class CNN(nn.Module):
        
        def __init__(self):
            super(CNN, self).__init__()
            
            self.cnn1_weight = nn.Parameter(torch.rand(16, 1, 5, 5))
            self.bias1_weight = nn.Parameter(torch.rand(16))
            
            self.cnn2_weight = nn.Parameter(torch.rand(32, 16, 5, 5))
            self.bias2_weight = nn.Parameter(torch.rand(32))
            
            self.linear1_weight = nn.Parameter(torch.rand(4 * 4 * 32, 10))
            self.bias3_weight = nn.Parameter(torch.rand(10))
            
        def forward(self, x):
            x = x.view(x.size(0), -1)
            out = F.conv2d(x, self.cnn1_weight, self.bias1_weight)
            out = F.relu(out)
            out = F.max_pool2d(out)
            
            out = F.conv2d(x, self.cnn2_weight, self.bias2_weight)
            out = F.relu(out)
            out = F.max_pool2d(out)
            
            out = F.linear(x, self.linear1_weight, self.bias3_weight)
            return out
    
  • PyTorch官方推荐:具有学习参数的(例如,conv2d, linear, batch_norm)采用nn.Xxx方式,没有学习参数的(例如,maxpool, loss func, activation func)等根据个人选择使用nn.functional.xxx或者nn.Xxx方式。
  • 关于dropout,个人强烈推荐使用nn.Xxx方式,因为一般情况下只有训练阶段才进行dropout,在eval阶段都不会进行dropout。使用nn.Xxx方式定义dropout,在调用model.eval()之后,model中所有的dropout layer都关闭,但以nn.function.dropout方式定义dropout,在调用model.eval()之后并不能关闭dropout。
  • 类接口更纯粹 函数接口更灵活 OVER

网络

import numpy as np
import torch, os
import torch.nn as nn
import torch.nn.functional as F
os.environ["CUDA_VISIBLE_DEVICES"] ="1"

class Net(nn.Module):

    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 6, 5)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)
    
    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), (2,2))
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        x = x.view(-1, self.num_flat_features(x)) # x.view(x.size(0), -1)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

    def num_flat_features(self, x):
        size = x.size()[1:]
        nums_f = 1
        for s in size:
            nums_f *= s
        return nums_f

if __name__ == '__main__':
    net = Net()
    print(net)
  • 一个模型的可学习参数可以通过net.parameters()返回
    params = list(net.parameters())
    print(len(params)) # 10 卷积层每个通道一个偏置 全连接层一个节点一个偏置
    print(params[0].size()) # torch.Size([6, 1, 5, 5])
    
  • 这个网络由于FC存在,所以对图像尺寸有限制,32x32x1:padding=0(默认值),推导如下 ((32 - 5 + 1) / 2 - 5 + 1) / 2 = 5 >> 全连接层的尺寸!

BP过程

  • 记得清零所有参数的梯度缓存!
    net.zero_grad()
    out.backward(torch.randn(1, 10))
    
  • torch.nn包只支持小批量样本的输入,不支持单个样本
    • nn.Conv2d 接受一个4维的张量,即nSamples x nChannels x Height x Width
    • 单个样本,通过input.unsqueeze(0)添加一维!

损失函数

input = torch.randn(1, 1, 32, 32)
output = net(input)
target = torch.randn(1, 10)
criterion = nn.MSELoss()
loss = criterion(output, target)
print(loss) # tensor(1.0263, grad_fn=<MseLossBackward>)
  • 使用.grad_fn属性反向跟踪,得到计算图如下:

input -> conv2d -> relu -> maxpool2d -> conv2d -> relu -> maxpool2d
-> view -> linear -> relu -> linear -> relu -> linear
-> MSELoss
-> loss

  • loss.grad_fn.next_functions[0][0]

反向传播

net.zero_grad()
print(net.conv1.bias.grad)
loss.backward()
print(net.conv1.bias.grad)

更新权重(GD)

  • 直接用python更新网络的参数
lr = 0.01
for f in net.parameters():
    f.data.sub_(f.grad.data * lr)
  • torch.optim
import torch.optim as optim
opti = optim.SGD(net.parameters(), lr=0.01)
opti.zero_grad()
output = net(input)
loss = criterion(output, target)
loss.backward()
opti.step()

案例CIFAR10

准备数据

  • 一般思路:将数据(图像/文本/音频/视频)加载到numpy中,再转化为torch.Tensor()
  • 图片 Pillow / OpenCV
  • 音频 scipy/librosa
  • 文本 NLTK/SpaCy
  • 尤其针对视觉方面 torchvision,包含imagenet/CIFAR10/MNIST等数据集的加载(Data Loaders) + 数据增强内容;torchvision.datasetstorch.utils.data.DataLoader

训练分类器

  • 流程:1 torchvision加载CIFAR10数据集+数据标准化 2 定义卷积神经网络 3 定义损失函数 4 训练数据训练 5 测试数据测试
import numpy as np
import torch, torchvision, os
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torchvision.transforms as transforms
# os.environ["CUDA_VISIBLE_DEVICES"] = "1"
toTrain = True

class Net(nn.Module):

    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3,6,5)
        self.pool = nn.MaxPool2d(2,2)
        self.conv2 = nn.Conv2d(6,16,5)
        self.fc1 = nn.Linear(16*5*5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)
    
    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16*5*5)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x
    
if __name__ == '__main__':
    # torch.cuda.set_device(1)
    transform = transforms.Compose(
        [transforms.ToTensor(), 
         transforms.Normalize(mean=(0.5, 0.5, 0.5),std=(0.5, 0.5, 0.5))])
    test_set = torchvision.datasets.CIFAR10(root='./Dataset', train=False, 
        download=True, transform=transform)
    test_loader = torch.utils.data.DataLoader(test_set, batch_size=32, 
        shuffle=False, num_workers=2)
    classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 
        'ship', 'truck')

    if toTrain:
        train_set = torchvision.datasets.CIFAR10(root='./Dataset',
            train=True, download=True, transform=transform)
        train_loader = torch.utils.data.DataLoader(train_set, batch_size=32, 
            shuffle=True, num_workers=2)
        # Network
        device = torch.device("cuda:1" if torch.cuda.is_available() else "cpu")
        net = Net()
        net.to(device)
        criterion = nn.CrossEntropyLoss()
        optimizer = optim.SGD(net.parameters(), lr=1e-3, momentum=0.9)
        # Train
        for epoch in range(20):
            for i, data in enumerate(train_loader, 0):
                inputs, labels = data
                inputs, labels = inputs.to(device), labels.to(device)
                optimizer.zero_grad()
                outputs = net(inputs)
                loss = criterion(outputs, labels)
                loss.backward()
                optimizer.step()
                print('Iter: %d; Loss: %0.3f' % (i, loss.item()))
        print('Finished Training!')
        # 保存整个网络
        torch.save(net.state_dict(), 'test.pth') 

    net = Net()
    net.load_state_dict(torch.load('test.pth'))
    class_correct = list(0. for i in range(10))
    class_total = list(0. for i in range(10))
    with torch.no_grad():
        for data in test_loader:
            images, labels = data
            outputs = net(images)
            _, predicted = torch.max(outputs, 1)
            c = (predicted == labels).squeeze()
            for i in range(4):
                label = labels[i]
                class_correct[label] += c[i].item()
                class_total[label] += 1
    for i in range(10):
        print('Accuracy of %5s : %2d %%' % (
            classes[i], 100 * class_correct[i] / class_total[i]))
Accuracy of plane : 75 %
Accuracy of   car : 73 %
Accuracy of  bird : 49 %
Accuracy of   cat : 31 %
Accuracy of  deer : 48 %
Accuracy of   dog : 63 %
Accuracy of  frog : 69 %
Accuracy of horse : 75 %
Accuracy of  ship : 68 %
Accuracy of truck : 83 %

数据并行(多GPU)

  • 要把它赋值给一个新的张量并在GPU上使用这个张量
  • model = nn.DataParallel(model)

序列数据集

  • 注意__getitem__方法,是为了实现下标访问!
class RandomDataset(Dataset):

    def __init__(self, size, length):
        self.len = length
        self.data = torch.randn(length, size)

    def __getitem__(self, index):
        return self.data[index]

    def __len__(self):
        return self.len

rand_loader = DataLoader(dataset=RandomDataset(input_size, data_size),
	batch_size=batch_size, shuffle=True)

测试

device = torch.device("cuda: 0" if torch.cuda.is_available() else "cpu")
model = Model(input_size, output_size)
if torch.cuda.device_count() > 1: 
  print("Let's use", torch.cuda.device_count(), "GPUs!")
  # dim = 0 [30, xxx] -> [10, ...], [10, ...], [10, ...] on 3 GPUs
  model = nn.DataParallel(model)
model.to(device)

# Test
for data in rand_loader: 
    input = data.to(device)
    output = model(input)
    print("Outside: input size", input.size(),
          "output_size", output.size())
>>> 
Let's use 2 GPUs!
    In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])
    In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
    In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])
    In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
    In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])
    In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
    In Model: input size torch.Size([5, 5]) output size torch.Size([5, 2])
    In Model: input size torch.Size([5, 5]) output size torch.Size([5, 2])
Outside: input size torch.Size([10, 5]) output_size torch.Size([10, 2])

数据加载/处理 教程 (面部轮廓数据集)

  • scikit-image:用于图像io和变换
  • pandas:为了更方便地处理csv文件

数据集类

  • torch.utils.data.Dataset 是表示数据集的抽象类
  • 自定义数据集均需继承于Dataset!
  • __len____getitem__:前者使得len(dataset)可返回数据集的大小;后者使得dataset[i]可以用来获取第i个样本(支持索引) !
  • 本章创建一个数据集类!__init__中读取csv文件,在__getitem__中读取图像!按需读取!
  • 将数据集中的样本定义为:{'image': image, 'landmarks': landmarks}
  • 定义一个数据增强方法 transform
class FaceLandmarkDataset(Dataset):
    '''Face Landmarks Dataset'''
    
    def __init__(self, csv_file, root_dir, transform=None):
        """
        Args:
            csv_file (string): Path to the csv file with annotations.
            root_dir (string): Directory with all the images.
            transform (callable, optional): Optional transform to be applied
                on a sample.
        """
        self.landmarks_frame = pd.read_csv(csv_file)
        self.root_dir = root_dir
        self.transform = transform

    def __len__(self):
        return len(self.landmarks_frame)
    
    def __getitem__(self, idx):
        if torch.is_tensor(idx):
            idx = idx.tolist()
        img_name = os.path.join(self.root_dir, 
                                self.landmarks_frame.iloc[idx, 0])
        image = io.imread(img_name)
        landmarks = self.landmarks_frame.iloc[idx, 1:]
        landmarks = np.array([landmarks])
        landmarks = landmarks.astype('float').reshape(-1, 2)
        sample = {'image': image, 'landmarks': landmarks}
        if self.transform:
            sample = self.transform(sample)
        return sample

转换(数据增强)

  • 以三个转换为例:
    • Rescale:图像缩放
    • RandomCrop:从图像中随机裁剪。这是数据扩充。
    • ToTensor:将numpy格式的图片转换为torch格式的图片(我们需要换轴)
  • 前置知识:
    • __init__()__call__()的区别如下:
      • __init__()的作用是初始化某个类的一个实例。
      • __call__()的作用是使实例能够像函数一样被调用,同时不影响实例本身的生命周期(__call__()不影响一个实例的构造和析构)。但是__call__()可以用来改变实例的内部成员的值
class RandomCrop(object):
    """Crop randomly the image in a sample.
        Args:
            output_size (tuple or int): Desired output size. If int, square crop
                is made.
    """

    def __init__(self, output_size):
        assert isinstance(output_size, (int, tuple))
        if isinstance(output_size, int):
            self.output_size = (output_size, output_size)
        else:
            assert len(output_size) == 2
            self.output_size = output_size

    def __call__(self, sample):
        image, landmarks = sample['image'], sample['landmarks']
        h, w = image.shape[:2]
        new_h, new_w = self.output_size
        top = np.random.randint(0, h - new_h)
        left = np.random.randint(0, w - new_w)
        image = image[top: top + new_h,
                      left: left + new_w]
        landmarks = landmarks - [left, top]
        return {'image': image, 'landmarks': landmarks}

class ToTensor(object):
    """Convert ndarrays in sample to Tensors."""

    def __call__(self, sample):
        image, landmarks = sample['image'], sample['landmarks']
        # swap color axis because
        # numpy image: H x W x C
        # torch image: C X H X W
        image = image.transpose((2, 0, 1))
        return {'image': torch.from_numpy(image),
                'landmarks': torch.from_numpy(landmarks)}

组合转换

transform=transforms.Compose([Rescale(256), 
RandomCrop(224), ToTensor()])

遍历

for i in range(len(transformed_dataset)):
    sample = transformed_dataset[i]
    print(i, sample['image'].size(), sample['landmarks'].size())
    if i == 3:
        break
  • 上述单纯的遍历,会损失:
批量处理数据
整理数据
使用multiprocessing并行加载数据。
  • 使用
class torch.utils.data.DataLoader(
    dataset, # torch.utils.data.Dataset
    batch_size=1, # batch_size
    shuffle=False, # shuffle dataset in each epoch
    sampler=None, # 和shffle互斥 采样函数
    batch_sampler=None, # 和batch_size以及shuffle互斥 批采样函数
    num_workers=0, # 导入数据的子进程数 0表示使用主进程 
    collate_fn=<function default_collate>, # 对由dataset[i]构成的batch_size长度的list进行封装 以便后续迭代过程中每个batch的训练
    pin_memory=False, # 如果为True,则数据加载器将张量复制到CUDA固定的内存中,然后返回它们
    drop_last=False, # 分batch后剩下的数据是否抛弃
    timeout=0, # 数据读取的超时时间
    worker_init_fn=None) # 每个子进程的id初始化函数
  • 参数:
    • shuffle:设置为True的时候,每个epoch都会打乱数据集,
    • collate_fn:如何取样本的,我们可以定义自己的函数来准确地实现想要的功能
    • drop_last:告诉如何处理数据集长度除于batch_size余下的数据。True就抛弃,否则保留
    • sampler:
      • RandomSampler == shuffle is True;
      • SequentialSampler == shuffle is False
from __future__ import print_function, division # py3的print和/操作
import os
import torch
import pandas as pd
import numpy as np
from skimage import io, transform
import matplotlib.pyplot as plt
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms, utils
import warnings

# Helper function to show a batch
def show_landmarks_batch(sample_batched):
    """Show image with landmarks for a batch of samples."""
    images_batch, landmarks_batch = \
            sample_batched['image'], sample_batched['landmarks']
    batch_size = len(images_batch)
    im_size = images_batch.size(2)
    grid_border_size = 2

    grid = utils.make_grid(images_batch)
    plt.imshow(grid.numpy().transpose((1, 2, 0)))

    for i in range(batch_size):
        plt.scatter(landmarks_batch[i, :, 0].numpy() + i * im_size + (i + 1) * grid_border_size,
                    landmarks_batch[i, :, 1].numpy() + grid_border_size,
                    s=10, marker='.', c='r')

        plt.title('Batch from dataloader')

if __name__ == '__main__':
    warnings.filterwarnings("ignore")
    plt.ion() # interactive mode
    landmarks_frame = pd.read_csv('Dataset/faces/face_landmarks.csv')
    n = 65
    img_name = landmarks_frame.iloc[n, 0]
    landmarks = landmarks_frame.iloc[n, 1:].to_numpy()
    landmarks = landmarks.astype('float').reshape(-1, 2)
    print('Image name: {}'.format(img_name))
    print('Landmarks shape: {}'.format(landmarks.shape))
    print('First 4 Landmarks: {}'.format(landmarks[:4]))

    transformed_dataset = FaceLandmarkDataset(csv_file='Dataset/faces/face_landmarks.csv',
                                           root_dir='Dataset/faces/',
                                           transform=transforms.Compose([
                                               Rescale(256),
                                               RandomCrop(224),
                                               ToTensor()
                                           ]))
    dataloader = DataLoader(transformed_dataset, batch_size=4,
                            shuffle=True, num_workers=4)
    for i_batch, sample_batched in enumerate(dataloader):
        print(i_batch, sample_batched['image'].size(), 
                    sample_batched['landmarks'].size())

更方便的数据处理torchvision

import torch
from torchvision import transforms, datasets
data_transform = transform.Compose([
    transforms.RandomSizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485,0.456,0.406],
                         std=[0.229,0.224,0.225]),
])
transformed_data = datasets.ImageFolder(root='Dataset/train',
    transform=data_transform)
dataset_loader = torch.utils.data.DataLoader(transformed_data, batch_size=4, 
    shuffle=True, num_workers=4)

从Python到Pytorch

Tensor

  • 简单的三层全连接层使用Python写!> Numpy
# -*- coding: utf-8 -*-
import numpy as np
batch_size, D_in, H_dim, D_out = 64, 1000, 100, 10
x = np.random.randn(batch_size, D_in)
y = np.random.randn(batch_size, D_out)
w1 = np.random.randn(D_in, H_dim)
w2 = np.random.randn(H_dim, D_out)
lr = 1e-6
for t in range(500):
    h = x.dot(w1)
    h_relu = np.maximum(h, 0)
    y_pred = h_relu.dot(w2)
    loss = np.square(y_pred - y).sum()
    # loss = pow(y_pred - y, 2).sum()
    print(t, loss)
    # BP
    grad_y_pred = 2.0 * (y_pred - y)
    grad_w2 = h_relu.T.dot(grad_y_pred)
    grad_h_relu = grad_y_pred.dot(w2.T)
    grad_h = grad_h_relu.copy()
    grad_h[h < 0] = 0
    grad_w1 = x.T.dot(grad_h)
    # Update
    w1 -= lr * grad_w1
    w2 -= lr * grad_w2
  • Numpy > Torch,GPU加速
# -*- coding: utf-8 -*-

import torch


dtype = torch.float
# device = torch.device("cpu")
device = torch.device("cuda:0") # Uncomment this to run on GPU

# N是批尺寸大小; D_in 是输入维度;
# H 是隐藏层维度; D_out 是输出维度
N, D_in, H, D_out = 64, 1000, 100, 10

# 产生随机输入和输出数据
x = torch.randn(N, D_in, device=device, dtype=dtype)
y = torch.randn(N, D_out, device=device, dtype=dtype)

# 随机初始化权重
w1 = torch.randn(D_in, H, device=device, dtype=dtype)
w2 = torch.randn(H, D_out, device=device, dtype=dtype)

learning_rate = 1e-6
for t in range(500):
    # 前向传播:计算预测值y
    h = x.mm(w1)
    h_relu = h.clamp(min=0)
    y_pred = h_relu.mm(w2)

    # 计算并输出loss
    loss = (y_pred - y).pow(2).sum().item()
    print(t, loss)

    # 反向传播,计算w1、w2对loss的梯度
    grad_y_pred = 2.0 * (y_pred - y)
    grad_w2 = h_relu.t().mm(grad_y_pred)
    grad_h_relu = grad_y_pred.mm(w2.t())
    grad_h = grad_h_relu.clone()
    grad_h[h < 0] = 0
    grad_w1 = x.t().mm(grad_h)

    # 使用梯度下降更新权重
    w1 -= learning_rate * grad_w1
    w2 -= learning_rate * grad_w2
  • PS: T>t(); dot()>mm(); copy()>clone()

自动求导

  • model.eval() will notify all your layers that you are in eval mode, that way, batchnorm or dropout layers will work in eval mode instead of training mode.
  • torch.no_grad() impacts the autograd engine and deactivate it. It will reduce memory usage and speed up computations but you won’t be able to backprop (which you don’t want in an eval script).

autograd

# -*- coding: utf-8 -*-
import torch

dtype = torch.float
# device = torch.device("cpu")
device = torch.device("cuda:0") # Uncomment this to run on GPU

# N是批尺寸大小; D_in 是输入维度;
# H 是隐藏层维度; D_out 是输出维度
N, D_in, H, D_out = 64, 1000, 100, 10

# 产生随机输入和输出数据
x = torch.randn(N, D_in, device=device, dtype=dtype)
y = torch.randn(N, D_out, device=device, dtype=dtype)

# 随机初始化权重
w1 = torch.randn(D_in, H, device=device, dtype=dtype, requires_grad=True)
w2 = torch.randn(H, D_out, device=device, dtype=dtype, requires_grad=True)

learning_rate = 1e-6
for t in range(500):
    # 前向传播:计算预测值y
    y_pred = x.mm(w1).clamp(min=0).mm(w2)

    # 计算并输出loss
    loss = (y_pred - y).pow(2).sum()
    print(t, loss.item()) # 打印标量

    # 反向传播,计算w1、w2对loss的梯度
    loss.backward()

    # 使用梯度下降更新权重
    # 只想对w1和w2的值进行原地改变;不想为更新阶段构建计算图,
    with torch.no_grad():
        w1 -= learning_rate * w1.grad
        w2 -= learning_rate * w2.grad
        # 反向传播之后手动将梯度置零
        w1.grad.zero_()
        w2.grad.zero_()

自定义求导

  • 在底层,每一个原始的自动求导运算实际上是两个在Tensor上运行的函数。其中,forward函数计算从输入Tensors获得的输出Tensors。而backward函数接收输出Tensors对于某个标量值的梯度,并且计算输入Tensors相对于该相同标量值的梯度。
  • 所以自定义就要求,继承实现这两个方法 torch.autograd.Function
# -*- coding: utf-8 -*-
import torch

class MyReLU(torch.autograd.Function): # !!!
    """
    我们可以通过建立torch.autograd的子类来实现我们自定义的autograd函数,并完成张量的正向和反向传播。
    """

    @staticmethod
    def forward(ctx, input):
        """
        在前向传播中,我们收到包含输入和返回的张量包含输出的张量。 
        ctx是可以使用的上下文对象存储信息以进行向后计算。 
        您可以使用ctx.save_for_backward方法缓存任意对象,以便反向传播使用。
        """
        ctx.save_for_backward(input)
        return input.clamp(min=0)

    @staticmethod
    def backward(ctx, grad_output):
        """
        在反向传播中,我们接收到上下文对象和一个张量,其包含了相对于正向传播过程中产生的输出的损失的梯度。
        我们可以从上下文对象中检索缓存的数据,并且必须计算并返回与正向传播的输入相关的损失的梯度。
        """
        input, = ctx.saved_tensors
        grad_input = grad_output.clone()
        grad_input[input < 0] = 0
        return grad_input

dtype = torch.float
device = torch.device("cpu")
# device = torch.device("cuda:0") # Uncomment this to run on GPU

# N是批尺寸大小; D_in 是输入维度;
# H 是隐藏层维度; D_out 是输出维度
N, D_in, H, D_out = 64, 1000, 100, 10

# 产生输入和输出的随机张量
x = torch.randn(N, D_in, device=device, dtype=dtype)
y = torch.randn(N, D_out, device=device, dtype=dtype)

# 产生随机权重的张量
w1 = torch.randn(D_in, H, device=device, dtype=dtype, requires_grad=True)
w2 = torch.randn(H, D_out, device=device, dtype=dtype, requires_grad=True)

learning_rate = 1e-6
for t in range(500):
    # 为了使用我们的方法,我们调用Function.apply方法。 我们将其命名为“ relu”。
    relu = MyReLU.apply # !!!

    # 正向传播:使用张量上的操作来计算输出值y;
    # 我们使用自定义的自动求导操作来计算 RELU.
    y_pred = relu(x.mm(w1)).mm(w2)

    # 计算并输出loss
    loss = (y_pred - y).pow(2).sum()
    if t % 100 == 99:
        print(t, loss.item())

    # 使用autograd计算反向传播过程。
    loss.backward()

    # 用梯度下降更新权重
    with torch.no_grad():
        w1 -= learning_rate * w1.grad
        w2 -= learning_rate * w2.grad

        # 在反向传播之后手动清零梯度
        w1.grad.zero_()
        w2.grad.zero_()

静态图和动态图

  • TensorFlow的计算图是静态的,而PyTorch使用 动态计算图。
  • 在PyTorch中,每个前向传递都定义一个新的计算图。而Tensorflow只定义一次然后就一遍遍执行!
  • 静态图的好处在于您可以预先优化图。

nn模块

  • 计算图和autograd是定义复杂运算符并自动采用导数的非常强大的范例。但是对于大型神经网络,原始的autograd+每层参数显示定义!可能会有点太低了。
  • 所以需要封装,nn模块为此而生!
# -*- coding: utf-8 -*-
import torch

# N是批尺寸大小; D_in 是输入维度;
# H 是隐藏层维度; D_out 是输出维度
N, D_in, H, D_out = 64, 1000, 100, 10

# 产生输入和输出的随机张量
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)

# 产生随机权重的张量
model = torch.nn.Sequential(
    torch.nn.Linear(D_in, H),
    torch.nn.ReLU(),
    torch.nn.Linear(H, D_out),
)
# nn包还包含常用的损失函数的定义
# 在这种情况下,我们将使用平均平方误差(MSE)作为我们的损失函数
loss_fn = torch.nn.MSELoss(reduction='sum')

learning_rate = 1e-4 # 这里按原来的1e-6就下降到贼慢!
for t in range(500):

    y_pred = model(x)

    # 计算并输出loss
    loss = loss_fn(y_pred, y)
    if t % 100 == 99:
        print(t, loss.item())

    model.zero_grad()
    loss.backward()

    # 用梯度下降更新权重
    with torch.no_grad():
        for param in model.parameters():
            param -= learning_rate * param.grad

optim模块

  • optim抽象了优化算法的思想,并提供了常用优化算法的实现
# -*- coding: utf-8 -*-
import torch

# N是批大小;D是输入维度
# H是隐藏层维度;D_out是输出维度
N, D_in, H, D_out = 64, 1000, 100, 10

# 产生随机输入和输出张量
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)

# 使用nn包定义模型和损失函数
model = torch.nn.Sequential(
    torch.nn.Linear(D_in, H),
    torch.nn.ReLU(),
    torch.nn.Linear(H, D_out),
)
loss_fn = torch.nn.MSELoss(reduction='sum')

learning_rate = 1e-4
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
for t in range(500):
    # 前向传播:通过像模型输入x计算预测的y
    y_pred = model(x)

    # 计算并输出loss
    loss = loss_fn(y_pred, y)
    if t % 100 == 99:
        print(t, loss.item())

    optimizer.zero_grad() # = model.zero_grad()

    # 反向传播:根据模型的参数计算loss的梯度
    loss.backward()

    # 调用Optimizer的step函数使它所有参数更新
    optimizer.step()

自定义nn

  • 两层FC网络
class TwoLayerNet(torch.nn.Module):
    def __init__(self, D_in, H, D_out):
        """
        在构造函数中,我们实例化了两个nn.Linear模块,并将它们作为成员变量。
        """
        super(TwoLayerNet, self).__init__()
        self.linear1 = torch.nn.Linear(D_in, H)
        self.linear2 = torch.nn.Linear(H, D_out)

    def forward(self, x):
        """
        在前向传播的函数中,我们接收一个输入的张量,也必须返回一个输出张量。
        我们可以使用构造函数中定义的模块以及张量上的任意的(可微分的)操作。
        """
        h_relu = self.linear1(x).clamp(min=0)
        y_pred = self.linear2(h_relu)
        return y_pred

model = TwoLayerNet(D_in, H, D_out)

控制流+权重共享

  • 一个完全连接的ReLU网络,该网络在每个前向传递中选择1到4之间的随机数作为隐藏层的层数,多次重复使用相同的权重计算最里面的隐藏层。
import torch, random

class DynamicNet(torch.nn.Module):
    def __init__(self, D_in, H, D_out):
        """
        在构造函数中,我们构造了三个nn.Linear实例,它们将在前向传播时被使用。
        """
        super(DynamicNet, self).__init__()
        self.input_linear = torch.nn.Linear(D_in, H)
        self.middle_linear = torch.nn.Linear(H, H)
        self.output_linear = torch.nn.Linear(H, D_out)

    def forward(self, x):
        """
        对于模型的前向传播,我们随机选择0、1、2、3,并重用了多次计算隐藏层的middle_linear模块。
        由于每个前向传播构建一个动态计算图,
        我们可以在定义模型的前向传播时使用常规Python控制流运算符,如循环或条件语句。
        在这里,我们还看到,在定义计算图形时多次重用同一个模块是完全安全的。
        这是Lua Torch的一大改进,因为Lua Torch中每个模块只能使用一次。
        """
        h_relu = self.input_linear(x).clamp(min=0)
        for _ in range(random.randint(0, 3)):
            h_relu = self.middle_linear(h_relu).clamp(min=0)
        y_pred = self.output_linear(h_relu)
        return y_pred
        
model = DynamicNet(D_in, H, D_out)

迁移学习

  • 两个常见使用方法:① 当作特征提取器 ② fine-tuning
  • 选了imagenet的一个小子集,只有蚂蚁和蜜蜂!
  • ImageFold要求数据如下组织:
    root/dog/xxx.png
    root/dog/xxy.png
    root/dog/xxz.png
    root/cat/123.png
    root/cat/nsdf3.png
    root/cat/asd932_.png
    

加载数据集

  • 使用 torchvision 和 torch.utils.data 包来加载数据
from __future__ import print_function, division
import torch, time, os, copy
import numpy as np
import torch.nn as nn
import torchvision as tv
import torch.optim as optim
import matplotlib.pyplot as plt
from torch import optim
from torchvision import datasets, models, transforms

def imshow(inp, title=None):
    """Imshow for Tensor."""
    inp = inp.numpy().transpose((1, 2, 0))
    mean = np.array([0.485, 0.456, 0.406])
    std = np.array([0.229, 0.224, 0.225])
    inp = std * inp + mean
    inp = np.clip(inp, 0, 1)
    plt.imshow(inp)
    if title is not None:
        plt.title(title)
    plt.savefig('test.png')
    plt.pause(0.001)  # pause a bit so that plots are updated

if __name__ == '__main__':
    plt.ion()
    # Augumentation & Normalization for training
    # Normalization for Validation
    data_transforms = {
        'train': transforms.Compose([
            transforms.RandomResizedCrop(224),
            transforms.RandomHorizontalFlip(),
            transforms.ToTensor(),
            transforms.Normalize([0.485,0.456,0.406], 
                [0.229, 0.224, 0.225])
        ]),
        'val': transforms.Compose([
            transforms.Resize(256),
            transforms.CenterCrop(224),
            transforms.ToTensor(),
            transforms.Normalize([0.485,0.456,0.406], 
                [0.229, 0.224, 0.225])
        ])
    }
    data_dir = 'Dataset/hymenoptera_data/'
    img_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x),
        data_transforms[x]) for x in ['train', 'val']}
    dataloaders = {x: torch.utils.data.DataLoader(img_datasets[x], batch_size=4,
        shuffle=True, num_workers=4) for x in ['train', 'val']}
    dataset_size = {x: len(img_datasets[x]) for x in ['train', 'val']}
    cls_names = img_datasets['train'].classes
    device = torch.device('cuda:1' if torch.cuda.is_available() else 'cpu')

    # Get a batch of training data
    inputs, classes = next(iter(dataloaders['train']))
    # Make a grid from batch
    out = tv.utils.make_grid(inputs) # 将若干幅图像拼成一幅图像
    imshow(out, title=[cls_names[x] for x in classes])

模型预测的可视化

  • 显示少量预测图像的通用函数

训练模型

  • 调整学习率 // 保存最好的模型
    model_ft = models.resnet18(pretrained=True)
    num_ftrs = model_ft.fc.in_features
    model_ft.fc = nn.Linear(num_ftrs, 2)
    model_ft = model_ft.to(device)
    criterion = nn.CrossEntropyLoss()
    optim_ft = optim.SGD(model_ft.parameters(), lr=1e-3, momentum=0.9)
    # Decay LR by 0.1 every 7 epochs
    exp_lr_scheduler = lr_scheduler.StepLR(optim_ft, step_size=7, gamma=0.1)
    # Train
    model_ft = train_model(dataloaders, model_ft, criterion, optim_ft, 
        exp_lr_scheduler, epochs=25)

学习率调整总结

  • 直接修改optimizer中的lr参数;
    if epoch % 5 == 0:
        for p in optimizer.param_groups:
            p['lr'] *= 0.9
    lr_list.append(optimizer.state_dict()['param_groups'][0]['lr'])
    

在这里插入图片描述

  • 利用lr_scheduler()提供的几种衰减函数
  • LambdaLR:torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda, last_epoch=-1)
lambda1 = lambda epoch:np.sin(epoch) / epoch
scheduler = lr_scheduler.LambdaLR(optimizer,lr_lambda = lambda1)
for epoch in range(100):
    scheduler.step()
    lr_list.append(optimizer.state_dict()['param_groups'][0]['lr'])

在这里插入图片描述

  • StepLR:torch.optim.lr_scheduler.StepLR(optimizer, step_size, gamma=0.1, last_epoch=-1)
scheduler = lr_scheduler.StepLR(optimizer,step_size=5,gamma = 0.8)
for epoch in range(100):
    scheduler.step()
    lr_list.append(optimizer.state_dict()['param_groups'][0]['lr'])
plt.plot(range(100),lr_list,color = 'r')

在这里插入图片描述

  • MultiStepLR:torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones, gamma=0.1, last_epoch=-1)
optimizer = Adam(model.parameters(),lr = LR)
scheduler = lr_scheduler.MultiStepLR(optimizer,milestones=[20,80],gamma = 0.9)
for epoch in range(100):
    scheduler.step()
    lr_list.append(optimizer.state_dict()['param_groups'][0]['lr'])

在这里插入图片描述

  • ExponentialLR:torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma, last_epoch=-1)
lr_list = []
model = net()
LR = 0.01
optimizer = Adam(model.parameters(),lr = LR)
scheduler = lr_scheduler.ExponentialLR(optimizer, gamma=0.9)
for epoch in range(100):
    scheduler.step()
    lr_list.append(optimizer.state_dict()['param_groups'][0]['lr'])
plt.plot(range(100),lr_list,color = 'r')

在这里插入图片描述

  • CosineAnnealingLR:torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max, eta_min=0, last_epoch=-1)
    • T_max 对应1/2个cos周期所对应的epoch数值
lr_list = []
model = net()
LR = 0.01
optimizer = Adam(model.parameters(),lr = LR)
scheduler = lr_scheduler.CosineAnnealingLR(optimizer, T_max = 20)
for epoch in range(100):
    scheduler.step()
    lr_list.append(optimizer.state_dict()['param_groups'][0]['lr'])
plt.plot(range(100),lr_list,color = 'r')

在这里插入图片描述

  • ReduceLROnPlateau:torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=0.1, patience=10, verbose=False, threshold=0.0001, threshold_mode='rel', cooldown=0, min_lr=0, eps=1e-08)
    • 在发现loss不再降低或者acc不再提高之后,降低学习率
    • mode:'min’模式检测metric是否不再减小,'max’模式检测metric是否不再增大;
    • factor: 触发条件后lr*=factor;
    • patience:不再减小(或增大)的累计次数;
    • verbose:触发条件后print;
    • threshold:只关注超过阈值的显著变化;
    • threshold_mode:有rel和abs两种阈值计算模式,rel规则:max模式下如果超过best(1+threshold)为显著,min模式下如果低于best(1-threshold)为显著;abs规则:max模式下如果超过best+threshold为显著,min模式下如果低于best-threshold为显著;
    • cooldown:触发一次条件后,等待一定epoch再进行检测,避免lr下降过速;
    • min_lr:最小的允许lr;
    • eps:如果新旧lr之间的差异小与1e-8,则忽略此次更新。

将CNN固定为特征提取器

  • 在这里,我们需要冻结除最后一层之外的所有网络。我们需要设置requires_grad == False来冻结参数,以便在backward()中不会计算梯度
model_conv = models.resnet18(pretrained=True)
for param in model_conv.parameters():
    param.requires_grad = False
num_ftrs = model_conv.fc.in_features
model_conv.fc = nn.Linear(num_ftrs, 2)
model_conv = model_conv.to(device)
criterion = nn.CrossEntropyLoss()
optim_conv = optim.SGD(model_conv.fc.parameters(), lr=1e-3, momentum=0.9)
exp_lr_scheduler = lr_scheduler.StepLR(optim_conv, step_size=7, gamma=0.1)
# Train
model_conv = train_model(dataloaders, model_conv, criterion, optim_conv, 
    exp_lr_scheduler, epochs=25)

可视化Tensorboard

参考链接

可视化 Visdom

参考链接

Save/Load模型

  • 保存/加载state_dict(推荐)
torch.save(net.state_dict(), 'test.pth') 
net = Net()
net.load_state_dict(torch.load('test.pth'))
  • 整个模型的保存和加载(试了一个会报错…不知道什么情况)

state_dict ?

  • 在PyTorch中,模型的可学习参数(即权重和偏差)
  • torch.nn.Module 包含在模型的参数中 (通过访问model.parameters())
  • state_dict是一个简单的Python字典对象,每个层映射到其参数张量:只有具有可学习参数的层(卷积层,线性层等)和已注册的缓冲区(batchnorm的running_mean,不带学习参数的层!)才在模型的state_dict中具有条目 model.state_dict()
  • torch.optim也有一个state_dict,其中包含有关优化器状态以及所用超参数的信息
  • state_dict 对象是Python词典,因此可以轻松地保存,更新,更改和还原它们,从而为PyTorch模型和优化器增加了很多模块化 optimizer.state_dict()

更多SL

  • 参考链接
  • 保存和加载用于推理和/或继续训练的常规检查点
  • 将多个模型保存在一个文件中
  • 使用来自不同模型的参数进行热启动模型
  • 跨设备保存和加载模型(GPU/CPU)
  • 保存torch.nn.DataParallel模型

模块的作用 (nn)

MNIST数据安装

  • pathlib来处理文件路径的相关操作(Python3标准库)
  • request来下载数据集
  • pickle是一个用来把数据序列化为python特定格式的库
  • gzip用于解压
  • PS:torch的数据类型转换,① 在Tensor后加long(), int(), double(),float(),byte()等函数就能将Tensor进行类型转换 ② data.type(torch.FloatTensor) ③ 还有data.type_as(data2)data.cuda() / data.cpu()data.numpy() / torch.from_numpy(data)
  • PS:torch和numpy的乘法:
    • torch:* broadcast + elementwise / torch.mul = * / torch.mm 矩阵乘法 / torch.matmul torch.mm的broadcast版本
    • numpy:multply() 元素相乘;dot()、matmul()、@ 矩阵乘法;*是特别的:在数组操作中,作为元素相乘;在矩阵操作中作为矩阵相乘
  • PS:CrossEntropyLoss = Softmax - log - NLLLoss;参考 - 链接
    • NLLLoss的结果就是把上面的输出与Label对应的那个值拿出来,再去掉负号,再求均值
  • 流程:
    • 从全部数据中选择一小批数据(大小为bs)
    • 使用模型进行预测
    • 计算当前预测的损失值
    • 使用loss.backward()更新模型中的梯度,在这个例子中,更新的是weights和bias

Python写的Softmax回归

from pathlib import Path
import numpy as np
from matplotlib import pyplot
import requests
import pickle, gzip
import torch, math
from IPython.core.debugger import set_trace

def log_softmax(x):
    return x - x.exp().sum(-1).log().unsqueeze(-1)

def model(xb, weights, bias):
    return log_softmax(xb @ weights + bias)

# Negative log-likelihood
def nll(input, target):
    return -input[range(target.shape[0]), target].mean() # 取input相关元素的均值

def accuracy(out, yb):
    preds = torch.argmax(out, dim=1)
    return (preds == yb).float().mean()

if __name__ == '__main__':
    # 下载
    DATA_PATH = Path('Dataset')
    PATH = DATA_PATH / 'minist'
    FILENAME = "mnist.pkl.gz"

    # PATH.mkdir(parents=True, exist_ok=True)
    # URL = "http://deeplearning.net/data/mnist/"
    # if not (PATH / FILENAME).exists():
    #     content = requests.get(URL + FILENAME).content
    #     (PATH / FILENAME).open('wb').write(content)

    with gzip.open((PATH / FILENAME).as_posix(), 'rb') as f:
        ((x_train, y_train), (x_valid, y_valid), _) = pickle.load(f, encoding='latin-1')
    pyplot.imshow(x_train[0].reshape((28,28)), cmap='gray')
    print(x_train.shape)
    
    x_train, y_train, x_valid, y_valid = map(torch.tensor, (x_train, y_train, x_valid, y_valid))
    n, c = x_train.shape
    # Initialization
    weights = torch.randn(784, 10) / math.sqrt(784)
    weights.requires_grad_()
    bias = torch.zeros(10, requires_grad=True)
    # loss
    loss_func = nll
    # Train
    lr = 0.5
    epochs = 2
    bs = 64
    batches = (n - 1) // bs + 1
    for epoch in range(epochs):
        for i in range(batches):
            # set_trace()
            start_i = i * bs
            end_i = start_i + bs
            xb = x_train[start_i:end_i]
            yb = y_train[start_i:end_i]
            pred = model(xb, weights, bias)
            print('{} / {}: {}'.format(i, batches, accuracy(pred, yb)))
            loss = loss_func(pred, yb)
            loss.backward()
            with torch.no_grad():
                weights -= weights.grad * lr
                bias -= bias.grad * lr
                weights.grad.zero_()
                bias.grad.zero_()

加上各种模块进行重构

  • nn.functional > nn.Module > nn.Linear > optim > Dataset > DataLoader
  • validation
  • 编写fit()get_data()函数
  • nn.Linear
  • CNN
  • nn.Sequential()
  • DataLoader
  • GPU
  • TODO:数据扩充,超参数调整,训练监控,迁移学习等

functional

  • 替换,使用cross_entropy替换掉softmax - log - nll_loss
import torch.nn.functional as F
loss_func = F.cross_entropy
def model(xb):
    return xb @ weights + bias

Module

  • nn.Model和nn.Parameter来完成一个更加清晰简洁的训练循环
  • 训练循环封装进fit函数中,这样就能在后面再次运行。
class MNIST_Logistic(nn.Module):
    
    def __init__(self):
        super().__init__()
        self.weights = nn.Parameter(torch.randn(784, 10) / math.sqrt(784))
        self.bias = nn.Parameter(torch.zeros(10))
    
    def forward(self, xb):
        return xb @ self.weights + self.bias

def fit(model, vis_flag=True):
    for epoch in range(epochs):
        for i in range(batches):
            # set_trace()
            start_i = i * bs
            end_i = start_i + bs
            xb = x_train[start_i:end_i]
            yb = y_train[start_i:end_i]
            pred = model(xb)
            if vis_flag:
                print('{} / {}: {}'.format(i, batches, accuracy(pred, yb)))
            loss = loss_func(pred, yb)
            loss.backward()
            with torch.no_grad():
                for p in model.parameters():
                    p -= p.grad * lr
                model.zero_grad()

model = MNIST_Logistic()
fit(model)

Linear层重构

  • 层Linear进行重构,可以有效管理层变量
from torch import nn
class MNIST_Logistic(nn.Module):
    
    def __init__(self):
        super().__init__()
        self.lin = nn.Linear(784, 10)
    
    def forward(self, xb):
        return self.lin(xb)

optim进行重构

  • 可以使用优化器中的step方法执行前向传播过程中的步骤来替换手动更新每个参数
from torch import optim

def get_model(lr=0.5):
    model = MNIST_Logistic()
    return model, optim.SGD(model.parameters(), lr=lr)

def fit(model, opt, epochs=2, batches=20, bs=64, vis_flag=True):
    for epoch in range(epochs):
        for i in range(batches):
            # set_trace()
            start_i = i * bs
            end_i = start_i + bs
            xb = x_train[start_i:end_i]
            yb = y_train[start_i:end_i]
            pred = model(xb)
            if vis_flag:
                print('{} / {}: {}'.format(i, batches, accuracy(pred, yb)))
            loss = loss_func(pred, yb)
            loss.backward()
            opt.step()
            opt.zero_grad()
            
model, opt = get_model()
fit(model, opt, 2, (n - 1) // 64 + 1, 64)

Dataset重构

  • Pytorch包含一个Dataset抽象类
  • Dataset 始终包含一个__len__函数(通过Python中的标准函数len调用);一个用来索引到内容中的__getitem__函数
  • PyTorch中的TensorDataset是一个封装了张量的Dataset
from torch.utils.data import TensorDataset
train_ds = TensorDataset(x_train, y_train)
xb,yb = train_ds[i*bs : i*bs+bs]

DataLoader重构

  • PyTorch的DataLoader负责批量数据管理;
  • 可以使用任意的Dataset创建一个DataLoader
  • DataLoader使得对批量数据的迭代更容易:DataLoader自动的为我们提供每一小批量的数据来代替切片的方式train_ds[i*bs : i*bs+bs]
from torch.utils.data import DataLoader
def fit(model, opt, epochs=2, vis_flag=True):
    loss_func = F.cross_entropy
    train_ds = TensorDataset(x_train, y_train)
    train_dl = DataLoader(train_ds, batch_size=bs)
    for epoch in range(epochs):
        for xb, yb in train_dl:
            # set_trace()
            pred = model(xb)
            if vis_flag:
                print('accu: {}'.format(accuracy(pred, yb)))
            loss = loss_func(pred, yb)
            loss.backward()
            opt.step()
            opt.zero_grad()

增加验证集Validation

  • 打乱训练数据的顺序通常是避免不同批数据中存在相关性和过拟合的重要步骤!
  • 无论是否打乱顺序计算出的验证集损失值都是一样的!
  • 鉴于打乱顺序还会消耗额外的时间,所以打乱验证集数据是没有任何意义的!
def fit(model, opt, epochs=5, vis_flag=True, val_flag=True):
    loss_func = F.cross_entropy
    train_ds = TensorDataset(x_train, y_train)
    train_dl = DataLoader(train_ds, batch_size=64, shuffle=True)
    valid_ds = TensorDataset(x_valid, y_valid)
    valid_dl = DataLoader(valid_ds, batch_size=128)
    for epoch in range(epochs):
        for xb, yb in train_dl:
            # set_trace()
            pred = model(xb)
            loss = loss_func(pred, yb)
            loss.backward()
            opt.step()
            opt.zero_grad()
        model.eval()
        with torch.no_grad():
            valid_loss = sum(loss_func(model(xb), yb) for xb, yb in valid_dl)
        if (vis_flag):
            print(epoch, (valid_loss / len(valid_dl)).item())

进一步简化代码

  • 获取数据 - 拟合模型
def loss_batch(model, loss_func, xb, yb, opt=None):
    loss = loss_func(model(xb), yb)
    if opt is not None:
        loss.backward()
        opt.step()
        opt.zero_grad()
    return loss.item(), len(xb)

def get_data(train_ds, valid_ds, bs):
    return (
        DataLoader(train_ds, batch_size=bs, shuffle=True),
        DataLoader(valid_ds, batch_size=bs*2),
    )

def fit(model, opt, loss_func, train_dl, valid_dl, epochs=5, vis_flag=True, val_flag=True):
    for epoch in range(epochs):
        for xb, yb in train_dl:
            # set_trace()
            pred = model(xb)
            loss = loss_func(pred, yb)
            loss.backward()
            opt.step()
            opt.zero_grad()
        model.eval()
        with torch.no_grad():
            valid_loss = sum(loss_func(model(xb), yb) for xb, yb in valid_dl)
        if (vis_flag):
            print(epoch, (valid_loss / len(valid_dl)).item())

train_ds = TensorDataset(x_train, y_train)
valid_ds = TensorDataset(x_valid, y_valid)
loss_func = F.cross_entropy
train_dl, valid_dl = get_data(train_ds, valid_ds, 64)
model, opt = get_model()
fit(model, opt, loss_func, train_dl, valid_dl)

推广到CNN

  • 卷积公式再强调下:
    ( X + 2 P − D ( K − 1 ) − 1 ) / S + 1 (X+2P-D(K-1)-1) / S +1 (X+2PD(K1)1)/S+1
  • 均值池化之前是 10 × 4 × 4 10×4×4 10×4×4,池化之后就得到了 10 × 1 10×1 10×1,加上批尺寸是64,所以就是 64 × 10 64×10 64×10
class MNIST_CNN(nn.Module):

    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1)
        self.conv2 = nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1)
        self.conv3 = nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1)

    def forward(self, xb):
        xb = xb.view(-1, 1, 28, 28)
        xb = F.relu(self.conv1(xb))
        xb = F.relu(self.conv2(xb))
        xb = F.relu(self.conv3(xb))
        xb = F.avg_pool2d(xb, 4)
        return xb.view(-1, xb.size(1))

def get_model(lr=0.1):
    #model = MNIST_Logistic()
    model = MNIST_CNN()
    return model, optim.SGD(model.parameters(), lr=lr, momentum=0.9)
Sequential
  • 使用Sequential模型简化CNN网络的定义!
  • 自定义层:比如PyTorch中没有view层,我们需要为我们的网络定义一个,Lambda函数将会创建一个层!
class Lambda(nn.Module):

    def __init__(self, func):
        super().__init__()
        self.func = func
        
    def forward(self, x):
        return self.func(x)

def get_model(lr=0.1):
    #model = MNIST_Logistic()
    # model = MNIST_CNN()
    model = nn.Sequential(
        Lambda(lambda x: x.view(-1, 1, 28, 28)),
        nn.Conv2d(1,16,kernel_size=3,stride=2,padding=1),
        nn.ReLU(),
        nn.Conv2d(16,16,kernel_size=3,stride=2,padding=1),
        nn.ReLU(),
        nn.Conv2d(16,10,kernel_size=3,stride=2,padding=1),
        nn.ReLU(),
        nn.AvgPool2d(4),
        Lambda(lambda x: x.view(x.size(0), -1)),
    )
    return model, optim.SGD(model.parameters(), lr=lr, momentum=0.9)

封装DataLoader

  • 因为没有采用GAP,所以只能处理规定大小的图像,并且输出需要确定为 b a t c h S i z e × 1 × 1 batchSize×1×1 batchSize×1×1
  • 所以将数据预处理模块提出来,单独封装好DataLoader!
def preprocess(x, y):
    return x.view(-1, 1, 28, 28), y

class WrappedDataLoader:
    
    def __init__(self, dl, func):
        self.dl = dl
        self.func = func
    
    def __len__(self):
        return len(self.dl)

    def __iter__(self):
        batches = iter(self.dl)
        for b in batches:
            yield (self.func(*b))

def get_model(lr=0.1):
    #model = MNIST_Logistic()
    # model = MNIST_CNN()
    model = nn.Sequential(
        nn.Conv2d(1,16,kernel_size=3,stride=2,padding=1),
        nn.ReLU(),
        nn.Conv2d(16,16,kernel_size=3,stride=2,padding=1),
        nn.ReLU(),
        nn.Conv2d(16,10,kernel_size=3,stride=2,padding=1),
        nn.ReLU(),
        nn.AvgPool2d(4),
        Lambda(lambda x: x.view(x.size(0), -1)),
    )
    return model, optim.SGD(model.parameters(), lr=lr, momentum=0.9)

train_ds = TensorDataset(x_train, y_train)
valid_ds = TensorDataset(x_valid, y_valid)
train_dl, valid_dl = get_data(train_ds, valid_ds, 64)
train_dl = WrappedDataLoader(train_dl, preprocess)
valid_dl = WrappedDataLoader(valid_dl, preprocess)

使用GPU

  • print(torch.cuda.is_available()) >>> True
  • 1 数据移到GPU,模型移到GPU!
dev = torch.device(
    "cuda") if torch.cuda.is_available() else torch.device("cpu")

def preprocess(x, y):
    return x.view(-1, 1, 28, 28).to(dev), y.to(dev)

model.to(dev)

总结

在这里插入图片描述

  • 1
    点赞
  • 5
    收藏
    觉得还不错? 一键收藏
  • 0
    评论
评论
添加红包

请填写红包祝福语或标题

红包个数最小为10个

红包金额最低5元

当前余额3.43前往充值 >
需支付:10.00
成就一亿技术人!
领取后你会自动成为博主和红包主的粉丝 规则
hope_wisdom
发出的红包
实付
使用余额支付
点击重新获取
扫码支付
钱包余额 0

抵扣说明:

1.余额是钱包充值的虚拟货币,按照1:1的比例进行支付金额的抵扣。
2.余额无法直接购买下载,可以购买VIP、付费专栏及课程。

余额充值