PyTorch (Deeplizard)

1. PyTorch and Tensors

1.1 Introduction to PyTorch

1.1.1 About PyTorch — a deep learning neural network API
  1. torch

    Tensor operations: creation, indexing, joining, transposing, arithmetic, slicing, and so on.

  2. torch.nn

    Modules for building neural network layers, plus a collection of loss functions: fully connected, convolution, batch normalization (BN), dropout, CrossEntropyLoss, MSELoss, etc.

  3. torch.nn.functional

    Commonly used activation functions: relu, leaky_relu, sigmoid, etc.

  4. torch.autograd

    Automatic differentiation for all operations on Tensors.

  5. torch.optim

    Parameter optimization methods such as SGD, AdaGrad, Adam, RMSProp.

  6. torch.utils.data

    Utilities for loading data.

  7. The torchvision package
    • torchvision is PyTorch's dedicated image-processing library. Its commonly used modules:

      • torchvision.datasets: loading of standard datasets

      • torchvision.models: pretrained models (including AlexNet, VGG, ResNet) that can be loaded and used directly

      • torchvision.transforms: common image transformation classes

      • torchvision.utils: for example, saving a given Tensor as an image file

1.1.2 Installing PyTorch — quick and simple

Installing from a mirror:

pip install torch==1.13.1 -i https://pypi.tuna.tsinghua.edu.cn/simple
pip install torchvision==0.14.1 -i https://pypi.tuna.tsinghua.edu.cn/simple

Installing from a local wheel:

pip install <path-to-wheel-file>
# 例如:pip install G:\DeepLearning\torch-1.13.1+cu116-cp39-cp39-win_amd64.whl
#	   pip install G:\DeepLearning\torchvision-0.14.1+cu116-cp39-cp39-win_amd64.whl
1.1.3 About CUDA — why deep learning uses the GPU
  • Downloading and installing CUDA

    Check the supported CUDA version: open a terminal (Win+R, then cmd) -> run nvidia-smi -> read the CUDA Version field -> run nvcc -V (to check whether CUDA is already installed)

    Note: the version shown is the highest supported version (here, 12.1).


When downloading CUDA, choose a compatible version (11.6), since the PyTorch installed above is the cu116 build.


Keeping the defaults is recommended. Environment variables:

CUDA_PATH C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6
CUDA_PATH_V11_6 C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\lib
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\libnvvp


After the download finishes, unzip it and copy the extracted bin, include, and lib directories into C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6; note that the files are added into the existing directories, not replacing them.

  • Tensors are created on the CPU by default
t = torch.tensor([1,2,3])
print(t)
# 结果:tensor([1,2,3])
  • Computing with a tensor on the GPU (using CUDA)
t = t.cuda()	# 相当于torch.tensor([1,2,3]).cuda()
print(t)
# 结果:tensor([1,2,3],device='cuda:0')

We can choose whether a computation runs on the GPU or on the CPU. But why not run every computation on the GPU, if the GPU is faster than the CPU?

Because the GPU is only faster for particular kinds of tasks, and a common bottleneck is that moving data from the CPU to the GPU is expensive. For a relatively small, simple computation, moving it to the GPU does not speed it up and can make overall performance worse. GPUs excel at tasks that can be decomposed into many small subtasks; if a task is already small (and cannot be decomposed), moving it to the GPU can slow things down. In short: use the CPU for simple tasks, and the GPU for larger, more complex problems.

1.2 Tensors

1.2.1 About tensors — the data structures of deep learning
  • number, array, 2d-array: the terms commonly used in computer science
  • scalar, vector, matrix: the terms commonly used in mathematics


  • Rank of a tensor: the number of dimensions present in the tensor (the number of its axes)

    a = [1,2,3,4]
    print(a[2])	# 需要1个索引
    # 结果:3
    dd = [
        [1,2,3],
        [4,5,6],
        [7,8,9]
    ]
    print(dd[0][2])	# 需要2个索引
    # 结果:3
    
    Suppose the first axis has length 3 and the second axis has length 4:
    # Axis=1
    t[0]
    t[1]
    t[2]
    
    #Axis=2
    t[0][0]
    t[1][0]
    t[2][0]
    
    t[0][1]
    t[1][1]
    t[2][1]
    
    t[0][2]
    t[1][2]
    t[2][2]
    
    t[0][3]
    t[1][3]
    t[2][3]
    
    dd = [
        [1,2,3],
        [4,5,6],
        [7,8,9]
    ]
    print(dd[0])
    # 结果:[1,2,3]
    print(dd[1])
    # 结果:[4,5,6]
    print(dd[2])
    # 结果:[7,8,9]
    print(dd[0][0])
    # 结果:1
    print(dd[1][0])
    # 结果:4
    print(dd[2][0])
    # 结果:7
    ...
    t = torch.tensor(dd)
    print(t)
    # 结果: tensor([[1,2,3],
    #				[4,5,6],
    # 				[7,8,9]])
    print(type(t))
    # 结果: <class 'torch.Tensor'>
    print(t.shape)
    # 结果: torch.Size([3,3])
    
    print(t.reshape(1,9))
    # 结果:tensor([[1,2,3,4,5,6,7,8,9]])
    print(t.reshape(1,9).shape)
    # 结果:torch.Size([1,9])
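The idea that the rank equals the number of indices needed to reach a single element can be checked with a small helper in plain Python (no PyTorch needed; `rank_of` is a name made up for this sketch):

```python
def rank_of(nested):
    """Count how many indices are needed to reach a single element,
    i.e. the number of axes (the rank)."""
    rank = 0
    while isinstance(nested, list):
        rank += 1
        nested = nested[0]  # step one axis deeper
    return rank

a = [1, 2, 3, 4]                        # one index needed -> rank 1
dd = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]  # two indices needed -> rank 2

print(rank_of(a))   # 1
print(rank_of(dd))  # 2
```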
    
1.2.2 About PyTorch tensors — neural network programming

A CNN input tensor has shape [B, C, H, W]:

B: batch size (the number of samples in one batch)

C: number of channels (3 for RGB)

H: height of the image

W: width of the image
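A rough sketch of the [B, C, H, W] layout, using NumPy only so it runs without a GPU (the sizes here are arbitrary):

```python
import numpy as np

# a batch of 2 RGB images, each 4x4: [B, C, H, W] = [2, 3, 4, 4]
batch = np.zeros((2, 3, 4, 4))

print(batch.shape)           # (2, 3, 4, 4)
print(batch[0].shape)        # one image:            (3, 4, 4)
print(batch[0][0].shape)     # one color channel:    (4, 4)
print(batch[0][0][0].shape)  # one row of pixels:    (4,)
```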

import torch
import numpy as np

t = torch.Tensor()
print(type(t))
# 结果: <class 'torch.Tensor'>

print(t.dtype)
#结果:torch.float32
print(t.device)
#结果:cpu
print(t.layout)	#张量的布局
#结果:torch.strided

device = torch.device('cuda:0')
print(device)
# 结果:cuda:0

t1 = torch.tensor([1,2,3])
t2 = torch.tensor([1.,2.,3.])
print(t1.dtype)
# 结果:torch.int64
print(t2.dtype)
# 结果:torch.float32

# 不同设备上进行运算
t1 = torch.tensor([1,2,3])
t2 = t1.cuda()

print(t1.device)
print(t2.device)
# 结果:cpu
#	   cuda:0

print(t1+t2)
# 报错:RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!,两个值在不同的设备上 

# numpy数组转化为tensor类型
data = np.array([1,2,3])

print(type(data))
print(torch.Tensor(data))	# 构造函数
print(torch.tensor(data))	# 工厂函数
print(torch.as_tensor(data))
print(torch.from_numpy(data))
# 结果:<class 'numpy.ndarray'>
# 结果:tensor([1., 2., 3.])
# 结果:tensor([1, 2, 3], dtype=torch.int32)
# 结果:tensor([1, 2, 3], dtype=torch.int32)
# 结果:tensor([1, 2, 3], dtype=torch.int32)

print(torch.eye(2)) # 单位矩阵,指定行的数量
print(torch.zeros(2,2))
print(torch.ones(2,2))
print(torch.rand(2,2))
# 结果:tensor([[1., 0.],
#         	[0., 1.]])
# 	   tensor([[0., 0.],
#         	[0., 0.]])
# 	   tensor([[1., 1.],
#         	[1., 1.]])
# 	   tensor([[0.7282, 0.0049],
#         	[0.4634, 0.3512]])
1.2.3 创建PyTorch Tensors—最好选择
import torch
import numpy as np

# 关于数据转换为Pytorch张量的方法之间的主要区别
data = np.array([1,2,3])

t1 = torch.Tensor(data)	# 张量类构造器

# 工厂函数
t2 = torch.tensor(data)
t3 = torch.as_tensor(data)
t4 = torch.from_numpy(data)

# 上述四个方法之间的主要区别,以及在不同情况下最好的选择
print(t1)
print(t2)
print(t3)
print(t4)
# 结果:tensor([1., 2., 3.])
# 结果:tensor([1, 2, 3], dtype=torch.int32)
# 结果:tensor([1, 2, 3], dtype=torch.int32)
# 结果:tensor([1, 2, 3], dtype=torch.int32)
print(t1.dtype)
print(t2.dtype)
print(t3.dtype)
print(t4.dtype)
# 结果: torch.float32
#		torch.int32
#		torch.int32
#		torch.int32

data[0] = 0
data[1] = 0
data[2] = 0
print(t1)
print(t2)
# 结果:tensor([1., 2., 3.])
# 结果:tensor([1, 2, 3], dtype=torch.int32)
print(t3)
print(t4)
# 结果:tensor([0, 0, 0], dtype=torch.int32)
# 结果:tensor([0, 0, 0], dtype=torch.int32)
# 以上结果是因为内存分配方式不同:前两项(Tensor、tensor)在内存中为输入数据创建额外的副本(copy),后两项(as_tensor、from_numpy)与numpy数组共享内存(share)

print(torch.get_default_dtype())
# 结果:torch.float32

print(torch.tensor(np.array([1,2,3])))
# 结果:tensor([1, 2, 3], dtype=torch.int32)
print(torch.tensor(np.array([1.,2.,3.])))
# 结果:tensor([1., 2., 3.], dtype=torch.float64)
print(torch.tensor(np.array([1,2,3]),dtype=torch.float64))
# 结果:tensor([1., 2., 3.], dtype=torch.float64)
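The copy-versus-share behavior discussed above exists in NumPy itself, which makes it easy to demonstrate without a GPU or even PyTorch (a sketch of the idea, not of torch internals):

```python
import numpy as np

data = np.array([1, 2, 3])

copied = np.array(data)    # like torch.tensor()/torch.Tensor(): makes a copy
shared = np.asarray(data)  # like torch.as_tensor()/torch.from_numpy(): shares memory

data[0] = 99

print(copied)           # [1 2 3]  - unaffected, it owns its own memory
print(shared)           # [99  2  3] - reflects the change, same underlying buffer
print(shared is data)   # True here, since the input was already an ndarray
```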


Switching between numpy arrays and PyTorch tensors can be very fast, because when the new tensor is created the data is shared rather than copied behind the scenes.

Sharing means both objects refer to the same memory address, so a change to the underlying data is reflected in both objects. Sharing data is more efficient than copying and uses less memory.

The best default conversion method: torch.tensor()

When tuning for performance, the second choice is torch.as_tensor(): torch.as_tensor() accepts any array-like Python data structure, whereas torch.from_numpy() only accepts numpy arrays. For numpy input, either of the two will work.

1.2.4 PyTorch tensors — reshaping operations
import torch

t = torch.tensor([
    [1,1,1,1],
    [2,2,2,2],
    [3,3,3,3]
],dtype=torch.float32)

print(t.size())
# 结果:torch.Size([3, 4])
print(t.shape)
# 结果:torch.Size([3, 4])
print(len(t.shape)) # 查看张量的秩
# 结果: 2


# 将形状转换为张量,然后使用.prod看到这个张量包含12个分量
print(torch.tensor(t.shape).prod())
# 结果:tensor(12)

# 查看张量中的元素数量
print(t.numel())
# 结果:12

# 不改变秩的情况下对t进行重塑
print(t.reshape(1,12))
# 结果:tensor([[1., 1., 1., 1., 2., 2., 2., 2., 3., 3., 3., 3.]])
print(t.reshape(2,6))
# 结果:tensor([[1., 1., 1., 1., 2., 2.],
#         [2., 2., 3., 3., 3., 3.]])
print(t.reshape(3,4))
# 结果:tensor([[1., 1., 1., 1.],
#         [2., 2., 2., 2.],
#         [3., 3., 3., 3.]])
print(t.reshape(4,3))
# 结果:tensor([[1., 1., 1.],
#         [1., 2., 2.],
#         [2., 2., 3.],
#         [3., 3., 3.]])
print(t.reshape(6,2))
# 结果:tensor([[1., 1.],
#         [1., 1.],
#         [2., 2.],
#         [2., 2.],
#         [3., 3.],
#         [3., 3.]])
print(t.reshape(12,1 ))
# 结果:tensor([[1.],
#         [1.],
#         [1.],
#         [1.],
#         [2.],
#         [2.],
#         [2.],
#         [2.],
#         [3.],
#         [3.],
#         [3.],
#         [3.]])

# Original
print(t.reshape(1,12))
print(t.reshape(1,12).shape)
# 结果:tensor([[1., 1., 1., 1., 2., 2., 2., 2., 3., 3., 3., 3.]])
# torch.Size([1, 12])

# 压缩和解压允许扩大和缩小张量的秩
# Squeezed:压缩一个张量可以移除所有长度为1的轴
print(t.reshape(1,12).squeeze())
print(t.reshape(1,12).squeeze().shape)
# 结果:tensor([1., 1., 1., 1., 2., 2., 2., 2., 3., 3., 3., 3.])
# torch.Size([12])

# Unsqueezed:解压一个张量则会增加一个长度为1的维度
print(t.reshape(1,12).squeeze().unsqueeze(dim=0))
print(t.reshape(1,12).squeeze().unsqueeze(dim=0).shape)
# 结果:tensor([[1., 1., 1., 1., 2., 2., 2., 2., 3., 3., 3., 3.]])
# torch.Size([1, 12])

Flattening a tensor removes all of its axes except one, producing a rank-1 tensor that contains all of the original elements.

The flatten operation must happen in a neural network when transitioning from a convolutional layer to a fully connected layer.

def flatten(t):
    t = t.reshape(1,-1) # -1根据一个张量中包含的其他元素值和元素的个数来求出值应该是多少,即-1未定,根据前面的那个数字变化而定
    t = t.squeeze()
    return t

print(flatten(t))
# 结果:tensor([1., 1., 1., 1., 2., 2., 2., 2., 3., 3., 3., 3.])
print(t.reshape(1,12))
# 结果:tensor([[1., 1., 1., 1., 2., 2., 2., 2., 3., 3., 3., 3.]])
# 将t1,t2,t3看作三张4x4的图片,创建一个批次来传入CNN
t1 = torch.tensor([
    [1,1,1,1],
    [1,1,1,1],
    [1,1,1,1],
    [1,1,1,1]
])
t2 = torch.tensor([
    [2,2,2,2],
    [2,2,2,2],
    [2,2,2,2],
    [2,2,2,2]
])
t3 = torch.tensor([
    [3,3,3,3],
    [3,3,3,3],
    [3,3,3,3],
    [3,3,3,3]
])

# 将t1,t2,t3合成一个更大的张量,该张量有三个轴
t = torch.stack((t1,t2,t3))
print(t.shape)
# 结果:torch.Size([3, 4, 4]),3表示batch_size大小,即一个batch_size中的样本数量,对于CNN可以接受的张量,还少一个彩色通道的轴
print(t)
# 结果:tensor([[[1, 1, 1, 1],
#          [1, 1, 1, 1],
#          [1, 1, 1, 1],
#          [1, 1, 1, 1]],
# 
#         [[2, 2, 2, 2],
#          [2, 2, 2, 2],
#          [2, 2, 2, 2],
#          [2, 2, 2, 2]],
#
#         [[3, 3, 3, 3],
#          [3, 3, 3, 3],
#          [3, 3, 3, 3],
#          [3, 3, 3, 3]]])

# 为张量添加一个彩色通道的轴,即以下的1
t = t.reshape(3,1,4,4)
print(t)
# 结果:tensor([[[[1, 1, 1, 1],
#           [1, 1, 1, 1],
#           [1, 1, 1, 1],
#           [1, 1, 1, 1]]],
# 
# 
#         [[[2, 2, 2, 2],
#           [2, 2, 2, 2],
#           [2, 2, 2, 2],
#           [2, 2, 2, 2]]],
# 
# 
#         [[[3, 3, 3, 3],
#           [3, 3, 3, 3],
#           [3, 3, 3, 3],
#           [3, 3, 3, 3]]]])

print(t[0])
# 结果:tensor([[[1, 1, 1, 1],
#          [1, 1, 1, 1],
#          [1, 1, 1, 1],
#          [1, 1, 1, 1]]])
print(t[0][0])
# 结果:tensor([[1, 1, 1, 1],
#         [1, 1, 1, 1],
#         [1, 1, 1, 1],
#         [1, 1, 1, 1]])
print(t[0][0][0])
# 结果:tensor([1, 1, 1, 1])
print(t[0][0][0][0])
# 结果:tensor(1)

# 调用中指定start_dim参数:flatten操作时开始的轴,这里的1是一个索引,即是第二个轴(彩色通道轴),跳过了批轴
print(t.flatten(start_dim=1).shape)
# 结果:torch.Size([3, 16])
print(t.flatten(start_dim=1))
# 结果:tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
#         [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2],
#         [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3]])
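flatten(start_dim=1) as used above has a direct NumPy analogue: reshape keeping the batch axis and collapsing the rest (a sketch using NumPy so it runs anywhere):

```python
import numpy as np

# three 1x4x4 "images" stacked into a batch: shape (3, 1, 4, 4)
t = np.stack([np.full((1, 4, 4), v) for v in (1, 2, 3)])
print(t.shape)  # (3, 1, 4, 4)

# flatten everything except the batch axis, like t.flatten(start_dim=1)
flat = t.reshape(t.shape[0], -1)
print(flat.shape)  # (3, 16)
print(flat[0])     # [1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1]
```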
1.2.5 PyTorch tensors — element-wise operations

An element-wise operation operates on corresponding tensor elements, i.e. elements that occupy the same index position in their respective tensors. An element's position within a tensor is determined by the indices used to locate it.

Two tensors must have the same shape to perform an element-wise operation. Same shape means: the tensors have the same number of axes, and each corresponding axis has the same length.

import  torch
import numpy as np

t1 = torch.tensor([
    [1,2],
    [3,4]
],dtype=torch.float32)

t2 = torch.tensor([
    [9,8],
    [7,6]
],dtype=torch.float32)

# 第一个轴是数组,第二个轴是数字
print(t1[0])
# 结果:tensor([1., 2.])
print(t1[0][0])
# 结果:tensor(1.)
# 相同位置的索引
print(t1[0][0])
# 结果:tensor(1.)
print(t2[0][0])
# 结果:tensor(9.)
# 加减操作
print(t1+t2)
# 结果:tensor([[10., 10.],
#         [10., 10.]])
print(t1+2)
# 结果:tensor([[3., 4.],
#         [5., 6.]])
print(t1.add(2))
# 结果:tensor([[3., 4.],
#         [5., 6.]])
print(t1-2)
# 结果:tensor([[-1.,  0.],
#         [ 1.,  2.]])
print(t1.sub(2))
# 结果:tensor([[-1.,  0.],
#         [ 1.,  2.]])
# 乘除操作
print(t1*2)
# 结果:tensor([[2., 4.],
#         [6., 8.]])
print(t1.mul(2))
# 结果:tensor([[2., 4.],
#         [6., 8.]])
print(t1/2)
# 结果:tensor([[0.5000, 1.0000],
#         [1.5000, 2.0000]])
print(t1.div(2))
# 结果:tensor([[0.5000, 1.0000],
#         [1.5000, 2.0000]])

Tensor broadcasting: defines how tensors of different shapes are handled during element-wise operations.

# 将2广播成t1的形状
print(np.broadcast_to(2,t1.shape))
# 结果:[[2 2]
#  [2 2]]
print(t1+2)	# 这里默认对2进行了广播,将2变为了与t1具有相同的形状
# 结果:tensor([[3., 4.],
#         [5., 6.]])
print(t1+torch.tensor(
    np.broadcast_to(2,t1.shape),dtype=torch.float32
))
# 结果:tensor([[3., 4.],
#         [5., 6.]])
t1 = torch.tensor([
    [1,1],
    [1,1]
],dtype=torch.float32)

t2 = torch.tensor([2,4],dtype=torch.float32)

print(np.broadcast_to(t2.numpy(),t1.shape))
# 结果:[[2. 4.]
#  [2. 4.]]

print(t1+t2)	# 对t2进行广播,广播成为[[2,4],[2,4]]
# 结果:tensor([[3., 5.],
#         [3., 5.]])
# 比较
t = torch.tensor([
    [0,5,7],
    [6,0,7],
    [0,8,0]
],dtype=torch.float32)

# 元素等于0
print(t.eq(0))
# 结果:tensor([[ True, False, False],
#         [False,  True, False],
#         [ True, False,  True]])

# 元素大于或等于0
print(t.ge(0))
# 结果:tensor([[True, True, True],
#         [True, True, True],
#         [True, True, True]])

# 元素大于0
print(t.gt(0))
# 结果:tensor([[False,  True,  True],
#         [ True, False,  True],
#         [False,  True, False]])

# 元素小于0
print(t.lt(0))
# 结果:tensor([[False, False, False],
#         [False, False, False],
#         [False, False, False]])

# 元素小于或等于7
print(t.le(7))
# 结果:tensor([[ True,  True,  True],
#         [ True,  True,  True],
#         [ True, False,  True]])
print(t <= torch.tensor(np.broadcast_to(7,t.shape),dtype=torch.float32))
# 结果:tensor([[ True,  True,  True],
#         [ True,  True,  True],
#         [ True, False,  True]])
print(t <= torch.tensor([
    [7,7,7],
    [7,7,7],
    [7,7,7]
],dtype=torch.float32))
# 结果:tensor([[ True,  True,  True],
#         [ True,  True,  True],
#         [ True, False,  True]])
# 求绝对值
print(t.abs())
# 结果:tensor([[0., 5., 7.],
#         [6., 0., 7.],
#         [0., 8., 0.]])

# 开根号
print(t.sqrt())
# 结果:tensor([[0.0000, 2.2361, 2.6458],
#         [2.4495, 0.0000, 2.6458],
#         [0.0000, 2.8284, 0.0000]])

# 求负数
print(t.neg())
# 结果:tensor([[-0., -5., -7.],
#         [-6., -0., -7.],
#         [-0., -8., -0.]])

# 负数的绝对值
print(t.neg().abs())
# 结果:tensor([[0., 5., 7.],
#         [6., 0., 7.],
#         [0., 8., 0.]])
1.2.6 PyTorch tensors — reduction and access operations

A reduction operation on a tensor is an operation that reduces the number of elements contained in the tensor.

import  torch
import numpy as np

t = torch.tensor([
    [0,1,0],
    [2,0,2],
    [0,3,0],
],dtype=torch.float32)

print(t.sum())  # 对张量中所有元素求和
# 结果:tensor(8.)
print(t.numel())    # 张量中元素的个数
# 结果:9
print(t.sum().numel())  # 对张量中所有元素求和后的元素的个数
# 结果:1
print(t.sum().numel() < t.numel())  # 对张量中所有元素求和后的元素个数是否小于原张量中元素的个数
# 结果:True
print(t.prod()) # 计算张量中所有元素的乘积
# 结果:tensor(0.)
print(t.mean()) # 所有元素求和后的平均值
# 结果:tensor(0.8889)
print(t.std())  # 张量的标准差
# 结果:tensor(1.1667)
t = torch.tensor([
    [1,1,1,1],
    [2,2,2,2],
    [3,3,3,3]
],dtype=torch.float32)

# 沿着第一个轴运行的元素是数组,沿着第二个轴运行的元素是数字
print(t.sum(dim=0))
# 结果:tensor([6., 6., 6., 6.])
# 以下便于理解dim=0
print('t[0]:',t[0],',t[1]:',t[1],',t[2]:',t[2],',t[0]+t[1]+t[2]=',t[0]+t[1]+t[2])
# 结果:t[0]: tensor([1., 1., 1., 1.]) ,t[1]: tensor([2., 2., 2., 2.]) ,t[2]: tensor([3., 3., 3., 3.]) ,t[0]+t[1]+t[2]= tensor([6., 6., 6., 6.])

# 以下便于理解dim=1
print(t.sum(dim=1))
# 结果:tensor([ 4.,  8., 12.])
print(t[0].sum())
# 结果:tensor(4.)
print(t[1].sum())
# 结果:tensor(8.)
print(t[2].sum())
# 结果:tensor(12.)

The argmax function returns the index position of the maximum value inside a tensor: max gives the largest value, and argmax gives the index at which that largest value occurs.
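By default argmax works on the flattened tensor, so the flat index it returns can be converted back into a (row, column) position. NumPy behaves the same way (pure-NumPy sketch):

```python
import numpy as np

t = np.array([
    [1, 0, 0, 2],
    [0, 3, 3, 0],
    [4, 0, 0, 5],
], dtype=np.float32)

flat_index = t.argmax()  # index into the flattened array
print(flat_index)        # 11 -> the last element, value 5.0

# convert the flat index back into a 2-D position
print(np.unravel_index(flat_index, t.shape))  # (2, 3)
```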

t = torch.tensor([
    [1,0,0,2],
    [0,3,3,0],
    [4,0,0,5]
],dtype=torch.float32)

print(t.max())
# 结果:tensor(5.)
print(t.argmax())
# 结果:tensor(11)
print(t.flatten())
# 结果:tensor([1., 0., 0., 2., 0., 3., 3., 0., 4., 0., 0., 5.])

print(t.max(dim=0)) # 返回两个张量:第一个是沿dim=0的最大值,第二个是这些最大值的索引位置。
# 结果:torch.return_types.max(
# values=tensor([4., 3., 3., 5.]),
# indices=tensor([2, 1, 1, 2]))
print(t.argmax(dim=0))	# 最大值的索引位置。
# 结果:tensor([2, 1, 1, 2])
print(t.max(dim=1))
# 结果:torch.return_types.max(
# values=tensor([2., 3., 5.]),
# indices=tensor([3, 1, 3]))
print(t.argmax(dim=1))
# 结果:tensor([3, 1, 3])
t = torch.tensor([
    [1,2,3],
    [4,5,6],
    [7,8,9]
],dtype=torch.float32)

print(t.mean())
# 结果:tensor(5.)
print(t.mean().item())  # 将张量转变为一个数字,item()方法只适用于标量值(单元素)张量
# 结果:5.0
print(t.mean(dim=0).tolist())
# 结果:[4.0, 5.0, 6.0]
print(t.mean(dim=0).numpy())
# 结果:[4. 5. 6.]

2. Neural networks and deep learning with PyTorch

2.1 Data and data processing for deep learning

2.1.1 About Fashion-MNIST
  • MNIST: 10 classes, the digits 0-9

  • Fashion-MNIST: 10 classes of clothing items; 60,000 training images and 10,000 test images, with the same image size, data format, and train/test split structure as MNIST.

    • How the dataset items were processed:

      1. Convert the input to PNG format (photograph);
      2. Trim the image so it contains only the object;
      3. Resize so the longest edge is 28 pixels;
      4. Sharpen the contours of the object in the image;
      5. Extend the shortest edge to 28 and center the image on the canvas;
      6. Invert the pixel intensities;
      7. Convert the image to 8-bit grayscale.


2.1.2 About PyTorch torchvision
  • torchvision is PyTorch's computer vision package for deep learning. It provides:

    • Datasets (such as MNIST and Fashion-MNIST)
    • Models
    • Transforms
    • Utils
    import torchvision  # access to popular datasets, model architectures, and common image transforms for computer vision
    import torchvision.transforms as transforms # access to common transforms for image processing
    
  • Four steps to complete a project:

    1. Prepare the data
    2. Build the model
    3. Train the model
    4. Analyze the model's results
  • Preparing the data

Follow the ETL process: extract the data from its source, transform it into an appropriate format, and load it into a structure suited to querying and analysis.

  1. Extract: get the Fashion-MNIST image data from its source

  2. Transform: convert the image data into PyTorch tensors

  3. Load: put the data into an object that makes it easy to access

    Dataset class: an abstract class representing a dataset

    DataLoader class: wraps a dataset and provides access to the underlying data

    import pandas as pd
    import torch
    import torch.utils.data as data
    
    # 自定义数据集,可以将self传递给DataLoader
    class OHLC(data.Dataset):
        def __init__(self,csv_file):
            self.data = pd.read_csv(csv_file)
    
        # 从数据集的一个特定索引位置获取一个元素
        def __getitem__(self, index):
            r = self.data.iloc[index]
            label = torch.tensor(r.is_up_day,dtype=torch.long)
            sample = self.normalize(torch.tensor([r.open,r.high,r.low,r.close]))
            return sample,label
    
        # 返回数据集的长度
        def __len__(self):
            return len(self.data)
    
    train_set = torchvision.datasets.FashionMNIST(
        # 提取
        root='./data/FashionMNIST', # 路径,./表示此文件目录下
        train=True, # 训练参数,表示数据用于训练集
        download=True,  # 下载参数,若没有出现在指定的根目录,则下载这些数据
        # 转换
        transform=transforms.Compose([
             transforms.ToTensor()
        ])  # 变换参数,传递了一个转换的组合,转换在数据集元素上执行
    )
    
    # 加载
    train_loader = torch.utils.data.DataLoader(train_set)
    
2.1.3 PyTorch Datasets and DataLoaders for machine learning
import torch
import  torchvision
import  torchvision.transforms as transforms

train_set = torchvision.datasets.FashionMNIST(
    root='./data/FashionMNIST',
    train=True,
    download=True,
    transform=transforms.Compose([
         transforms.ToTensor()
    ])
)

train_loader = torch.utils.data.DataLoader(
    train_set,batch_size=10
)

epoch and batch_size explained:

epoch: one epoch is one complete pass over all of the training data; training for epoch epochs repeats that pass epoch times.

batch_size: the training data is processed in groups of batch_size samples, giving (total training samples / batch_size) groups.

Example: with 60,000 training images, batch_size=32 and epoch=10, the samples are grouped 32 at a time into 60000/32 groups, and the full 60,000 samples are passed over 10 times.
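The bookkeeping above can be checked in a few lines of plain Python (60,000 is the Fashion-MNIST training set size; the variable names are made up for this sketch):

```python
import math

num_samples = 60000
batch_size = 32
epochs = 10

batches_per_epoch = math.ceil(num_samples / batch_size)  # the last batch may be smaller
total_batches = batches_per_epoch * epochs

print(batches_per_epoch)  # 1875
print(total_batches)      # 18750
```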

import numpy as np
import matplotlib.pyplot as plt

torch.set_printoptions(linewidth=120)   # 设置打印到控制台的pytorch输出的行宽

print(len(train_set))   # 训练集中图像的数量
# 结果:60000

print(train_set.train_labels)   # 对实际的类名或标签进行编码
# 结果:tensor([9, 0, 0,  ..., 3, 0, 5])

print(train_set.train_labels.bincount())    # bincount()会给出一个张量内的值的分布频率,即训练集中每个类对应的样本数量是一致的,即平衡数据集
# 结果:tensor([6000, 6000, 6000, 6000, 6000, 6000, 6000, 6000, 6000, 6000])
sample = next(iter(train_set))  # 将训练集对象传给python内置的iter函数得到迭代器,next返回迭代器的下一个项目,注意这里是train_set
print(len(sample))  # 包含两个结果
# 结果:2
print(type(sample))
# 结果:<class 'tuple'>
image,label = sample
# image = sample[0]
# label = sample[1]
print(image.shape)
# 结果:torch.Size([1, 28, 28])

plt.imshow(image.squeeze(),cmap='gray')	# 将其颜色通道压缩
plt.show()
print('label:',label)
# 结果:label: 9


batch = next(iter(train_loader))  # 一次性获取一批数据,即batch_size个样本数据所组合成的张量
print(len(batch))
# 结果:2
print(type(batch))
# 结果:<class 'list'>
images,labels = batch
print(images.shape)
# 结果torch.Size([10, 1, 28, 28])
print(labels.shape)
# 结果:torch.Size([10])

# 使用torchvision创建网格,nrow=10,表示图像会沿着一行显示10个,nrow参数指定每一行图像的数量
grid = torchvision.utils.make_grid(images,nrow=10)
plt.figure(figsize=(15,15)) # figsize:指定figure的宽和高,单位为英寸
plt.imshow(np.transpose(grid,(1,2,0)))  # 转换通道格式,将[通道,长,宽]->(转变为)[长,宽,通道],原本顺序[0,1,2]->(转变为)[1,2,0]
plt.show()
print('labels:',labels)
# 结果:labels: tensor([9, 0, 0, 3, 0, 2, 7, 2, 5, 5])


train_loader = torch.utils.data.DataLoader(train_set,batch_size=100)
batch = next(iter(train_loader))
images,labels = batch
grid = torchvision.utils.make_grid(images,nrow=10)
plt.figure(figsize=(15,15))
plt.imshow(np.transpose(grid,(1,2,0)))
plt.show()
print('labels:',labels)
# 结果:labels: tensor([9, 0, 0, 3, 0, 2, 7, 2, 5, 5, 0, 9, 5, 5, 7, 9, 1, 0, 6, 4, 3, 1, 4, 8, 4, 3, 0, 2, 4, 4, 5, 3, 6, 6, 0, 8, 5,
#         2, 1, 6, 6, 7, 9, 5, 9, 2, 7, 3, 0, 3, 3, 3, 7, 2, 2, 6, 6, 8, 3, 3, 5, 0, 5, 5, 0, 2, 0, 0, 4, 1, 3, 1, 6, 3,
#         1, 4, 4, 6, 1, 9, 1, 3, 5, 7, 9, 7, 1, 7, 9, 9, 9, 3, 2, 9, 3, 6, 4, 1, 1, 8])


2.2 Neural networks and deep learning

2.2.1 Building a convolutional neural network with PyTorch
  • Object-oriented programming:
    • Class: an abstraction over things of the same kind, e.g. cat, dog, lizard
      • Attributes: the properties that kind has, e.g. a lizard's color, body length, and so on
      • Methods: the actions that kind can perform, e.g. a lizard can walk, eat insects, change color
    • Object: an instance created from a class, e.g. a lizard named Green and a lizard named Yellow are two objects with different attribute values.
class Lizard:   # 指定类名
    def __init__(self,name):    # 类的构造方法,self参数是一个特殊的参数,使我们能够创建存储或封装类对象中的属性值
        self.name = name

    def set_name(self,name):
        self.name = name

lizard = Lizard('deep')
print(lizard.name)
# 结果:deep

lizard.set_name('lizard')
print(lizard.name)
# 结果:lizard
  • The torch.nn package

    • PyTorch's neural network library contains all the typical components needed to build neural networks. Every layer in a neural network has two main components: a transformation (represented in code) and a collection of weights (represented as data).
    • The package contains a special class called Module, the base class for all neural network modules: every layer in PyTorch extends nn.Module and inherits all of PyTorch's built-in functionality from it.
    • A neural network can be seen as one big layer: layers are functions, and a network is a composition of functions, so it is itself a function. Layers and networks are therefore the same kind of object, and in PyTorch this similarity is captured by the nn.Module base class, which is why our networks should extend nn.Module.
  • Building a neural network with PyTorch

    1. Create a neural network class that extends the nn.Module base class
    2. Define the network's layers as class attributes
    3. Implement the forward pass with the forward() method
    # 没有继承nn.module,简单网络
    class Network:
        def __init__(self):
            self.layer = None
    
        def forward(self,t):
            t = self.layer(t)
            return t
    
    # 继承nn.module,使简单网络转换为PyTorch神经网络,将具有PyTorch中nn.module类中的所有功能
    # nn.module类可以保持跟踪每个层中包含的网络权重,当权重需要更新时,该特性在训练过程中会非常方便
    # 虚拟层作为属性
    class Network(nn.Module):
        def __init__(self):
            super(Network,self).__init__()
            self.layer = None
            
        def forward(self,t):
            t = self.layer(t)
            return t
    
    # 卷积层和线性层作为属性
    class Network(nn.Module):
        def __init__(self):
            super(Network, self).__init__()
            self.conv1 = nn.Conv2d(in_channels=1,out_channels=6,kernel_size=5)
            self.conv2 = nn.Conv2d(in_channels=6,out_channels=12,kernel_size=5)
    
            self.fc1 = nn.Linear(in_features=12*4*4,out_features=120)
            self.fc2 = nn.Linear(in_features=120,out_features=60)
            self.out = nn.Linear(in_features=60,out_features=10)
    
        def forward(self, t):
            # implement the forward pass
            return t
    
    network = Network()
    print(network)
    # 结果:Network(
    #   (conv1): Conv2d(1, 6, kernel_size=(5, 5), stride=(1, 1))
    #   (conv2): Conv2d(6, 12, kernel_size=(5, 5), stride=(1, 1))
    #   (fc1): Linear(in_features=192, out_features=120, bias=True)
    #   (fc2): Linear(in_features=120, out_features=60, bias=True)
    #   (out): Linear(in_features=60, out_features=10, bias=True)
    # )
    
2.2.2 CNN layer parameters in PyTorch
import torch
import torch.nn as nn
from torch.nn import Parameter
from torch.nn import functional as F

class Linear(nn.Module):
    def __init__(self,in_features,out_features,bias=True):
        super(Linear,self).__init__()
        self.in_features = in_features
        self.out_features = out_features
        # 每个层中的权重张量包含了随着网络在训练过程中学习而更新的权重值
        # Parameter对象将被注册到模块的参数列表中,并在反向传播后由optimizer更新,即可训练参数
        self.weight = Parameter(torch.Tensor(out_features,in_features))
        if bias:
            self.bias = Parameter(torch.Tensor(out_features))
        else:
            self.register_parameter('bias',None)
        # register_parameter(name, param):向module添加parameter
        # 与直接赋值相比,最大的区别是parameter可以通过注册时的name获取

    def forward(self,input):
        return F.linear(input,self.weight,self.bias)
  • parameter: used in a function definition; think of it as a placeholder, like a local variable inside the function

    • Two kinds of parameters
      • Hyperparameters: chosen manually and somewhat arbitrarily. As neural network programmers we pick hyperparameter values mainly through trial and error, increasingly reusing values that have proven effective in the past.

        • kernel_size: the size of the convolution kernel
        • out_channels of a convolution: the number of convolution kernels (filters). Output channels are also called feature maps. For a linear layer the output is a rank-1 tensor, so we speak of output features rather than output channels or feature maps.
      • Data-dependent hyperparameters: these sit at the start and the end of the network, namely the in_channels of the first convolutional layer and the out_features of the output layer.

        • The in_channels of the first convolutional layer self.conv1 depends on the number of color channels in the training images
        • The out_features of the output layer depends on the number of classes in the training set
        • Each layer's input is the previous layer's output, so all other in_channels of convolutional layers and in_features of linear layers depend on the data coming from the previous layer
      • Learnable parameters: parameters that are learned during training. They start from arbitrary values and are updated iteratively as the network learns ("the network is learning" means its learnable parameters are converging to appropriate values, i.e. the values that minimize the loss function).

        • weight: present in every layer.
        class Network(nn.Module):
            def __init__(self):
                super(Network, self).__init__()
                self.conv1 = nn.Conv2d(in_channels=1,out_channels=6,kernel_size=5)
                self.conv2 = nn.Conv2d(in_channels=6,out_channels=12,kernel_size=5)
        
                self.fc1 = nn.Linear(in_features=12*4*4,out_features=120)
                self.fc2 = nn.Linear(in_features=120,out_features=60)
                self.out = nn.Linear(in_features=60,out_features=10)
        
            def forward(self, t):
                # implement the forward pass
                return t
        
        network = Network()
        print(network.conv1)
        # 结果:Conv2d(1, 6, kernel_size=(5, 5), stride=(1, 1))
        print(network.conv2)
        # 结果:Conv2d(6, 12, kernel_size=(5, 5), stride=(1, 1))
        print(network.fc1)
        # 结果:Linear(in_features=192, out_features=120, bias=True)
        print(network.fc2)
        # 结果:Linear(in_features=120, out_features=60, bias=True)
        print(network.out)
        # 结果:Linear(in_features=60, out_features=10, bias=True)
        print(network)
        # 结果:Network(
        #   (conv1): Conv2d(1, 6, kernel_size=(5, 5), stride=(1, 1))
        #   (conv2): Conv2d(6, 12, kernel_size=(5, 5), stride=(1, 1))
        #   (fc1): Linear(in_features=192, out_features=120, bias=True)
        #   (fc2): Linear(in_features=120, out_features=60, bias=True)
        #   (out): Linear(in_features=60, out_features=10, bias=True)
        # )
        print(network.conv1.weight.shape)
        # 结果:torch.Size([6, 1, 5, 5])
        print(network.conv2.weight.shape)
        # 结果:torch.Size([12, 6, 5, 5])
        print(network.conv2.weight[0].shape)
        # 结果:torch.Size([6, 5, 5])
        print(network.fc1.weight.shape)
        # 结果:torch.Size([120, 192])
        print(network.fc2.weight.shape)
        # 结果:torch.Size([60, 120])
        print(network.out.weight.shape)
        # 结果:torch.Size([10, 60])
        


        class Network():    # nn.Module
            def __init__(self):
                # super(Network, self).__init__()
                self.conv1 = nn.Conv2d(in_channels=1,out_channels=6,kernel_size=5)
                self.conv2 = nn.Conv2d(in_channels=6,out_channels=12,kernel_size=5)
        
                self.fc1 = nn.Linear(in_features=12*4*4,out_features=120)
                self.fc2 = nn.Linear(in_features=120,out_features=60)
                self.out = nn.Linear(in_features=60,out_features=10)
        
            def forward(self, t):
                # implement the forward pass
                return t
        
        network = Network()
        print(network)
        # 结果:<__main__.Network object at 0x000002DCE21F0C40>
        
        class Network():    # nn.Module
            def __init__(self):
                # super(Network, self).__init__()
                self.conv1 = nn.Conv2d(in_channels=1,out_channels=6,kernel_size=5)
                self.conv2 = nn.Conv2d(in_channels=6,out_channels=12,kernel_size=5)
        
                self.fc1 = nn.Linear(in_features=12*4*4,out_features=120)
                self.fc2 = nn.Linear(in_features=120,out_features=60)
                self.out = nn.Linear(in_features=60,out_features=10)
        
            def forward(self, t):
                # implement the forward pass
                return t
        
            def __repr__(self):
                return "lizardnet"
        
        network = Network()
        print(network)
        # 结果:lizardnet
        
  • argument: the actual value passed to the function when it is called, supplied from outside by the caller

2.2.3 Implementing the CNN forward pass in PyTorch
class Network(nn.Module):
    def __init__(self):
        super(Network, self).__init__()
        self.conv1 = nn.Conv2d(in_channels=1,out_channels=6,kernel_size=5)
        self.conv2 = nn.Conv2d(in_channels=6,out_channels=12,kernel_size=5)

        self.fc1 = nn.Linear(in_features=12*4*4,out_features=120)
        self.fc2 = nn.Linear(in_features=120,out_features=60)
        self.out = nn.Linear(in_features=60,out_features=10)

    def forward(self, t):
        # implement the forward pass
        return t

network = Network()
in_features = torch.tensor([1,2,3,4],dtype=torch.float32)

weight_matrix = torch.tensor([
    [1,2,3,4],
    [2,3,4,5],
    [3,4,5,6]
],dtype=torch.float32)

print(weight_matrix.matmul(in_features))
# 结果:tensor([30., 40., 50.])
# 展示所有的参数
for param in network.parameters():
    print(param.shape)
# 结果:torch.Size([6, 1, 5, 5])
# torch.Size([6])
# torch.Size([12, 6, 5, 5])
# torch.Size([12])
# torch.Size([120, 192])
# torch.Size([120])
# torch.Size([60, 120])
# torch.Size([60])
# torch.Size([10, 60])
# torch.Size([10])


for name,param in network.named_parameters():
    print(name,'\t\t',param.shape)
# 结果:conv1.weight 		 torch.Size([6, 1, 5, 5])
# conv1.bias 		 torch.Size([6])
# conv2.weight 		 torch.Size([12, 6, 5, 5])
# conv2.bias 		 torch.Size([12])
# fc1.weight 		 torch.Size([120, 192])
# fc1.bias 		 torch.Size([120])
# fc2.weight 		 torch.Size([60, 120])
# fc2.bias 		 torch.Size([60])
# out.weight 		 torch.Size([10, 60])
# out.bias 		 torch.Size([10])
# pytorch会自动创建矩阵,并使用随机的值来初始化,凯明均匀初始化
fc = nn.Linear(in_features=4,out_features=3)    # 创建一个3x4的权重矩阵
print(fc(in_features))
# 结果:tensor([-0.7972, -3.0251, -3.2511], grad_fn=<AddBackward0>)

fc.weight = nn.Parameter(weight_matrix)
# 为什么获得的不是精确值,由于线性层输出中增加了偏置张量
print(fc(in_features))
# 结果:tensor([29.5836, 40.0824, 49.6866], grad_fn=<AddBackward0>)
fc = nn.Linear(in_features=4,out_features=3,bias=False)
fc.weight = nn.Parameter(weight_matrix)
print(fc(in_features))
# 结果:tensor([30., 40., 50.], grad_fn=<MvBackward0>)

y = Ax + b

Variable(变量)	Definition(定义)
A	weight matrix tensor
x	input tensor
b	bias tensor
y	output tensor

  • layer(input): a layer object instance can be called like a function, as with fc(in_features) above, because PyTorch's Module class implements the special Python method __call__(input).

  • __call__(input): the special call method runs whenever the object instance is called. Because __call__ cooperates with the forward method of layers and networks, we do not call forward(input) directly; instead we call the object instance, PyTorch's __call__(input) runs under the hood, and __call__ in turn calls forward(input).
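The call-delegates-to-forward pattern can be imitated in plain Python (a simplified sketch of the idea, not nn.Module's real implementation, which also runs hooks and other bookkeeping in __call__):

```python
class Module:
    def __call__(self, *args):
        # calling the instance routes through __call__, which (after any
        # bookkeeping in the real nn.Module) calls forward()
        return self.forward(*args)

class Doubler(Module):
    def forward(self, t):
        return t * 2

layer = Doubler()
print(layer(21))          # 42 - we call the instance, not forward() directly
print(layer.forward(21))  # 42 - works, but bypasses __call__'s bookkeeping
```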

class Network(nn.Module):
    def __init__(self):
        super(Network, self).__init__()
        self.conv1 = nn.Conv2d(in_channels=1,out_channels=6,kernel_size=5)
        self.conv2 = nn.Conv2d(in_channels=6,out_channels=12,kernel_size=5)

        self.fc1 = nn.Linear(in_features=12*4*4,out_features=120)
        self.fc2 = nn.Linear(in_features=120,out_features=60)
        self.out = nn.Linear(in_features=60,out_features=10)

    def forward(self, t):
        # (1) input layer
        t = t

        # (2) hidden conv layer
        t = self.conv1(t)
        t = F.relu(t)
        t = F.max_pool2d(t,kernel_size = 2,stride = 2)

        # (3) hidden conv layer
        t = self.conv2(t)
        t = F.relu(t)
        t = F.max_pool2d(t, kernel_size=2, stride=2)

        # (4) hidden liner layer
        t = t.reshape(-1,12 * 4 * 4)
        t = self.fc1(t)
        t = F.relu(t)

        # (5) hidden liner layer
        t = self.fc2(t)
        t = F.relu(t)
        
        # (6) output layer
        t = self.out(t)
        # t = F.softmax(t,dim=1)
        # 不使用softmax函数的原因:训练过程中使用的损失函数是来自nn.functional的交叉熵损失函数,它在其输入上隐式地执行了softmax操作
        return t
  1. The input is a [1, 28, 28] tensor
  2. Passing through the conv and pooling layers reduces the 28x28 height and width to 4x4
  3. The 12 output channels of size 4x4 are then passed to the linear layer (12*4*4 = 192 input features)
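The 28x28 -> 4x4 shrinkage follows from the layer arithmetic: with no padding and stride 1 a convolution gives (n - k + 1), and a 2x2 max pool with stride 2 halves the size. A plain-Python check (conv_out and pool_out are helper names made up for this sketch):

```python
def conv_out(n, kernel_size):
    # valid convolution, stride 1, no padding
    return n - kernel_size + 1

def pool_out(n):
    # 2x2 max pooling with stride 2
    return n // 2

n = 28
n = pool_out(conv_out(n, 5))  # conv1 (5x5) -> 24, pool -> 12
n = pool_out(conv_out(n, 5))  # conv2 (5x5) -> 8,  pool -> 4
print(n)           # 4
print(12 * n * n)  # 192 = in_features of fc1
```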
2.2.4 前向传播介绍|单个图像传递给神经网络
  • 卷积层
    1. 操作过程:卷积运算又被称为互相关运算,将图像矩阵中,从左到右,从上到下,取与卷积核同等大小的一部分,每一部分中的值与卷积核中的值对应相乘后求和,最后的结果组成一个矩阵。
    2. 作用:提取输入的不同特征,通过滑动窗口得到特征图像
  • 池化层
    1. 操作过程:池化层的操作是将一个窗口内的像素按照平均值加权或选择最大值来作为输出,一个窗口内仅有一个输出数据。因此,当经过池化层后,图像的尺寸会变小,计算量也会变小,相比于在池化前使用卷积,池化后同样的卷积大小具有更大的感受野。
    2. 池化的目的:保持特征不变性、特征维度下降(特征维度下降:一个图像包含的信息很大,特征很多,有些信息在执行图像任务时很少使用,或是有重复。池化可以去掉这些冗余的信息,提取最重要的特征。 但是特征越多,模型就会拟合这些特征,导致模型泛化能力下降,因此进行多层池化后,以前的特征维数减少,训练参数减少,泛化能力提高,进而防止过拟合。)
      • 最大池化:选图像区域最大值作为该区域池化后的值。(最大池化具有去除冗余信息、去除噪声的作用,更好保留纹理特征。)
      • 平均池化:计算图像区域平均值作为该区域池化后的值。(平均池化能保留整体数据的特征,较好的突出背景信息,避免丢失信息。)
      • 使用情景:对于一张图,在识别或者处理的时候,如果需要的是背景,或者说是整幅图的一个相对平均的情况那么用平均池化比较好。如果需要将图像中的一些物体特征提取出来,那么用最大池化好。
    3. 池化的作用:特征降维、压缩数据和参数的数量,减小过拟合。
    4. 池化的结果:特征减少、参数减少。
  • 归一化层
    1. 归一化:将数据减去均值、再除以标准差,变成均值为0、方差为1的分布。
    2. 归一化的目的:在机器学习领域中,特征向量中的不同特征往往具有不同的量纲和量纲单位,这样导致数量级相差过大,计算起来大数的变化会掩盖掉较小数据的变化,同时在进行梯度下降时如果学习率较大会难以找到最优点,收敛缓慢,需要对数据进行数据标准化处理,以解决数据指标之间的可比性,因此,进行归一化,使数据被限定在一定的范围内,加快梯度下降求最优解的速度,避免设置较大学习率而导致网络无法收敛的风险,提高网络稳定性。
    3. 批归一化(Batch Normalization,BN):将送入网络的每一个Batch进行归一化,使得每个Batch数据的特征都具有均值为0,方差为1的分布。(BN层通过对数据进行归一化,避免数据出现很大数量级差,防止了小的改变通过多层网络传播被放大的问题,可以有效防止梯度消失和梯度爆炸。)
  • 激活函数
    • 激活层的提出主要是为了解决神经网络的线性不可分问题。如果没有激活层,无论神经网络有多少层,输出都是输入的线性组合,网络的逼近能力十分有限,因此需要在每一个隐藏层后加一个激活层,引入非线性因素
  • Flatten层
    • Flatten层用来将输入“压平”,即把卷积层输出的多维特征拉为一维向量,常用于从卷积层到全连接层的过渡。
  • 全连接层
    • 全连接层的作用:对数据进行分类
  • 总结:卷积层、池化层、激活函数层将原始数据映射到隐层特征空间;全连接层将学到的分布式特征表示映射到样本标记空间。
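上面最大池化与平均池化的区别可以用一个最小示例直观验证(输入数据为假设的1x1x4x4张量):

```python
import torch
import torch.nn.functional as F

# 假设的输入:[batch,通道,高,宽] = [1,1,4,4],取值1~16
t = torch.arange(1., 17.).reshape(1, 1, 4, 4)

print(F.max_pool2d(t, kernel_size=2, stride=2))  # 每个2x2窗口取最大值
# tensor([[[[ 6.,  8.],
#           [14., 16.]]]])
print(F.avg_pool2d(t, kernel_size=2, stride=2))  # 每个2x2窗口取平均值
# tensor([[[[ 3.5000,  5.5000],
#           [11.5000, 13.5000]]]])
```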
import torch
import torch.nn as nn
import torch.nn.functional as F

import torchvision
import torchvision.transforms as transforms

torch.set_printoptions(linewidth=120)

train_set = torchvision.datasets.FashionMNIST(
    root="./data/FashionMNIST",
    train=True,
    download=True,
    transform=transforms.Compose([
         transforms.ToTensor()
    ])
)

class NetWork(nn.Module):
    def __init__(self):
        super(NetWork,self).__init__()
        self.conv1 = nn.Conv2d(in_channels=1,out_channels=6,kernel_size=5)
        self.conv2 = nn.Conv2d(in_channels=6,out_channels=12,kernel_size=5)

        self.fc1 = nn.Linear(in_features=12*4*4,out_features=120)
        self.fc2 = nn.Linear(in_features=120,out_features=60)
        self.out = nn.Linear(in_features=60,out_features=10)

    def forward(self,t):
        t = F.relu(self.conv1(t))
        t = F.max_pool2d(t,kernel_size=2,stride=2)

        t = F.relu(self.conv2(t))
        t = F.max_pool2d(t,kernel_size=2,stride=2)

        t = F.relu(self.fc1(t.reshape(-1,12*4*4)))
        t = F.relu(self.fc2(t))
        t = self.out(t)

        return t

# 关闭梯度计算
torch.set_grad_enabled(False)

network = NetWork()

sample = next(iter(train_set))

image,label = sample
print(image.shape)  # 图像为单通道28x28的大小
# 结果:torch.Size([1, 28, 28])

print(image.unsqueeze(0).shape) # 给了一个大小为1的batch
# 结果:torch.Size([1, 1, 28, 28])
# 未训练权重前(权重随机生成)的预测
pred = network(image.unsqueeze(0))  # 卷积的输入需为[batch_size,通道数,高度,宽度]
print(pred.shape)
# 结果:torch.Size([1, 10])
print(pred)
# 结果:tensor([[-0.1324,  0.1154, -0.0332, -0.0438,  0.1242,  0.1578, -0.1197, -0.0302, -0.0422, -0.0652]])
print(label)
# 结果:9
print(pred.argmax(dim=1))
# 结果:tensor([5])

print(F.softmax(pred,dim=1))
# 结果:tensor([[0.0878, 0.1125, 0.0969, 0.0959, 0.1135, 0.1173, 0.0889, 0.0972, 0.0961, 0.0939]])
print(F.softmax(pred,dim=1).sum())
# 结果:tensor(1.0000)
2.2.5 神经网络批处理|批处理图像
import torch
import torch.nn as nn
import torch.nn.functional as F

import torchvision
import torchvision.transforms as transforms

torch.set_printoptions(linewidth=120)

# Check blog post on deeplizard.com for any version related updates
print(torch.__version__)
print(torchvision.__version__)
# 结果:1.12.0+cu116
# 0.13.0+cu116

train_set = torchvision.datasets.FashionMNIST(
    root="./data/FashionMNIST",
    train=True,
    download=True,
    transform=transforms.Compose([
        transforms.ToTensor()
    ])
)

class Network(nn.Module):
    def __init__(self):
        super(Network, self).__init__()
        self.conv1 = nn.Conv2d(in_channels=1,out_channels=6,kernel_size=5)
        self.conv2 = nn.Conv2d(in_channels=6,out_channels=12,kernel_size=5)

        self.fc1 = nn.Linear(in_features=12*4*4,out_features=120)
        self.fc2 = nn.Linear(in_features=120,out_features=60)
        self.out = nn.Linear(in_features=60,out_features=10)

    def forward(self, t):
        # (1) input layer
        t = t

        # (2) hidden conv layer
        t = self.conv1(t)
        t = F.relu(t)
        t = F.max_pool2d(t,kernel_size = 2,stride = 2)

        # (3) hidden conv layer
        t = self.conv2(t)
        t = F.relu(t)
        t = F.max_pool2d(t, kernel_size=2, stride=2)

        # (4) hidden liner layer
        t = t.reshape(-1,12 * 4 * 4)
        t = self.fc1(t)
        t = F.relu(t)

        # (5) hidden liner layer
        t = self.fc2(t)
        t = F.relu(t)

        # (6) output layer
        t = self.out(t)
        # t = F.softmax(t,dim=1)
        # 不使用softmax函数的原因:训练时使用nn.functional中的交叉熵损失函数,它会在其输入上隐式地执行softmax操作

        return t

torch.set_grad_enabled(False)

network = Network()

data_loader = torch.utils.data.DataLoader(
    train_set,
    batch_size=10
)

batch = next(iter(data_loader))

images,labels = batch

print(images.shape)
# 结果:torch.Size([10, 1, 28, 28])
print(labels.shape)
# 结果:torch.Size([10])

preds = network(images)

print(preds.shape)
# 结果:torch.Size([10, 10])
print(preds)
# 结果:tensor([[ 0.0090,  0.0891,  0.1622, -0.0314,  0.0394, -0.0832,  0.0788, -0.0260,  0.0835,  0.0782],
#         [ 0.0043,  0.0846,  0.1518, -0.0333,  0.0342, -0.0884,  0.0698, -0.0308,  0.0689,  0.0695],
#         [ 0.0159,  0.0730,  0.1379, -0.0434,  0.0511, -0.0726,  0.0750, -0.0324,  0.0673,  0.0699],
#         [ 0.0099,  0.0758,  0.1383, -0.0378,  0.0524, -0.0778,  0.0836, -0.0304,  0.0688,  0.0621],
#         [ 0.0004,  0.0794,  0.1470, -0.0228,  0.0460, -0.0910,  0.0880, -0.0290,  0.0718,  0.0599],
#         [ 0.0044,  0.0876,  0.1556, -0.0292,  0.0384, -0.0845,  0.0777, -0.0335,  0.0789,  0.0788],
#         [ 0.0123,  0.0847,  0.1607, -0.0254,  0.0384, -0.0856,  0.0848, -0.0115,  0.0635,  0.0697],
#         [ 0.0093,  0.0893,  0.1577, -0.0305,  0.0344, -0.0838,  0.0789, -0.0276,  0.0794,  0.0801],
#         [ 0.0114,  0.0820,  0.1404, -0.0493,  0.0511, -0.0696,  0.0725, -0.0337,  0.0610,  0.0713],
#         [ 0.0140,  0.0858,  0.1393, -0.0462,  0.0439, -0.0766,  0.0738, -0.0410,  0.0784,  0.0762]])
print(preds.argmax(dim=1).shape)
# 结果:torch.Size([10])
print(preds.argmax(dim=1))
# 结果:tensor([2, 2, 2, 2, 2, 2, 2, 2, 2, 2])
print(labels)
# 结果:tensor([9, 0, 0, 3, 0, 2, 7, 2, 5, 5])
print(preds.argmax(dim=1).eq(labels))
# 结果:tensor([False, False, False, False, False,  True, False,  True, False, False])
print(preds.argmax(dim=1).eq(labels).sum()) # 预测正确的数量
# 结果:tensor(2)

def get_num_correct(preds,labels):
    return preds.argmax(dim=1).eq(labels).sum().item()

print(get_num_correct(preds,labels))
# 结果:2
2.2.6 卷积神经网络张量变换

Debug以下代码来观察网络的变换:

import torch
import torch.nn as nn
import torch.nn.functional as F

import torchvision
import torchvision.transforms as transforms
class Network(nn.Module):
    def __init__(self):
        super(Network, self).__init__()
        self.conv1 = nn.Conv2d(in_channels=1,out_channels=6,kernel_size=5)
        self.conv2 = nn.Conv2d(in_channels=6,out_channels=12,kernel_size=5)

        self.fc1 = nn.Linear(in_features=12*4*4,out_features=120)
        self.fc2 = nn.Linear(in_features=120,out_features=60)
        self.out = nn.Linear(in_features=60,out_features=10)

    def forward(self, t):
        # (1) input layer
        t = t

        # (2) hidden conv layer
        t = self.conv1(t)
        t = F.relu(t)
        t = F.max_pool2d(t,kernel_size = 2,stride = 2)

        # (3) hidden conv layer
        t = self.conv2(t)
        t = F.relu(t)
        t = F.max_pool2d(t, kernel_size=2, stride=2)

        # (4) hidden liner layer
        t = t.reshape(-1,12 * 4 * 4)
        t = self.fc1(t)
        t = F.relu(t)

        # (5) hidden liner layer
        t = self.fc2(t)
        t = F.relu(t)

        # (6) output layer
        t = self.out(t)
        # t = F.softmax(t,dim=1)
        
        return t


network = Network()

train_set = torchvision.datasets.FashionMNIST(
    root="./data/FashionMNIST",
    train=True,
    download=True,
    transform=transforms.Compose([
        transforms.ToTensor()
    ])
)

sample = next(iter(train_set))
image,label = sample

output = network(image.unsqueeze(0))
print(output)
  1. 经过(1)后t.shape为[1,1,28,28]
  2. 经过(2)的卷积层后t.shape为[1,6,24,24],激活函数没变化,经过池化层后t.shape为[1,6,12,12]
  3. 经过(3)的卷积层后t.shape为[1,12,8,8],激活函数没变化,经过池化层后t.shape为[1,12,4,4]
  4. 经过(4)的重塑后t.shape为[1,192],经过全连接层t.shape变为[1,120],激活函数没变化
  5. 经过(5)的全连接层t.shape变为[1,60],激活函数没变化
  6. 经过(6)的输出层t.shape变为[1,10]
  • CNN输出大小公式(长和宽等长)
    设输入为 $n \times n$,滤波器(卷积核)为 $f \times f$,填充为 $p$,步长为 $s$,则输出大小 $O$ 为:
    $$O = \frac{n-f+2p}{s}+1$$

  • CNN输出大小公式(长和宽不等长)
    设输入为 $n_h \times n_w$,滤波器(卷积核)为 $f_h \times f_w$,填充为 $p$,步长为 $s$,则输出高度 $O_h$ 与宽度 $O_w$ 为:
    $$O_h = \frac{n_h-f_h+2p}{s}+1 \qquad O_w = \frac{n_w-f_w+2p}{s}+1$$
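可以把上面的公式写成一个小函数,验证本文网络中28x28输入经过各层后的尺寸变化(假设填充p=0):

```python
# 输出大小公式 O = (n - f + 2p) / s + 1 的直接实现
def conv_output_size(n, f, p=0, s=1):
    return (n - f + 2 * p) // s + 1

n = 28
n = conv_output_size(n, f=5)       # conv1:   28 -> 24
n = conv_output_size(n, f=2, s=2)  # maxpool: 24 -> 12
n = conv_output_size(n, f=5)       # conv2:   12 -> 8
n = conv_output_size(n, f=2, s=2)  # maxpool:  8 -> 4
print(n)  # 4,对应全连接层的 in_features=12*4*4
```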

2.3 训练神经网络

2.3.1 使用PyTorch训练卷积神经网络
  • 训练步骤

    1. 从训练集中获取一批数据
    2. 把数据传递给网络
    3. 计算损失(网络返回预测值与真实值之间的差异)
    4. 计算损失函数关于网络权重的梯度
    5. 利用梯度更新权重,以减少损失
    6. 重复1~5的步骤,直到一个epoch结束
    7. 重复1~6很多epoch次,获得期望的精确度。
  • 基本准备(导包+网络+数据+数据加载器+实例对象)
    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    import torch.optim as optim
    
    import torchvision
    import torchvision.transforms as transforms
    
    torch.set_printoptions(linewidth=120)   # 展示输出选项
    torch.set_grad_enabled(True)    # 默认为True
    
    print(torch.__version__)
    print(torchvision.__version__)
    # 结果:1.12.0+cu116
    # 0.13.0+cu116
    
    def get_num_correct(preds,labels):
        return preds.argmax(dim=1).eq(labels).sum().item()
    
    class Network(nn.Module):
        def __init__(self):
            super(Network, self).__init__()
            self.conv1 = nn.Conv2d(in_channels=1,out_channels=6,kernel_size=5)
            self.conv2 = nn.Conv2d(in_channels=6,out_channels=12,kernel_size=5)
    
            self.fc1 = nn.Linear(in_features=12*4*4,out_features=120)
            self.fc2 = nn.Linear(in_features=120,out_features=60)
            self.out = nn.Linear(in_features=60,out_features=10)
    
        def forward(self, t):
            # (1) input layer
            t = t
    
            # (2) hidden conv layer
            t = self.conv1(t)
            t = F.relu(t)
            t = F.max_pool2d(t,kernel_size = 2,stride = 2)
    
            # (3) hidden conv layer
            t = self.conv2(t)
            t = F.relu(t)
            t = F.max_pool2d(t, kernel_size=2, stride=2)
    
            # (4) hidden liner layer
            t = t.reshape(-1,12 * 4 * 4)
            t = self.fc1(t)
            t = F.relu(t)
    
            # (5) hidden liner layer
            t = self.fc2(t)
            t = F.relu(t)
    
            # (6) output layer
            t = self.out(t)
            # t = F.softmax(t,dim=1)
    
            return t
    
    train_set = torchvision.datasets.FashionMNIST(
        root="./data/FashionMNIST",
        train=True,
        download=True,
        transform=transforms.Compose([
            transforms.ToTensor()
        ])
    )
    
    network = Network()
    
    train_loader = torch.utils.data.DataLoader(train_set,batch_size = 100)
    batch = next(iter(train_loader))
    images,labels = batch
    
    
  • 计算损失
    # 计算损失
    preds = network(images)
    loss = F.cross_entropy(preds,labels)
    print(loss.item())
    # 结果:2.302997350692749
    
  • 计算梯度
    # 计算梯度
    print(network.conv1.weight.grad)
    # 结果:None
    loss.backward() # Calculating the gradients,反向传播
    print(network.conv1.weight.grad.shape)
    # 结果:torch.Size([6, 1, 5, 5])
    

    梯度张量和权重张量具有相同的形状,所以对于权重张量的每个元素都有一个对应的梯度。

    1. 当将图像传入网络时,这些图像沿着前向方法中定义的函数流动,在此期间,PyTorch在后台一直跟踪这些计算。
    2. 由于预测张量来自网络,它携带了之前所有计算的记录;损失张量又由预测张量计算得到,所以PyTorch记录了损失张量创建过程中发生的全部计算,这些都是在幕后进行的。
    3. 当在最后的损失张量上调用反向传播时,计算图中每个张量的梯度都会被计算出来,下一步就可以使用这些梯度来更新网络的权重。
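上述梯度跟踪与反向传播的过程可以用一个最小的autograd示例来体会(示例张量为假设):

```python
import torch

w = torch.tensor([2., 3.], requires_grad=True)  # 可学习的"权重"
x = torch.tensor([1., 4.])                      # 输入
loss = (w * x).sum()                            # loss = 2*1 + 3*4 = 14

print(w.grad)    # None:调用backward()之前梯度为空
loss.backward()  # 沿着记录的计算图反向传播
print(w.grad)    # tensor([1., 4.]):d(loss)/dw = x
```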
  • 更新权重
    # 更新权重
    optimizer = optim.Adam(network.parameters(),lr=0.01)
    print(loss.item())
    # 结果:2.297093629837036
    print(get_num_correct(preds,labels))
    # 结果:11
    optimizer.step()    # 更新权重
    
    preds = network(images)
    loss = F.cross_entropy(preds,labels)
    print(loss.item())
    # 结果:2.2647159099578857
    print(get_num_correct(preds,labels))
    # 结果:11
    
  • 单批训练
    network = Network()
    
    train_loader = torch.utils.data.DataLoader(train_set,batch_size = 100)
    optimizer = optim.Adam(network.parameters(),lr=0.01)
    
    batch = next(iter(train_loader))    # Get Batch
    images,labels = batch
    
    preds = network(images) # Pass Batch
    loss = F.cross_entropy(preds,labels)    # Calculate Loss
    
    loss.backward() # Calculate Gradients
    optimizer.step()    # Update weights
    
    #-------------------------------------------
    
    print('loss1:',loss.item())
    # 结果:loss1: 2.314021110534668
    preds = network(images)
    loss = F.cross_entropy(preds,labels)
    print('loss2:',loss.item())
    # 结果:loss2: 2.288311243057251
    

    朝着损失函数减小的方向移动时,学习率(lr)告诉优化器在该方向上每一步走多远;使用这个参数时要记住的是,不要在最小值方向上走得太远(步长过大可能越过最优点)。
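学习率对步长的影响可以用一个手写梯度下降的小例子说明(目标函数 loss = w² 与初值均为假设):

```python
import torch

# 在 loss = w^2 上做三步梯度下降:w <- w - lr * grad,lr控制每步走多远
w = torch.tensor(10., requires_grad=True)
lr = 0.1
for _ in range(3):
    loss = w ** 2
    loss.backward()            # d(loss)/dw = 2w
    with torch.no_grad():
        w -= lr * w.grad       # 按学习率缩放的一步更新
    w.grad.zero_()             # 清零梯度,避免累加

print(w)  # 10 -> 8 -> 6.4 -> 5.12
```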

  • 训练全部数据(全部batch)
    network = Network()
    
    train_loader = torch.utils.data.DataLoader(train_set,batch_size = 100)
    optimizer = optim.Adam(network.parameters(),lr=0.01)
    
    total_loss = 0
    total_correct = 0
    
    for batch in train_loader:  # Get Batch
        images, labels = batch

        preds = network(images)  # Pass Batch
        loss = F.cross_entropy(preds, labels)  # Calculate Loss

        optimizer.zero_grad() # 将上一次循环中累积的权重梯度清零,因为PyTorch默认会累加梯度
        loss.backward()  # Calculate Gradients
        optimizer.step()  # Update weights

        total_loss += loss.item()
        total_correct += get_num_correct(preds,labels)

    print("epoch:",0,"total_correct:",total_correct,"loss:",total_loss)
    print(total_correct/len(train_set))
    # 权重随机初始化,每次运行数值会有差异;一个epoch后的准确率与下面多epoch训练中epoch 0的结果相近(约0.77)
    
  • 重复训练全部数据(epoch)
    network = Network()
    
    train_loader = torch.utils.data.DataLoader(train_set,batch_size = 100)
    optimizer = optim.Adam(network.parameters(),lr=0.01)
    
    for epoch in range(5):
        total_loss = 0
        total_correct = 0
    
        for batch in train_loader:  # Get Batch
            images, labels = batch
    
            preds = network(images)  # Pass Batch
            loss = F.cross_entropy(preds, labels)  # Calculate Loss
    
            optimizer.zero_grad() # 将上一次循环中累积的权重梯度清零,因为PyTorch默认会累加梯度
            loss.backward()  # Calculate Gradients
            optimizer.step()  # Update weights
    
            total_loss += loss.item()
            total_correct += get_num_correct(preds,labels)
    
        print("epoch:",epoch,"total_correct:",total_correct,"loss:",total_loss)
        # 结果:epoch: 0 total_correct: 46207 loss: 359.2975911796093
        # epoch: 1 total_correct: 51329 loss: 234.99475240707397
        # epoch: 2 total_correct: 52012 loss: 213.1382188051939
        # epoch: 3 total_correct: 52459 loss: 202.43612889945507
        # epoch: 4 total_correct: 52569 loss: 198.35370056331158
    print(total_correct/len(train_set))
    # 结果:0.87615
    
2.3.2 使用混淆矩阵分析CNN的结果

混淆矩阵:ROC曲线绘制的基础,同时它也是衡量分类型模型准确度中最基本,最直观,计算最简单的方法。即混淆矩阵就是分别统计分类模型归错类,归对类的观测值个数,然后把结果放在一个表里展示出来。这个表就是混淆矩阵。

  • 基本代码
    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    import torch.optim as optim
    
    import torchvision
    import torchvision.transforms as transforms
    
    torch.set_printoptions(linewidth=120)   # 展示输出选项
    torch.set_grad_enabled(True)    # 默认为True
    
    print(torch.__version__)
    print(torchvision.__version__)
    # 结果:1.12.0+cu116
    # 0.13.0+cu116
    
    def get_num_correct(preds,labels):
        return preds.argmax(dim=1).eq(labels).sum().item()
    
    class Network(nn.Module):
        def __init__(self):
            super(Network, self).__init__()
            self.conv1 = nn.Conv2d(in_channels=1,out_channels=6,kernel_size=5)
            self.conv2 = nn.Conv2d(in_channels=6,out_channels=12,kernel_size=5)
    
            self.fc1 = nn.Linear(in_features=12*4*4,out_features=120)
            self.fc2 = nn.Linear(in_features=120,out_features=60)
            self.out = nn.Linear(in_features=60,out_features=10)
    
        def forward(self, t):
            # (1) input layer
            t = t
    
            # (2) hidden conv layer
            t = self.conv1(t)
            t = F.relu(t)
            t = F.max_pool2d(t,kernel_size = 2,stride = 2)
    
            # (3) hidden conv layer
            t = self.conv2(t)
            t = F.relu(t)
            t = F.max_pool2d(t, kernel_size=2, stride=2)
    
            # (4) hidden liner layer
            t = t.reshape(-1,12 * 4 * 4)
            t = self.fc1(t)
            t = F.relu(t)
    
            # (5) hidden liner layer
            t = self.fc2(t)
            t = F.relu(t)
    
            # (6) output layer
            t = self.out(t)
            # t = F.softmax(t,dim=1)
    
            return t
    
    train_set = torchvision.datasets.FashionMNIST(
        root="./data/FashionMNIST",
        train=True,
        download=True,
        transform=transforms.Compose([
            transforms.ToTensor()
        ])
    )
    
    network = Network()
    
    train_loader = torch.utils.data.DataLoader(train_set,batch_size = 100)
    optimizer = optim.Adam(network.parameters(),lr=0.01)
    
    for epoch in range(5):
        total_loss = 0
        total_correct = 0
    
        for batch in train_loader:  # Get Batch
            images, labels = batch
    
            preds = network(images)  # Pass Batch
            loss = F.cross_entropy(preds, labels)  # Calculate Loss
    
            optimizer.zero_grad() # 将上一次循环中累积的权重梯度清零,因为PyTorch默认会累加梯度
            loss.backward()  # Calculate Gradients
            optimizer.step()  # Update weights
    
            total_loss += loss.item()
            total_correct += get_num_correct(preds,labels)
    
        print("epoch:",epoch,"total_correct:",total_correct,"loss:",total_loss)
    
    print(total_correct/len(train_set))
    
    
  • 新增方法:
    print(len(train_set))
    # 结果:60000
    print(len(train_set.targets))
    # 结果:60000
    
    # 从数据集中获取预测
    def get_all_preds(model,loader):
        all_preds = torch.tensor([])
        for batch in loader:
            images,labels = batch
    
            preds = model(images)
            all_preds = torch.cat((all_preds,preds),dim=0)
    
        return all_preds
    
  • 默认开启求梯度,增加开销
    prediction_loader = torch.utils.data.DataLoader(train_set,batch_size = 10000)
    train_preds = get_all_preds(network,prediction_loader)
    
    print(train_preds.shape)
    # 结果:torch.Size([60000, 10])
    print(train_preds)
    # 结果:tensor([[ 0.1352, -0.1020, -0.1569,  ..., -0.0831,  0.0444, -0.0343],
    #         [ 0.1242, -0.0976, -0.1583,  ..., -0.0895,  0.0327, -0.0250],
    #         [ 0.1218, -0.0962, -0.1291,  ..., -0.0707,  0.0473, -0.0393],
    #         ...,
    #         [ 0.1176, -0.0998, -0.1529,  ..., -0.0833,  0.0333, -0.0283],
    #         [ 0.1208, -0.0947, -0.1315,  ..., -0.0722,  0.0491, -0.0348],
    #         [ 0.1241, -0.0935, -0.1395,  ..., -0.0784,  0.0450, -0.0375]], grad_fn=<CatBackward0>)
    print(train_preds.requires_grad)    # True意味着该张量启用了PyTorch的梯度跟踪;这里只做预测不训练,用不到该功能,白白浪费计算资源
    # 结果:True
    print(train_preds.grad) # 虽然上面是True,但是梯度并没有任何值
    # 结果:None
    print(train_preds.grad_fn)  # grad_fn有值,说明train_preds的计算图被跟踪;做预测(也称推理)时跟踪计算图会造成额外的系统开销
    # 结果:<CatBackward0 object at 0x00000264BD4C55E0>
    
  • 解决方法:
    • 方法一:

      # 需要在不跟踪梯度的情况下得到预测或者不创建图形
      with torch.no_grad(): # 局部关闭梯度跟踪的方法
          prediction_loader = torch.utils.data.DataLoader(train_set, batch_size=10000)
          train_preds = get_all_preds(network, prediction_loader)
      
      print(train_preds.requires_grad)
      # 结果:False
      print(train_preds.grad)
      # 结果:None
      print(train_preds.grad_fn)
      # 结果:None
      
    • 方法二:

      @torch.no_grad()
      def get_all_preds(model,loader):
          all_preds = torch.tensor([])
          for batch in loader:
              images,labels = batch
      
              preds = model(images)
              all_preds = torch.cat((all_preds,preds),dim=0)
      
          return all_preds
      
  • 预测正确数量
    preds_correct = get_num_correct(train_preds,train_set.targets)
    
    print('total correct:',preds_correct)
    # 结果:total correct: 6000
    print('accuracy:',preds_correct/len(train_set))
    # 结果:accuracy: 0.1
    
  • 建立混淆矩阵
    print(train_set.targets)
    # 结果:tensor([9, 0, 0,  ..., 3, 0, 5])
    print(train_preds.argmax(dim=1))
    # 结果:tensor([9, 9, 9,  ..., 9, 9, 9])
    
    stacked = torch.stack(
        (
            train_set.targets,
            train_preds.argmax(dim=1)
        ),
        dim=1
    )
    
    print(stacked.shape)
    # 结果:torch.Size([60000, 2])
    print(stacked)
    # 结果:tensor([[9, 9],
    #        [0, 0],
    #        [0, 0],
    #         ...,
    #        [3, 3],
    #        [0, 0],
    #        [5, 5]])
    print(stacked[0].tolist())
    # 结果:[9, 9]
    
    
    cmt = torch.zeros(10,10,dtype=torch.int64)
    print(cmt)
    # 结果:tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    #         [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    #         [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    #         [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    #         [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    #         [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    #         [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    #         [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    #         [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    #         [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]])
    
    for p in stacked:
        tl,pl = p.tolist()
        cmt[tl,pl] = cmt[tl,pl] + 1
    
    print(cmt)
    # 结果:tensor([[5485,    1,  103,  143,   17,    5,  221,    0,   25,    0],
    #         [  15, 5764,   14,  189,    9,    2,    4,    0,    3,    0],
    #         [  38,    1, 4574,   95,  979,    2,  301,    0,   10,    0],
    #         [ 174,    7,   13, 5657,   74,    0,   69,    0,    6,    0],
    #         [  12,    0,  214,  421, 5086,    0,  260,    0,    7,    0],
    #         [   3,    0,    0,    0,    0, 5655,    1,  300,   10,   31],
    #         [1384,    3,  486,  193,  698,    1, 3191,    0,   44,    0],
    #         [   0,    0,    0,    0,    0,    6,    0, 5946,    2,   46],
    #         [  36,    0,   12,   23,   22,    1,   53,    8, 5845,    0],
    #         [   0,    0,    0,    0,    0,   33,    0,  502,    8, 5457]])
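    上面构建混淆矩阵的循环可以在一个假设的3分类小例子上验证:

```python
import torch

targets = torch.tensor([0, 1, 2, 2, 0])  # 真实标签
preds   = torch.tensor([0, 2, 2, 2, 1])  # 预测标签

stacked = torch.stack((targets, preds), dim=1)
cmt = torch.zeros(3, 3, dtype=torch.int64)
for tl, pl in stacked.tolist():
    cmt[tl, pl] += 1  # 行是真实类别,列是预测类别

print(cmt)
# tensor([[1, 1, 0],
#         [0, 0, 1],
#         [0, 0, 2]])
```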
    
  • 画混淆矩阵
    import matplotlib.pyplot as plt
    
    from sklearn.metrics import confusion_matrix
    from resources.plotcm import plot_confusion_matrix
    
    cm = confusion_matrix(train_set.targets,train_preds.argmax(dim=1))
    print(type(cm))
    # 结果:<class 'numpy.ndarray'>
    print(cm)
    # 结果:[[5485    1  103  143   17    5  221    0   25    0]
    #  [  15 5764   14  189    9    2    4    0    3    0]
    #  [  38    1 4574   95  979    2  301    0   10    0]
    #  [ 174    7   13 5657   74    0   69    0    6    0]
    #  [  12    0  214  421 5086    0  260    0    7    0]
    #  [   3    0    0    0    0 5655    1  300   10   31]
    #  [1384    3  486  193  698    1 3191    0   44    0]
    #  [   0    0    0    0    0    6    0 5946    2   46]
    #  [  36    0   12   23   22    1   53    8 5845    0]
    #  [   0    0    0    0    0   33    0  502    8 5457]]
    names = ('T-shirt/top','Trouser','Pullover','Dress','Coat','Sandal','Shirt','Sneaker','Bag','Ankle boot')
    plt.figure(figsize=(10,10))
    plot_confusion_matrix(cm,names)
    plt.show()
    

在这里插入图片描述

3. 附加学习

3.1 拼接和叠加

3.1.1 PyTorch
  • 拼接:是在一个现有的轴上连接一系列张量

  • 堆叠:是在一个新的轴上连接一系列张量

    import torch
    
    t1 = torch.tensor([1,1,1])
    
    print(t1.unsqueeze(dim=0))
    # 结果:tensor([[1, 1, 1]])
    print(t1.unsqueeze(dim=1))
    # 结果:tensor([[1],
    #         [1],
    #         [1]])
    
    print(t1.shape)
    # 结果:torch.Size([3])
    print(t1.unsqueeze(dim=0).shape)
    # 结果:torch.Size([1, 3])
    print(t1.unsqueeze(dim=1).shape)
    # 结果:torch.Size([3, 1])
    
    第一个轴(dim=0)
    t1 = torch.tensor([1,1,1])
    t2 = torch.tensor([2,2,2])
    t3 = torch.tensor([3,3,3])
    
    # 拼接
    print(torch.cat((t1,t2,t3),dim=0).shape)
    # 结果:torch.Size([9])
    print(torch.cat((t1,t2,t3),dim=0))
    # 结果:tensor([1, 1, 1, 2, 2, 2, 3, 3, 3])
    
    # 堆叠
    print(torch.stack((t1,t2,t3),dim=0).shape)
    # 结果:torch.Size([3, 3])
    print(torch.stack((t1,t2,t3),dim=0))
    # 结果:tensor([[1, 1, 1],
    #         [2, 2, 2],
    #         [3, 3, 3]])
    
    print(torch.cat((t1.unsqueeze(0),t2.unsqueeze(0),t3.unsqueeze(0)),dim=0))
    # 结果:tensor([[1, 1, 1],
    #         [2, 2, 2],
    #         [3, 3, 3]])
    
    # 可以得出:堆叠 = 增加维度(unsqueeze) + 拼接
    
    第二个轴(dim=1)
    # 由于t1,t2,t3张量的形状为torch.Size([3]),没有第二个轴所以不能拼接只能堆叠
    print(torch.stack((t1,t2,t3),dim=1).shape)
    # 结果:torch.Size([3, 3])
    print(torch.stack((t1,t2,t3),dim=1))
    # 结果:tensor([[1, 2, 3],
    #         [1, 2, 3],
    #         [1, 2, 3]])
    
    print(torch.cat((t1.unsqueeze(1),t2.unsqueeze(1),t3.unsqueeze(1)),dim=1))
    # 结果:tensor([[1, 2, 3],
    #         [1, 2, 3],
    #         [1, 2, 3]])
    

    例如:将三张照片合成一个批次,分为两种情况:

    • 第一种是只有三张单独的照片(三维[通道,高,宽]),这种情况下使用堆叠,在dim=0处创建批次轴;
    • 第二种是已有一个批次的照片(四维[批次,通道,高,宽]),这种情况下将新照片unsqueeze出批次轴后使用拼接
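这两种情况的最小示例如下(图片用全零张量代替,[3,28,28]的形状为假设):

```python
import torch

# 情况一:三张单独的照片[C,H,W],用stack在dim=0创建批次轴
img1 = torch.zeros(3, 28, 28)
img2 = torch.zeros(3, 28, 28)
img3 = torch.zeros(3, 28, 28)
batch = torch.stack((img1, img2, img3), dim=0)
print(batch.shape)  # torch.Size([3, 3, 28, 28])

# 情况二:已有批次[B,C,H,W],新照片先unsqueeze出批次轴,再用cat拼接
new_img = torch.zeros(3, 28, 28)
batch = torch.cat((batch, new_img.unsqueeze(0)), dim=0)
print(batch.shape)  # torch.Size([4, 3, 28, 28])
```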
3.1.2 TensorFlow
import tensorflow as tf

t1 = tf.constant([1,1,1])
t2 = tf.constant([2,2,2])
t3 = tf.constant([3,3,3])

print(tf.concat((t1,t2,t3),axis=0))
# 结果:tf.Tensor([1 1 1 2 2 2 3 3 3], shape=(9,), dtype=int32)
print(tf.stack((t1,t2,t3),axis=0))
# 结果:tf.Tensor(
# [[1 1 1]
#  [2 2 2]
#  [3 3 3]], shape=(3, 3), dtype=int32)

print(tf.concat((tf.expand_dims(t1,0),tf.expand_dims(t2,0),tf.expand_dims(t3,0)),axis=0))
# 结果:tf.Tensor(
# [[1 1 1]
#  [2 2 2]
#  [3 3 3]], shape=(3, 3), dtype=int32)

print(tf.stack((t1,t2,t3),axis=1))
# 结果:tf.Tensor(
# [[1 2 3]
#  [1 2 3]
#  [1 2 3]], shape=(3, 3), dtype=int32)
print(tf.concat((tf.expand_dims(t1,1),tf.expand_dims(t2,1),tf.expand_dims(t3,1)),axis=1))
# 结果:tf.Tensor(
# [[1 2 3]
#  [1 2 3]
#  [1 2 3]], shape=(3, 3), dtype=int32)
3.1.3 NumPy
import numpy as np

t1 = np.array([1,1,1])
t2 = np.array([2,2,2])
t3 = np.array([3,3,3])

print(np.concatenate((t1,t2,t3),axis=0))
# 结果:[1 1 1 2 2 2 3 3 3]
print(np.stack((t1,t2,t3),axis=0))
# 结果:[[1 1 1]
#  [2 2 2]
#  [3 3 3]]

print(np.concatenate(
    (
        np.expand_dims(t1, 0),
        np.expand_dims(t2, 0),
        np.expand_dims(t3, 0),
    ),
    axis=0
))
# 结果:[[1 1 1]
#  [2 2 2]
#  [3 3 3]]

print(np.stack(
    (t1,t2,t3),
    axis=1
))
# 结果:[[1 2 3]
#  [1 2 3]
#  [1 2 3]]

print(np.concatenate(
    (
        np.expand_dims(t1, 1),
        np.expand_dims(t2, 1),
        np.expand_dims(t3, 1),
    ),
    axis=1
))
# 结果:[[1 2 3]
#  [1 2 3]
#  [1 2 3]]

4. TensorBoard

  • TensorBoard:用于可视化神经网络训练过程中的各项指标。

4.1 下载及运行TensorBoard

# 激活虚拟环境
conda activate pytorch
# 下载tensorboard
pip install tensorboard
# 查看tensorboard版本
tensorboard --version
# 结果:2.13.0

# 运行tensorboard
tensorboard --logdir=runs
# 结果:Serving TensorBoard on localhost; to expose to the network, use a proxy or pass --bind_all
#	   TensorBoard 2.13.0 at http://localhost:6006/ (Press CTRL+C to quit)
# 默认情况下,PyTorch会将事件数据文件写入当前目录下的runs文件夹中

无数据,进入http://localhost:6006/显示如下:

在这里插入图片描述

4.2 Pytorch使用TensorBoard

# 从TensorBoard开始(神经网络图和图像)
tb = SummaryWriter()

network = Network()
images ,labels = next(iter(train_loader))


grid = torchvision.utils.make_grid(images)

tb.add_image('images',grid)
tb.add_graph(network,images)        # 在TensorBoard中可视化网络的计算图
tb.close()

在这里插入图片描述

在这里插入图片描述

# 训练模型时使用TensorBoard显示数据的变化



network = Network()
train_loader = torch.utils.data.DataLoader(train_set,batch_size=100,shuffle=True)   # shuffle=True表示将顺序打乱
optimizer = optim.Adam(network.parameters(),lr=0.01)


images ,labels = next(iter(train_loader))
grid = torchvision.utils.make_grid(images)

tb = SummaryWriter()
tb.add_image('images',grid)
tb.add_graph(network,images)        # 在TensorBoard中可视化网络的计算图


for epoch in range(10):

    total_loss = 0
    total_correct = 0

    for batch in train_loader:
        images,labels = batch

        preds = network(images)
        loss = F.cross_entropy(preds,labels)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        total_loss += loss.item()
        total_correct += get_num_correct(preds,labels)

    # 折线图
    tb.add_scalar('Loss',total_loss,epoch)
    tb.add_scalar('Number Correct',total_correct,epoch)
    tb.add_scalar('Accuracy',total_correct/len(train_set),epoch)

    # 直方图
    # tb.add_histogram('conv1.bias',network.conv1.bias,epoch)
    # tb.add_histogram('conv1.weight',network.conv1.weight,epoch)
    # tb.add_histogram('conv1.weight.grad',network.conv1.weight.grad,epoch)

    for name, weight in network.named_parameters():
        tb.add_histogram(name,weight,epoch)
        tb.add_histogram(f'{name}.grad',weight.grad, epoch)

    print('epoch:',epoch,'total_correct:',total_correct,'loss:',total_loss)

tb.close()

在这里插入图片描述

在这里插入图片描述

4.3 完善及加强网络

  • 基础部分

    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    import torch.optim as optim
    import numpy as np
    
    import torchvision
    import torchvision.transforms as transforms
    
    torch.set_printoptions(linewidth=120)   # 展示输出选项
    torch.set_grad_enabled(True)    # 默认为True
    
    from torch.utils.tensorboard import SummaryWriter
    
    print(torch.__version__)
    print(torchvision.__version__)
    # 结果:1.12.0+cu116
    # 0.13.0+cu116
    
    def get_num_correct(preds,labels):
        return preds.argmax(dim=1).eq(labels).sum().item()
    
    class Network(nn.Module):
        def __init__(self):
            super(Network, self).__init__()
            self.conv1 = nn.Conv2d(in_channels=1,out_channels=6,kernel_size=5)
            self.conv2 = nn.Conv2d(in_channels=6,out_channels=12,kernel_size=5)
    
            self.fc1 = nn.Linear(in_features=12*4*4,out_features=120)
            self.fc2 = nn.Linear(in_features=120,out_features=60)
            self.out = nn.Linear(in_features=60,out_features=10)
    
        def forward(self, t):
            # (1) input layer
            t = t
    
            # (2) hidden conv layer
            t = self.conv1(t)
            t = F.relu(t)
            t = F.max_pool2d(t,kernel_size = 2,stride = 2)
    
            # (3) hidden conv layer
            t = self.conv2(t)
            t = F.relu(t)
            t = F.max_pool2d(t, kernel_size=2, stride=2)
    
            # (4) hidden liner layer
            t = t.reshape(-1,12 * 4 * 4)
            t = self.fc1(t)
            t = F.relu(t)
    
            # (5) hidden liner layer
            t = self.fc2(t)
            t = F.relu(t)
    
            # (6) output layer
            t = self.out(t)
            # t = F.softmax(t,dim=1)
    
            return t
    
    train_set = torchvision.datasets.FashionMNIST(
        root = "./data/FashionMNIST",
        train = True,
        download=True,
        transform=transforms.Compose([
            transforms.ToTensor()
        ])
    )
    
  • 训练部分

    from itertools import product
    
    # 加强网络
    parameters = dict(
        lr = [.01,.001],
        batch_size = [10,100,1000],
        shuffle = [True,False]
    )
    
    param_values = [v for v in parameters.values()]
    print(param_values)
    
    
    
    for lr, batch_size, shuffle in product(*param_values):
        print('lr=',lr,'batch_size=',batch_size,'shuffle=',shuffle)
    
        network = Network()
        train_loader = torch.utils.data.DataLoader(train_set, batch_size=batch_size, shuffle=shuffle)  # shuffle randomizes the sample order (use the grid value, not a hard-coded True)
        optimizer = optim.Adam(network.parameters(), lr=lr)
    
        images, labels = next(iter(train_loader))
        grid = torchvision.utils.make_grid(images)
    
        comment = f'batch_size = {batch_size} lr = {lr}'
        tb = SummaryWriter(comment=comment)
        tb.add_image('images', grid)
        tb.add_graph(network, images)  # visualize the network graph in TensorBoard
    
        for epoch in range(10):
    
            total_loss = 0
            total_correct = 0
    
            for batch in train_loader:
                images, labels = batch
    
                preds = network(images)
                loss = F.cross_entropy(preds, labels)
    
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
    
                total_loss += loss.item() * batch_size  # multiply by batch_size so losses are comparable across batch sizes; without it, the per-batch mean losses for 100 and 1000 would be similar in size
                total_correct += get_num_correct(preds, labels)
    
            # line charts
            tb.add_scalar('Loss', total_loss, epoch)
            tb.add_scalar('Number Correct', total_correct, epoch)
            tb.add_scalar('Accuracy', total_correct / len(train_set), epoch)
    
    
            for name, weight in network.named_parameters():
                tb.add_histogram(name, weight, epoch)
                tb.add_histogram(f'{name}.grad', weight.grad, epoch)
    
            print('epoch:', epoch, 'total_correct:', total_correct, 'loss:', total_loss)
    
    tb.close()
    
  • The RunBuilder class

    from collections import OrderedDict
    from collections import namedtuple
    from itertools import product
    
    class RunBuilder():
        @staticmethod
        def get_runs(params):
            Run = namedtuple('Run', params.keys())
    
            runs = []
            for v in product(*params.values()):
                runs.append(Run(*v))
    
            return runs
    
    # test
    params = OrderedDict(
        lr = [.01,.001],
        batch_size = [1000,10000]
    )
    
    runs = RunBuilder.get_runs(params)
    print(runs)
    # 结果:[Run(lr=0.01, batch_size=1000), Run(lr=0.01, batch_size=10000), Run(lr=0.001, batch_size=1000), Run(lr=0.001, batch_size=10000)]
    
    run= runs[0]
    print(run)
    # 结果:Run(lr=0.01, batch_size=1000)
    
    print(run.lr,run.batch_size)
    # 结果:0.01 1000
    
    for run in runs:
        print(run,run.lr,run.batch_size)
        # 结果:Run(lr=0.01, batch_size=1000) 0.01 1000
        # Run(lr=0.01, batch_size=10000) 0.01 10000
        # Run(lr=0.001, batch_size=1000) 0.001 1000
        # Run(lr=0.001, batch_size=10000) 0.001 10000
    
    params = OrderedDict(
        lr = [.01,.001],
        batch_size = [1000,10000]
    )
    runs = RunBuilder.get_runs(params)
    print(runs)
    # 结果:[Run(lr=0.01, batch_size=1000), Run(lr=0.01, batch_size=10000), Run(lr=0.001, batch_size=1000), Run(lr=0.001, batch_size=10000)]
    
    params = OrderedDict(
        lr = [.01,.001],
        batch_size = [1000,10000],
        device = ["cuda","cpu"]
    )
    runs = RunBuilder.get_runs(params)
    print(runs)
    # 结果:[Run(lr=0.01, batch_size=1000, device='cuda'),
    # Run(lr=0.01, batch_size=1000, device='cpu'),
    # Run(lr=0.01, batch_size=10000, device='cuda'),
    # Run(lr=0.01, batch_size=10000, device='cpu'),
    # Run(lr=0.001, batch_size=1000, device='cuda'),
    # Run(lr=0.001, batch_size=1000, device='cpu'),
    # Run(lr=0.001, batch_size=10000, device='cuda'),
    # Run(lr=0.001, batch_size=10000, device='cpu')]
    
    • Before
      for lr, batch_size, shuffle in product(*param_values):
          print('lr=',lr,'batch_size=',batch_size,'shuffle=',shuffle)
          comment = f'batch_size = {batch_size} lr = {lr}'
      
    • After
      for run in RunBuilder.get_runs(params):
          print(run)
          comment = f'-{run}'
      
  • The RunManager class

    import json
    import time

    import torch
    import pandas as pd
    import torchvision.utils
    from collections import OrderedDict
    from torch.utils.tensorboard import SummaryWriter
    
    
    class RunManager():
    
        def __init__(self):
    
            self.epoch_count = 0
            self.epoch_loss = 0
            self.epoch_num_correct = 0
            self.epoch_start_time = 0
    
            self.run_params =None
            self.run_count = 0
            self.run_data = []
            self.run_start_time = None
    
            self.network = None
            self.loader = None
            self.tb = None
    
        def begin_run(self,run,network,loader):
    
            self.run_start_time = time.time()
    
            self.run_params = run
            self.run_count += 1
    
            self.network = network
            self.loader = loader
            self.tb = SummaryWriter(comment= f'-{run}')
    
            images,labels = next(iter(self.loader))
            grid = torchvision.utils.make_grid(images)
    
            self.tb.add_image('images',grid)
            self.tb.add_graph(self.network,images)
    
        def end_run(self):
            self.tb.close()
            self.epoch_count = 0
    
        def begin_epoch(self):
            self.epoch_start_time = time.time()
    
            self.epoch_count += 1
            self.epoch_loss = 0
            self.epoch_num_correct = 0
    
        def end_epoch(self):
    
            epoch_duration = time.time() - self.epoch_start_time
            run_duration = time.time() - self.run_start_time
    
            loss = self.epoch_loss/len(self.loader.dataset)
            accuracy = self.epoch_num_correct / len(self.loader.dataset)
    
            # line charts
            self.tb.add_scalar('Loss', loss, self.epoch_count)
            self.tb.add_scalar('Accuracy', accuracy, self.epoch_count)
    
            for name, weight in self.network.named_parameters():
                self.tb.add_histogram(name, weight, self.epoch_count)
                self.tb.add_histogram(f'{name}.grad', weight.grad, self.epoch_count)
    
            results = OrderedDict()
            results["run"] = self.run_count
            results["epoch"] = self.epoch_count
            results["loss"] = loss
            results["epoch duration"] = epoch_duration
            results["run duration"] = run_duration
            for k,v in self.run_params._asdict().items():results[k] = v
            self.run_data.append(results)
            df = pd.DataFrame.from_dict(self.run_data,orient='columns')
    
            # for Jupyter (requires: from IPython.display import clear_output, display)
            # clear_output(wait=True)
            # display(df)
    
        def track_loss(self,loss):
            self.epoch_loss += loss.item() * self.loader.batch_size
    
        def track_num_correct(self,preds,labels):
            self.epoch_num_correct += self._get_num_correct(preds,labels)
    
        @torch.no_grad()
        def _get_num_correct(self,preds,labels):
            return preds.argmax(dim=1).eq(labels).sum().item()
    
        def save(self,fileName):
    
            pd.DataFrame.from_dict(
                self.run_data,
                orient='columns'
            ).to_csv(f'{fileName}.csv')
    
            with open(f'{fileName}.json','w',encoding='utf-8') as f:
                json.dump(self.run_data,f,ensure_ascii=False,indent=4)
    
  • The finished program

    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    import torch.optim as optim
    import numpy as np
    import torchvision
    import torchvision.transforms as transforms
    from RunUtils import RunBuilder,RunManager
    from collections import OrderedDict
    
    torch.set_printoptions(linewidth=120)   # output display options
    torch.set_grad_enabled(True)    # True by default
    
    
    
    
    class Network(nn.Module):
        def __init__(self):
            super(Network, self).__init__()
            self.conv1 = nn.Conv2d(in_channels=1,out_channels=6,kernel_size=5)
            self.conv2 = nn.Conv2d(in_channels=6,out_channels=12,kernel_size=5)
    
            self.fc1 = nn.Linear(in_features=12*4*4,out_features=120)
            self.fc2 = nn.Linear(in_features=120,out_features=60)
            self.out = nn.Linear(in_features=60,out_features=10)
    
        def forward(self, t):
            # (1) input layer
            t = t
    
            # (2) hidden conv layer
            t = self.conv1(t)
            t = F.relu(t)
            t = F.max_pool2d(t,kernel_size = 2,stride = 2)
    
            # (3) hidden conv layer
            t = self.conv2(t)
            t = F.relu(t)
            t = F.max_pool2d(t, kernel_size=2, stride=2)
    
            # (4) hidden liner layer
            t = t.reshape(-1,12 * 4 * 4)
            t = self.fc1(t)
            t = F.relu(t)
    
            # (5) hidden liner layer
            t = self.fc2(t)
            t = F.relu(t)
    
            # (6) output layer
            t = self.out(t)
            # t = F.softmax(t,dim=1)
    
            return t
    
    train_set = torchvision.datasets.FashionMNIST(
        root = "./data/FashionMNIST",
        train = True,
        download=True,
        transform=transforms.Compose([
            transforms.ToTensor()
        ])
    )
    
    params = OrderedDict(
        lr = [.01],
        batch_size = [1000,2000]
    )
    
    m = RunManager()
    
    for run in RunBuilder.get_runs(params):
    
        network = Network()  # the class must be instantiated
        loader = torch.utils.data.DataLoader(train_set,batch_size=run.batch_size)
        optimizer = optim.Adam(network.parameters(),lr=run.lr)
    
        m.begin_run(run,network,loader)
        for epoch in range(10):
            m.begin_epoch()
            for batch in loader:
    
                images,labels= batch
                preds = network(images)
                loss = F.cross_entropy(preds,labels)
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
    
                m.track_loss(loss)
                m.track_num_correct(preds,labels)
            m.end_epoch()
        m.end_run()
    m.save('results')
    
    

4.4 Using multiprocessing to speed up training

  • Before
    params = OrderedDict(
        lr = [.01],
        batch_size = [1000,2000]
    )
    # ··· ···
    loader = torch.utils.data.DataLoader(train_set,batch_size=run.batch_size)
    
  • After
    params = OrderedDict(
        lr = [.01],
        batch_size = [10,1000,2000],
        num_workers = [0,1,2,4,8,16]
    )
    # ··· ···
    loader = torch.utils.data.DataLoader(train_set,batch_size=run.batch_size,num_workers=run.num_workers)
    

5. Switching between GPU and CPU

5.1 Moving to the GPU

t = torch.ones(1,1,28,28)
network = Network()

t = t.cuda()
network = network.cuda()

gpu_pred = network(t)
print(gpu_pred.device)
# 结果:cuda:0

5.2 Moving to the CPU

t = t.cpu()
network = network.cpu()

cpu_pred = network(t)
print(cpu_pred.device)
# 结果:cpu

5.3 Running computations with tensors

t1 = torch.tensor([
    [1,2],
    [3,4]
])

t2 = torch.tensor([
    [5,6],
    [7,8]
])

print(t1.device)
# 结果:cpu
print(t2.device)
# 结果:cpu

t1 = t1.to('cuda')
print(t1.device)
# 结果:cuda:0

try: t1+t2
except Exception as e:print(e)
# 结果:Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!

try: t2+t1
except Exception as e:print(e)
# 结果:Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!


t2 = t2.to('cuda')
print(t1+t2)
# 结果:tensor([[ 6,  8],
#         [10, 12]], device='cuda:0')

5.4 Running a neural network model

network = Network()

for name,param in network.named_parameters():
    print(name,'\t\t\t',param.shape)
    # 结果:conv1.weight 			 torch.Size([6, 1, 5, 5])
    # conv1.bias 			 torch.Size([6])
    # conv2.weight 			 torch.Size([12, 6, 5, 5])
    # conv2.bias 			 torch.Size([12])
    # fc1.weight 			 torch.Size([120, 192])
    # fc1.bias 			 torch.Size([120])
    # fc2.weight 			 torch.Size([60, 120])
    # fc2.bias 			 torch.Size([60])
    # out.weight 			 torch.Size([10, 60])
    # out.bias 			 torch.Size([10])

for n,p in network.named_parameters():
    print(p.device,'',n)
    # 结果:cpu  conv1.weight
    # cpu  conv1.bias
    # cpu  conv2.weight
    # cpu  conv2.bias
    # cpu  fc1.weight
    # cpu  fc1.bias
    # cpu  fc2.weight
    # cpu  fc2.bias
    # cpu  out.weight
    # cpu  out.bias

network.to('cuda')

for n,p in network.named_parameters():
    print(p.device,'',n)
    # 结果:cuda:0  conv1.weight
    # cuda:0  conv1.bias
    # cuda:0  conv2.weight
    # cuda:0  conv2.bias
    # cuda:0  fc1.weight
    # cuda:0  fc1.bias
    # cuda:0  fc2.weight
    # cuda:0  fc2.bias
    # cuda:0  out.weight
    # cuda:0  out.bias

sample = torch.ones(1,1,28,28)
print(sample.shape)
# 结果:torch.Size([1, 1, 28, 28])

try:network(sample)
except Exception as e: print(e)
# 结果:Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same or input should be a MKLDNN tensor and weight is a dense tensor

try:
    pred = network(sample.to('cuda'))
    print(pred)
except Exception as e:
    print(e)
# 结果:tensor([[-0.0247, -0.0666,  0.0730, -0.1417, -0.1030, -0.1102,  0.0610, -0.1137,
#          -0.0815,  0.1488]], device='cuda:0', grad_fn=<AddmmBackward0>)

5.5 Inspecting CUDA information

# is CUDA available?
print(torch.cuda.is_available())
# 结果:True

# number of CUDA devices
print(torch.cuda.device_count())
# 结果:1

# index of the CUDA device currently in use
print(torch.cuda.current_device())
# 结果:0

# name of the GPU device
print(torch.cuda.get_device_name())
# 结果:NVIDIA GeForce RTX 3060 Laptop GPU

# compute capability of the device (major, minor)
print(torch.cuda.get_device_capability(0))
# 结果:(8, 6)
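The checks above are commonly combined into a device-agnostic setup; the following is a minimal sketch that falls back to the CPU when no GPU is present:

```python
import torch

# pick the GPU when available, otherwise fall back to the CPU
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

t = torch.ones(2, 2).to(device)
print(t.device)  # cuda:0 on a machine with a GPU, cpu otherwise
```

Because every tensor and module is moved through the same `device` variable, the same script runs unchanged on both kinds of machines.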

5.6 Using the GPU: a test

  • Modified RunManager class
    class RunManager():
    
        def __init__(self):
    
            self.epoch_count = 0
            self.epoch_loss = 0
            self.epoch_num_correct = 0
            self.epoch_start_time = 0
    
            self.run_params =None
            self.run_count = 0
            self.run_data = []
            self.run_start_time = None
    
            self.network = None
            self.loader = None
            self.tb = None
        def begin_run(self,run,network,loader):
    
            self.run_start_time = time.time()
    
            self.run_params = run
            self.run_count += 1
    
            self.network = network
            self.loader = loader
            self.tb = SummaryWriter(comment= f'-{run}')
    
            images,labels = next(iter(self.loader))
            grid = torchvision.utils.make_grid(images)
    
            self.tb.add_image('images',grid)
            
            # modified here!!!
            self.tb.add_graph(self.network,
                              images.to(getattr(run,'device','cpu')))
    
        def end_run(self):
            self.tb.close()
            self.epoch_count = 0
    
        def begin_epoch(self):
            self.epoch_start_time = time.time()
    
            self.epoch_count += 1
            self.epoch_loss = 0
            self.epoch_num_correct = 0
    
        def end_epoch(self):
    
            epoch_duration = time.time() - self.epoch_start_time
            run_duration = time.time() - self.run_start_time
    
            loss = self.epoch_loss/len(self.loader.dataset)
            accuracy = self.epoch_num_correct / len(self.loader.dataset)
    
            # line charts
            self.tb.add_scalar('Loss', loss, self.epoch_count)
            self.tb.add_scalar('Accuracy', accuracy, self.epoch_count)
    
            for name, weight in self.network.named_parameters():
                self.tb.add_histogram(name, weight, self.epoch_count)
                self.tb.add_histogram(f'{name}.grad', weight.grad, self.epoch_count)
    
            results = OrderedDict()
            results["run"] = self.run_count
            results["epoch"] = self.epoch_count
            results["loss"] = loss
            results["epoch duration"] = epoch_duration
            results["run duration"] = run_duration
            # if params contains additional key-value pairs, this loop adds those custom pairs to results as well
            for k,v in self.run_params._asdict().items():results[k] = v
            self.run_data.append(results)
            df = pd.DataFrame.from_dict(self.run_data,orient='columns')
            # to sort by epoch duration, ascending:
            # df = pd.DataFrame.from_dict(self.run_data,orient='columns').sort_values('epoch duration')
            
            # for PyCharm (requires `import os` at the top of the file)
            os.system('cls')  # clear the Python console
            print(df)
    
            # for Jupyter
            # clear_output(wait=True)
            # display(df)
    
        def track_loss(self,loss):
            self.epoch_loss += loss.item() * self.loader.batch_size
    
        def track_num_correct(self,preds,labels):
            self.epoch_num_correct += self._get_num_correct(preds,labels)
    
        @torch.no_grad()
        def _get_num_correct(self,preds,labels):
            return preds.argmax(dim=1).eq(labels).sum().item()
    
        def save(self,fileName):
    
            pd.DataFrame.from_dict(
                self.run_data,
                orient='columns'
            ).to_csv(f'{fileName}.csv')
    
            with open(f'{fileName}.json','w',encoding='utf-8') as f:
                json.dump(self.run_data,f,ensure_ascii=False,indent=4)
    
  • Modified training code
    params = OrderedDict(
        lr = [.01],
        batch_size = [1000,10000,20000],
        num_workers = [0,1],
        device = ['cuda','cpu']
    )
    m = RunManager()
    for run in RunBuilder.get_runs(params):
    
        device = torch.device(run.device)
        network = Network().to(device)
        loader = torch.utils.data.DataLoader(train_set,batch_size=run.batch_size,num_workers = run.num_workers)
        optimizer = optim.Adam(network.parameters(),lr=run.lr)
    
        m.begin_run(run,network,loader)
        for epoch in range(5):
            m.begin_epoch()
            for batch in loader:
    
                images = batch[0].to(device)
                labels = batch[1].to(device)
                preds = network(images)
                loss = F.cross_entropy(preds,labels)
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
    
                m.track_loss(loss)
                m.track_num_correct(preds,labels)
            m.end_epoch()
        m.end_run()
    m.save('results')
    

6. Data normalization

6.1 Understanding

  • Data normalization: transforming the values of every sample in a dataset into a new set of values.

  • Normalization: maps a column of data into a fixed interval, usually [0, 1]. More broadly it can be any interval: values mapped to [0, 1] can be mapped onward to other ranges, e.g. [0, 255] for images or [-1, 1] in other cases.
    $\frac{X_i-X_{min}}{X_{max}-X_{min}}$

  • Standardization: transforms the data to a distribution with mean 0 and standard deviation 1; the transformed data still follows the shape of the original distribution.
    $\frac{X_i-\mu}{\sigma}$

  • Relationship and differences

    • Both standardization and normalization are, at heart, linear transformations that do not change the order of the data. Their biggest difference: normalization confines the original data to a fixed interval, while standardization rescales the data to a distribution with mean 0 and standard deviation 1.
    • Moreover, normalization depends only on the maximum and minimum of the data, with scale factor α = Xmax − Xmin; for standardization the scale factor is the standard deviation (α = σ) and the shift is the mean (β = μ), so its scale and shift change whenever any data point other than the extremes changes.
  • When to use standardization and normalization

    • Both are essentially forms of feature scaling. Algorithms based on distances between data points depend on feature scaling, e.g. clustering, distance-based classifiers (KNN, logistic regression, SVM) and linear regression; most algorithms that do not rely on distances do not need it.
      1. In statistical modeling, e.g. regression models, inconsistent units among the inputs X make the regression coefficients impossible to interpret directly (or easy to misinterpret); the inputs must be brought to a common scale before they are comparable.
      2. Many machine learning and statistics tasks compute "distances", e.g. PCA, KNN, k-means. With Euclidean distance, differing feature scales can make the computation depend mostly on the large-scale features and give unreasonable results.
      3. When estimating parameters with gradient descent, normalizing/standardizing first speeds up the optimization, i.e. the model converges faster.
    • If the data must lie in a strict range, use normalization.
    • Standardization is more common in ML; if the data are unstable, with extreme maxima or minima, do not use normalization. For classification and clustering algorithms that measure similarity with distances, or when reducing dimensionality with PCA, standardization performs better; when no distance metric or covariance computation is involved, normalization works too.
  • Batch Normalization: normalizing each batch of data

    • Benefits of BN
      1. Faster training
      2. Can reduce the need for regularization methods such as dropout and L1/L2
      3. Higher training accuracy
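The two formulas above can be checked directly on a small tensor; a minimal sketch (the values are arbitrary):

```python
import torch

x = torch.tensor([1., 2., 3., 4., 5.])

# normalization (min-max): maps the values into [0, 1]
x_norm = (x - x.min()) / (x.max() - x.min())
print(x_norm)   # tensor([0.0000, 0.2500, 0.5000, 0.7500, 1.0000])

# standardization (z-score): mean 0, standard deviation 1
x_std = (x - x.mean()) / x.std(unbiased=False)
print(x_std.mean(), x_std.std(unbiased=False))  # ~0 and ~1
```

`unbiased=False` uses the population standard deviation (the 1/m form of the formula above); `transforms.Normalize` later in this section applies the same z-score transform per channel.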

6.2 Standardizing the dataset

import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import torchvision
import torchvision.transforms as transforms

from torch.utils.data import DataLoader
import matplotlib.pyplot as plt



train_set = torchvision.datasets.FashionMNIST(
    root = "./data/FashionMNIST",
    train = True,
    download=True,
    transform=transforms.Compose([
        transforms.ToTensor()
        # normalization will be added here later
    ])
)
  • Computing the mean and standard deviation

    • Easy way

      # easy way: load the whole dataset as a single batch
      loader = DataLoader(train_set,batch_size=len(train_set),num_workers=0)
      data = next(iter(loader))
      print("mean:",data[0].mean(),"std:",data[0].std())
      # 结果:mean: tensor(0.2860) std: tensor(0.3530)
      
    • Hard way

      # hard way: accumulate the sums batch by batch
      loader = DataLoader(train_set,batch_size=1000,num_workers=0)
      num_of_pixels = len(train_set) * 28 * 28

      total_sum = 0
      for batch in loader: total_sum += batch[0].sum()
      mean = total_sum / num_of_pixels

      sum_of_squared_error = 0
      for batch in loader: sum_of_squared_error += ((batch[0]-mean).pow(2)).sum()
      std = torch.sqrt(sum_of_squared_error/num_of_pixels)

      print("mean:",mean,"std:",std)
      # 结果:mean: tensor(0.2860) std: tensor(0.3530)
      
    • Plotting the values

      # plotting the values; note: this can easily exhaust memory
      plt.hist(data[0].flatten())
      plt.axvline(data[0].mean())
      plt.show()
      
    • Using the mean and std

      train_set = torchvision.datasets.FashionMNIST(
          root = "./data/FashionMNIST",
          train = True,
          download=True,
          transform=transforms.Compose([
              transforms.ToTensor(),
              # standardize our data
              transforms.Normalize(mean,std)
          ])
      )
      loader = DataLoader(train_set,batch_size=len(train_set),num_workers=0)
      data = next(iter(loader))
      print("mean:",data[0].mean(),"std:",data[0].std())
      # 结果:mean: tensor(-1.8842e-07) std: tensor(1.)
      
  • Speed comparison: standardized vs. unstandardized

    from model import Network
    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    import torch.optim as optim
    import numpy as np
    from torch.utils.data import DataLoader
    
    import torchvision
    import torchvision.transforms as transforms
    from collections import OrderedDict
    from RunUtils import RunBuilder,RunManager
    
    
    train_set = torchvision.datasets.FashionMNIST(
        root = r"G:\developer_tools\Python\Project\Learning\data\FashionMNIST",
        train = True,
        download=False,
        transform=transforms.Compose([
            transforms.ToTensor()
        ])
    )
    loader = DataLoader(train_set,batch_size=len(train_set),num_workers=0)
    data = next(iter(loader))
    mean = data[0].mean()
    std = data[0].std()
    
    train_set_normal = torchvision.datasets.FashionMNIST(
        root = r"G:\developer_tools\Python\Project\Learning\data\FashionMNIST",
        train = True,
        download=False,
        transform=transforms.Compose([
            transforms.ToTensor(),
            transforms.Normalize(mean, std)
        ])
    )
    
    trainsets = {
        'not_normal':train_set,
        'normal':train_set_normal
    }
    
    params = OrderedDict(
        lr = [.01],
        batch_size = [1000,10000,20000],
        num_workers = [0],
        device = ['cuda'],
        trainset = ['not_normal','normal']
    )
    m = RunManager()
    for run in RunBuilder.get_runs(params):
    
        device = torch.device(run.device)
        network = Network().to(device)
        loader = torch.utils.data.DataLoader(trainsets[run.trainset],batch_size=run.batch_size,num_workers = run.num_workers)
        optimizer = optim.Adam(network.parameters(),lr=run.lr)
    
        m.begin_run(run,network,loader)
        for epoch in range(1):
            m.begin_epoch()
            for batch in loader:
    
                images = batch[0].to(device)
                labels = batch[1].to(device)
                preds = network(images)
                loss = F.cross_entropy(preds,labels)
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
    
                m.track_loss(loss)
                m.track_num_correct(preds,labels)
            m.end_epoch()
        m.end_run()
    m.save('results')
    


7. Building a sequential neural network

  • Getting the parameters the network needs

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import numpy as np
from torch.utils.data import DataLoader

import torchvision
import torchvision.transforms as transforms
from collections import OrderedDict
from RunUtils import RunBuilder,RunManager
import matplotlib.pyplot as plt
import math

torch.set_printoptions(linewidth=150)

train_set = torchvision.datasets.FashionMNIST(
    root = r"G:\developer_tools\Python\Project\Learning\data\FashionMNIST",
    train = True,
    download=False,
    transform=transforms.Compose([
        transforms.ToTensor()
    ])
)

image ,label = train_set[0]
print(image.shape)
# 结果:torch.Size([1, 28, 28])

plt.imshow(image.squeeze(),cmap='gray')
plt.show()

print(train_set.classes)
# 结果:['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']

in_features = image.numel()
print(in_features)
# 结果:784

out_features = math.floor(in_features / 2)
print(out_features)
# 结果:392

out_classes = len(train_set.classes)
print(out_classes)
# 结果:10
  • Method 1

    network1 = nn.Sequential(
        nn.Flatten(start_dim=1),
        nn.Linear(in_features,out_features),
        nn.Linear(out_features,out_classes)
    )
    
    print(network1)
    # 结果:Sequential(
    #   (0): Flatten(start_dim=1, end_dim=-1)
    #   (1): Linear(in_features=784, out_features=392, bias=True)
    #   (2): Linear(in_features=392, out_features=10, bias=True)
    # )
    
    image = image.unsqueeze(0)
    print(image.shape)
    # 结果:torch.Size([1, 1, 28, 28])
    
    print(network1(image))
    # 结果:tensor([[-0.2623,  0.1098,  0.2696, -0.0266,  0.0764, -0.0861, -0.2789,  0.2457,  0.0557,  0.1501]], grad_fn=<AddmmBackward0>)
    
  • Method 2

    layers = OrderedDict([
        ('flat',nn.Flatten(start_dim=1)),
        ('hidden',nn.Linear(in_features,out_features)),
        ('output',nn.Linear(out_features,out_classes)),
    ])
    
    network2 = nn.Sequential(layers)
    print(network2)
    # 结果:Sequential(
    #   (flat): Flatten(start_dim=1, end_dim=-1)
    #   (hidden): Linear(in_features=784, out_features=392, bias=True)
    #   (output): Linear(in_features=392, out_features=10, bias=True)
    # )
    
    print(network2(image))
    # 结果:tensor([[-0.2469, -0.1905,  0.1655, -0.0973,  0.0546, -0.2897,  0.3867, -0.0541,  0.0536, -0.0903]], grad_fn=<AddmmBackward0>) 
    
  • Method 3

    network3 = nn.Sequential()
    network3.add_module('flat',nn.Flatten(start_dim=1))
    network3.add_module('hidden',nn.Linear(in_features,out_features))
    network3.add_module('output',nn.Linear(out_features,out_classes))
    print(network3)
    # 结果:Sequential(
    #   (flat): Flatten(start_dim=1, end_dim=-1)
    #   (hidden): Linear(in_features=784, out_features=392, bias=True)
    #   (output): Linear(in_features=392, out_features=10, bias=True)
    # )
    print(network3(image))
    # 结果:tensor([[ 0.0050,  0.2166, -0.0427,  0.3739, -0.0101, -0.0700, -0.0909,  0.1259, -0.0730, -0.2314]], grad_fn=<AddmmBackward0>)
    
  • Setting the CPU random seed

    # set the same random seed so the random initial weights are identical across the networks
    torch.manual_seed(50)
    network1 = nn.Sequential(
        nn.Flatten(start_dim=1),
        nn.Linear(in_features,out_features),
        nn.Linear(out_features,out_classes)
    )
    
    torch.manual_seed(50)
    layers = OrderedDict([
        ('flat',nn.Flatten(start_dim=1)),
        ('hidden',nn.Linear(in_features,out_features)),
        ('output',nn.Linear(out_features,out_classes)),
    ])
    network2 = nn.Sequential(layers)
    
    torch.manual_seed(50)
    network3 = nn.Sequential()
    network3.add_module('flat',nn.Flatten(start_dim=1))
    network3.add_module('hidden',nn.Linear(in_features,out_features))
    network3.add_module('output',nn.Linear(out_features,out_classes))
    
    print("network1:",network1(image),"network2:",network2(image),"network3:",network3(image))
    # 结果:network1: tensor([[ 0.1681,  0.1028, -0.0790, -0.0659, -0.2436,  0.1328, -0.0864,  0.0016,  0.1819, -0.0168]], grad_fn=<AddmmBackward0>) 
    # network2: tensor([[ 0.1681,  0.1028, -0.0790, -0.0659, -0.2436,  0.1328, -0.0864,  0.0016,  0.1819, -0.0168]], grad_fn=<AddmmBackward0>) 
    # network3: tensor([[ 0.1681,  0.1028, -0.0790, -0.0659, -0.2436,  0.1328, -0.0864,  0.0016,  0.1819, -0.0168]], grad_fn=<AddmmBackward0>)
    
  • Building the network as a class

    class Network(nn.Module):
        def __init__(self):
            super(Network, self).__init__()
            self.conv1 = nn.Conv2d(in_channels=1,out_channels=6,kernel_size=5)
            self.conv2 = nn.Conv2d(in_channels=6,out_channels=12,kernel_size=5)
    
            self.fc1 = nn.Linear(in_features=12*4*4,out_features=120)
            self.fc2 = nn.Linear(in_features=120,out_features=60)
            self.out = nn.Linear(in_features=60,out_features=10)
    
        def forward(self, t):
            # (1) input layer
            t = t
    
            # (2) hidden conv layer
            t = self.conv1(t)
            t = F.relu(t)
            t = F.max_pool2d(t,kernel_size = 2,stride = 2)
    
            # (3) hidden conv layer
            t = self.conv2(t)
            t = F.relu(t)
            t = F.max_pool2d(t, kernel_size=2, stride=2)
    
            # (4) hidden linear layer
            t = t.reshape(-1,12 * 4 * 4)
            t = self.fc1(t)
            t = F.relu(t)

            # (5) hidden linear layer
            t = self.fc2(t)
            t = F.relu(t)
    
            # (6) output layer
            t = self.out(t)
            # t = F.softmax(t,dim=1)
    
            return t
    

8. Understanding Batch Norm and Layer Norm

  • BatchNorm:

    • Understanding: normalize the same feature across the different samples of a batch to a standard normal distribution. If R, G and B each count as one feature, then R is normalized across all samples in the batch, and likewise G and B. In other words, normalization is performed per channel.

    • Role: BatchNorm is a neural network layer used widely across architectures. It is usually added as part of a linear or convolutional block and helps stabilize the network during training.

    • Internal Covariate Shift: deep networks are often hard to train because, after every parameter update, the previous layer's output, once passed through the current layer, changes its distribution, which makes learning difficult for the next layer.

    • Covariate Shift: describes the effect of differing distributions between training data and test data on the network's generalization and training speed; the usual countermeasures are normalization or whitening.

    • The problem BatchNorm solves: to reduce Internal Covariate Shift, one might simply normalize every layer of the network, mapping each layer's output to zero mean and unit variance so that it follows a standard normal distribution. But then every layer's data distribution would be a standard normal distribution, and the network could not learn anything about the input's features, because the painstakingly learned feature distributions would be normalized away. So directly normalizing every layer is clearly unreasonable.

    • BatchNorm steps:

      • First compute the mean of this batch of data x

      $\mu_{\beta} = \frac{1}{m}\sum_{i=1}^{m} x_i$

      • Compute the variance of this batch

      $\sigma_{\beta}^2 = \frac{1}{m}\sum_{i=1}^{m}(x_i-\mu_{\beta})^2$

      • Next, standardize x to obtain $\hat{x}_i$:

      $\hat{x}_i = \frac{x_i-\mu_{\beta}}{\sqrt{\sigma_{\beta}^2+\epsilon}}$ (where $\epsilon$ is a small constant for numerical stability)

      • Most importantly, introduce the learnable scale and shift parameters γ and β and compute the normalized output

      $y_i = \gamma \hat{x}_i + \beta$, where γ is the weight and β the bias of the BatchNorm layer

      • A closer look at these two extra parameters: as noted above, if we only normalized and did nothing else, the network would learn nothing. With these two parameters things change. Consider the special case where γ and β equal the batch's standard deviation and mean: then y_i is restored to the pre-normalization x, i.e. scaled and shifted back to the original distribution, as if BatchNorm had done nothing. γ and β are therefore called the scale and shift parameters. This guarantees that the data still retains the learned features after each normalization, while the normalization itself speeds up training.
  • LayerNorm:

    • Understanding: normalize the different features of the same sample to a standard normal distribution. If R, G and B each count as one feature, then R, G and B of a single sample are normalized together. In other words, normalization is performed across channels.
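The per-channel vs. per-sample difference can be seen by normalizing the same tensor with both layers; a minimal sketch (the affine parameters are disabled so only the normalization itself is visible):

```python
import torch
import torch.nn as nn

x = torch.randn(4, 3, 8, 8)  # (batch, channels, H, W)

# BatchNorm2d: one mean/std per CHANNEL, taken over (batch, H, W)
bn = nn.BatchNorm2d(3, affine=False)
y_bn = bn(x)                  # training mode: uses batch statistics
print(y_bn[:, 0].mean())      # per-channel mean is ~0 across the batch

# LayerNorm: one mean/std per SAMPLE, taken over (channels, H, W)
ln = nn.LayerNorm([3, 8, 8], elementwise_affine=False)
y_ln = ln(x)
print(y_ln[0].mean())         # per-sample mean is ~0
```

With `affine=True` (the default), both layers would additionally apply the learnable γ (weight) and β (bias) discussed above.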

9. Saving files

9.1 Saving and loading a custom model

  • Saving both parameters and model: this saves and loads the whole network structure directly; it is rigid and the network structure cannot be adjusted afterwards.
device = 'cuda:0' if torch.cuda.is_available() else 'cpu'
'''save the model'''
torch.save(model, 'model.pkl')
'''load the model'''
model = torch.load('model.pkl', map_location=device)
  • Saving parameters only: this requires defining the network structure yourself before the parameters can be loaded, and the defined network's parameter names and structure must match the saved model (it can also be a partial network, e.g. only the first few layers). It is more flexible and makes it easy to modify the network.
device = 'cuda:0' if torch.cuda.is_available() else 'cpu'
# define the network
model = nn.RNN(input_size=10, hidden_size=20) # example: an RNN model structure (the sizes are illustrative)
'''save the model parameters'''
torch.save(model.state_dict(), 'model_param.pkl')

# load the saved parameters into the model structure
model.load_state_dict(torch.load('model_param.pkl', map_location=device))

9.2 Loading a pretrained model

9.2.1 Pretrained network structure == custom network structure
device = 'cuda:0' if torch.cuda.is_available() else 'cpu'
path = "path to the pretrained model"
model = CJK_MODEL()
# load the parameters; load_state_dict modifies the model in place, so do not reassign model
model.load_state_dict(torch.load(path, map_location=device))
9.2.2 Pretrained network structure differs from the custom network structure

1. First print the layer names of both network models

	device = 'cuda:0' if torch.cuda.is_available() else 'cpu'
	'''print the layer names of the custom model'''
	model_dict = model.state_dict()
	print(model_dict.keys())
	'''print the layer names of the pretrained model'''
	checkpoint = torch.load('./model_param.pkl', map_location=device)
	for k, v in checkpoint.items():
		print("keys:",k)

2. Compare the parameters of the two structures; if they differ too much, there is little point in reusing them

  • If the parameter layer names match exactly:
	model.load_state_dict(checkpoint, strict=True)
	'''
	With strict=True (the default), load_state_dict requires every key of
	the pretrained checkpoint to exactly match the keys returned by the
	custom network's state_dict(), and raises an error otherwise.
	Passing strict=False instead copies the matching keys and simply
	skips the ones that do not match.
	'''
  • If most of the parameter layer names are nearly identical:

    For example, a parameter layer in the custom network is named backbone.stage0.rbr_dense.conv.weight,

    while the same layer in the pretrained model is named stage0.rbr_dense.conv.weight. The two names largely agree, so the pretrained stage0.rbr_dense.conv.weight can be read into the network's backbone.stage0.rbr_dense.conv.weight.
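One way to handle such a prefix mismatch is to rename the checkpoint keys before loading. A sketch with hypothetical toy modules (`Backbone`, `Custom` and the layer sizes are made up for illustration; in practice the checkpoint would come from a file):

```python
import torch
import torch.nn as nn

class Backbone(nn.Module):          # stands in for the pretrained network
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

class Custom(nn.Module):            # wraps the backbone under a new name
    def __init__(self):
        super().__init__()
        self.backbone = Backbone()

pretrained_sd = Backbone().state_dict()   # keys: 'fc.weight', 'fc.bias'
model = Custom()                          # keys: 'backbone.fc.weight', ...

# add the missing 'backbone.' prefix so the keys match exactly
renamed = {f'backbone.{k}': v for k, v in pretrained_sd.items()}
model.load_state_dict(renamed, strict=True)
```

For partial matches, filter `renamed` down to the keys that exist in `model.state_dict()` with equal shapes, and pass `strict=False`.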
