PyTorch Official Tutorial 01 - QUICKSTART

I am currently working on a deep-learning acceleration project and my fundamentals are still shaky, so I'm starting from the basics of model training. Jupyter plus Markdown seems to let me run the demo code and take notes at the same time, and the notes can be uploaded to CSDN for later review, which should be quite helpful for someone just getting started with deep learning. So let's give it a try and see how it goes.

Working with data

PyTorch has two primitives for working with data:

  1. Dataset: stores the samples and their corresponding labels
  2. DataLoader: wraps an iterable around the Dataset
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets
from torchvision.transforms import ToTensor, Lambda, Compose
import matplotlib.pyplot as plt
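As a minimal illustration of the two primitives, a map-style Dataset only needs __len__ and __getitem__, and a DataLoader can then batch over it. This is a sketch with made-up data; the ToyDataset name and the random tensors are my own, not from the tutorial.

from torch.utils.data import Dataset

class ToyDataset(Dataset):  # hypothetical example, not part of the tutorial
    def __init__(self, n=256):
        self.samples = torch.randn(n, 28, 28)     # fake "image" data
        self.labels = torch.randint(0, 10, (n,))  # fake class labels

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        return self.samples[idx], self.labels[idx]

toy_loader = DataLoader(ToyDataset(), batch_size=64)
xb, yb = next(iter(toy_loader))
print(xb.shape, yb.shape)  # torch.Size([64, 28, 28]) torch.Size([64])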

PyTorch offers domain-specific libraries such as TorchText, TorchVision, and TorchAudio; here we use TorchVision (see the torchvision imports above).

The torchvision.datasets module contains Dataset objects for many real-world vision datasets such as CIFAR and COCO; here we use FashionMNIST. Every TorchVision Dataset takes two arguments, transform and target_transform, which modify the samples and the labels respectively.
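For example, target_transform can be combined with the Lambda transform (imported above) to one-hot encode the integer labels; this sketch follows the tutorial's Transforms section and is not needed for the code below.

# One-hot encode label y (an int in 0-9) into a length-10 float tensor
target_transform = Lambda(
    lambda y: torch.zeros(10, dtype=torch.float).scatter_(0, torch.tensor(y), value=1)
)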

Here is the code to download the data:

# Download training data from open datasets.
training_data = datasets.FashionMNIST(
    root="data",
    train=True,
    download=True,
    transform=ToTensor(),
)

# Download test data from open datasets.
test_data = datasets.FashionMNIST(
    root="data",
    train=False,
    download=True,
    transform=ToTensor(),
)
C:\Users\yangtao2019.DESKTOP-MERE1TO\anaconda3\lib\site-packages\torchvision\datasets\mnist.py:498: UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors. This means you can write to the underlying (supposedly non-writeable) NumPy array using the tensor. You may want to copy the array to protect its data or make it writeable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at  ..\torch\csrc\utils\tensor_numpy.cpp:180.)
  return torch.from_numpy(parsed.astype(m[2], copy=False)).view(*s)

We pass the Dataset as an argument to DataLoader, which wraps an iterable around the dataset and supports automatic batching, sampling, transforms, and multiprocess data loading. Here we set the batch size to 64 (batch_size=64).

batch_size = 64

# Create data loaders.
train_dataloader = DataLoader(training_data, batch_size=batch_size)
test_dataloader = DataLoader(test_data, batch_size=batch_size)

for X, y in test_dataloader:
    print("Shape of X [N, C, H, W]: ", X.shape)
    print("Shape of y: ", y.shape, y.dtype)
    break
Shape of X [N, C, H, W]:  torch.Size([64, 1, 28, 28])
Shape of y:  torch.Size([64]) torch.int64
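The loaders above set only batch_size; the sampling and multiprocess features mentioned earlier come from further DataLoader arguments. A sketch of a shuffled, multi-worker loader (not used in this run):

# Reshuffle the training data every epoch and load batches in 2 worker processes
shuffled_loader = DataLoader(training_data, batch_size=batch_size,
                             shuffle=True, num_workers=2)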

Creating Models

To define a neural network in PyTorch, we create a class that inherits from nn.Module.
Three points to note:

  1. define the network's layers in the __init__ function;
  2. specify how data passes through the network (the forward pass) in the forward function;
  3. move the model to the GPU to speed up computation.
# Get cpu or gpu device for training.
device = "cuda" if torch.cuda.is_available() else "cpu"
print("Using {} device".format(device))

# Define model
class NeuralNetwork(nn.Module):
    def __init__(self):
        super(NeuralNetwork, self).__init__()
        self.flatten = nn.Flatten()
        self.linear_relu_stack = nn.Sequential(
            nn.Linear(28*28, 512),
            nn.ReLU(),
            nn.Linear(512, 512),
            nn.ReLU(),
            nn.Linear(512, 10),
            nn.ReLU()  # note: this trailing ReLU is kept from the tutorial; it is often omitted so the logits are unbounded
        )

    def forward(self, x):
        x = self.flatten(x)
        logits = self.linear_relu_stack(x)
        return logits

model = NeuralNetwork().to(device)
print(model)
Using cuda device
NeuralNetwork(
  (flatten): Flatten(start_dim=1, end_dim=-1)
  (linear_relu_stack): Sequential(
    (0): Linear(in_features=784, out_features=512, bias=True)
    (1): ReLU()
    (2): Linear(in_features=512, out_features=512, bias=True)
    (3): ReLU()
    (4): Linear(in_features=512, out_features=10, bias=True)
    (5): ReLU()
  )
)

*Notes:

  • When subclassing nn.Module, super().__init__() must be called so the parent class is initialized; super(CurrentClass, self).method() invokes the parent class's method (in Python 3 this shortens to super().__init__()). See the documentation of Python's super() function for details.
  • The "linear relu stack" is indeed just fully connected layers: Linear layers interleaved with ReLU activations, i.e. a multilayer perceptron. An equivalent hand-written version is sketched below.
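To make the second note concrete, here is an equivalent model with the fully connected layers wired up by hand instead of through nn.Sequential (illustration only; the class name is my own):

class NeuralNetworkExplicit(nn.Module):
    def __init__(self):
        super().__init__()
        self.flatten = nn.Flatten()
        self.fc1 = nn.Linear(28*28, 512)  # fully connected layers
        self.fc2 = nn.Linear(512, 512)
        self.fc3 = nn.Linear(512, 10)
        self.relu = nn.ReLU()

    def forward(self, x):
        x = self.flatten(x)
        x = self.relu(self.fc1(x))
        x = self.relu(self.fc2(x))
        return self.relu(self.fc3(x))  # trailing ReLU kept to match the model above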

Optimizing the Model Parameters

To train a model we need:

  1. a loss function
  2. an optimizer (which applies the parameter updates during backpropagation)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
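nn.CrossEntropyLoss expects raw logits of shape [N, 10] and integer class labels of shape [N]. A quick sanity check with random tensors (illustrative only):

dummy_logits = torch.randn(4, 10)          # raw scores: 4 samples, 10 classes
dummy_labels = torch.randint(0, 10, (4,))  # integer class indices
print(loss_fn(dummy_logits, dummy_labels)) # a scalar loss tensor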

Next we define the training function. In each loop iteration, the model makes predictions on a batch from the training set and adjusts its parameters by backpropagating the prediction error.

def train(dataloader, model, loss_fn, optimizer):
    size = len(dataloader.dataset)
    model.train()  # training mode; needed because test() below switches the model to eval mode
    for batch, (X, y) in enumerate(dataloader):
        X, y = X.to(device), y.to(device)

        # Compute prediction error
        pred = model(X)
        loss = loss_fn(pred, y)

        # Backpropagation
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        if batch % 100 == 0:
            loss, current = loss.item(), batch * len(X)
            print(f"loss: {loss:>7f}  [{current:>5d}/{size:>5d}]")

Next we define the test function, which checks the model's performance on the test set.

def test(dataloader, model, loss_fn):
    size = len(dataloader.dataset)
    num_batches = len(dataloader)
    model.eval()
    test_loss, correct = 0, 0
    with torch.no_grad():
        for X, y in dataloader:
            X, y = X.to(device), y.to(device)
            pred = model(X)
            test_loss += loss_fn(pred, y).item()
            correct += (pred.argmax(1) == y).type(torch.float).sum().item()
    test_loss /= num_batches
    correct /= size
    print(f"Test Error: \n Accuracy: {(100*correct):>0.1f}%, Avg loss: {test_loss:>8f} \n")

The whole training process runs over several iterations called epochs; each epoch performs the forward and backward passes over the training set and then an evaluation pass. We print the accuracy and average loss at every epoch so we can watch the accuracy climb and the loss shrink as the iterations go on.

epochs = 10
for t in range(epochs):
    print(f"Epoch {t+1}\n-------------------------------")
    train(train_dataloader, model, loss_fn, optimizer)
    test(test_dataloader, model, loss_fn)
print("Done!")
Epoch 1
-------------------------------
loss: 2.301066  [    0/60000]
loss: 2.295875  [ 6400/60000]
loss: 2.291058  [12800/60000]
loss: 2.288681  [19200/60000]
loss: 2.272175  [25600/60000]
loss: 2.280452  [32000/60000]
loss: 2.272982  [38400/60000]
loss: 2.263768  [44800/60000]
loss: 2.253188  [51200/60000]
loss: 2.268080  [57600/60000]
Test Error: 
 Accuracy: 44.0%, Avg loss: 2.240173 

Epoch 2
-------------------------------
loss: 2.226811  [    0/60000]
loss: 2.207108  [ 6400/60000]
loss: 2.194875  [12800/60000]
loss: 2.230848  [19200/60000]
loss: 2.163202  [25600/60000]
loss: 2.198411  [32000/60000]
loss: 2.201189  [38400/60000]
loss: 2.173105  [44800/60000]
loss: 2.175009  [51200/60000]
loss: 2.211906  [57600/60000]
Test Error: 
 Accuracy: 43.1%, Avg loss: 2.146921 

Epoch 3
-------------------------------
loss: 2.119274  [    0/60000]
loss: 2.075378  [ 6400/60000]
loss: 2.054407  [12800/60000]
loss: 2.150991  [19200/60000]
loss: 2.004458  [25600/60000]
loss: 2.079889  [32000/60000]
loss: 2.102401  [38400/60000]
loss: 2.046195  [44800/60000]
loss: 2.078096  [51200/60000]
loss: 2.145813  [57600/60000]
Test Error: 
 Accuracy: 42.0%, Avg loss: 2.027801 

Epoch 4
-------------------------------
loss: 1.982665  [    0/60000]
loss: 1.909970  [ 6400/60000]
loss: 1.886990  [12800/60000]
loss: 2.058913  [19200/60000]
loss: 1.833675  [25600/60000]
loss: 1.960199  [32000/60000]
loss: 2.007679  [38400/60000]
loss: 1.931268  [44800/60000]
loss: 1.988297  [51200/60000]
loss: 2.094260  [57600/60000]
Test Error: 
 Accuracy: 43.7%, Avg loss: 1.922514 

Epoch 5
-------------------------------
loss: 1.859427  [    0/60000]
loss: 1.771873  [ 6400/60000]
loss: 1.748458  [12800/60000]
loss: 1.980416  [19200/60000]
loss: 1.705132  [25600/60000]
loss: 1.869924  [32000/60000]
loss: 1.932936  [38400/60000]
loss: 1.846556  [44800/60000]
loss: 1.907762  [51200/60000]
loss: 2.048297  [57600/60000]
Test Error: 
 Accuracy: 45.5%, Avg loss: 1.836276 

Epoch 6
-------------------------------
loss: 1.756786  [    0/60000]
loss: 1.666563  [ 6400/60000]
loss: 1.639473  [12800/60000]
loss: 1.913608  [19200/60000]
loss: 1.607317  [25600/60000]
loss: 1.796450  [32000/60000]
loss: 1.869269  [38400/60000]
loss: 1.780057  [44800/60000]
loss: 1.836973  [51200/60000]
loss: 2.005264  [57600/60000]
Test Error: 
 Accuracy: 46.4%, Avg loss: 1.763771 

Epoch 7
-------------------------------
loss: 1.671694  [    0/60000]
loss: 1.583290  [ 6400/60000]
loss: 1.552116  [12800/60000]
loss: 1.855574  [19200/60000]
loss: 1.530599  [25600/60000]
loss: 1.738206  [32000/60000]
loss: 1.817262  [38400/60000]
loss: 1.729147  [44800/60000]
loss: 1.778801  [51200/60000]
loss: 1.968068  [57600/60000]
Test Error: 
 Accuracy: 47.1%, Avg loss: 1.704678 

Epoch 8
-------------------------------
loss: 1.603338  [    0/60000]
loss: 1.517485  [ 6400/60000]
loss: 1.483250  [12800/60000]
loss: 1.808101  [19200/60000]
loss: 1.470923  [25600/60000]
loss: 1.692855  [32000/60000]
loss: 1.776114  [38400/60000]
loss: 1.690363  [44800/60000]
loss: 1.733196  [51200/60000]
loss: 1.936111  [57600/60000]
Test Error: 
 Accuracy: 47.6%, Avg loss: 1.657511 

Epoch 9
-------------------------------
loss: 1.548624  [    0/60000]
loss: 1.465172  [ 6400/60000]
loss: 1.428264  [12800/60000]
loss: 1.769706  [19200/60000]
loss: 1.425173  [25600/60000]
loss: 1.655886  [32000/60000]
loss: 1.741891  [38400/60000]
loss: 1.660177  [44800/60000]
loss: 1.695678  [51200/60000]
loss: 1.906624  [57600/60000]
Test Error: 
 Accuracy: 48.2%, Avg loss: 1.618601 

Epoch 10
-------------------------------
loss: 1.503637  [    0/60000]
loss: 1.422200  [ 6400/60000]
loss: 1.383088  [12800/60000]
loss: 1.737924  [19200/60000]
loss: 1.388968  [25600/60000]
loss: 1.625465  [32000/60000]
loss: 1.713246  [38400/60000]
loss: 1.635674  [44800/60000]
loss: 1.663661  [51200/60000]
loss: 1.880327  [57600/60000]
Test Error: 
 Accuracy: 48.6%, Avg loss: 1.585916 

Done!

Saving Models

A common way to save a model is to serialize its internal state dictionary, which contains the model parameters. (By default the file is written to the current working directory.)

torch.save(model.state_dict(), "model.pth")
print("Saved PyTorch Model State to model.pth")
Saved PyTorch Model State to model.pth
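As an aside, torch.save can also pickle the entire model object rather than just the state_dict; saving the state_dict is the more portable, recommended form, but the alternative looks like this (the file name here is my own):

# Save and reload the whole model object (pickles the class as well)
torch.save(model, "model_full.pth")
model_full = torch.load("model_full.pth")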

Loading Models

Loading a model involves two steps: re-creating the model structure, then loading the state dictionary into it.

model = NeuralNetwork()
model.load_state_dict(torch.load("model.pth"))
<All keys matched successfully>
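Side note (not from the tutorial): these weights were saved from a CUDA model, so loading them on a machine without a GPU requires torch.load's map_location argument.

# Map GPU-saved tensors onto the CPU when CUDA is unavailable
model.load_state_dict(torch.load("model.pth", map_location="cpu"))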
classes = [
    "T-shirt/top",
    "Trouser",
    "Pullover",
    "Dress",
    "Coat",
    "Sandal",
    "Shirt",
    "Sneaker",
    "Bag",
    "Ankle boot",
]

model.eval()
x, y = test_data[0][0], test_data[0][1]
with torch.no_grad():
    pred = model(x)
    predicted, actual = classes[pred[0].argmax(0)], classes[y]
    print(f'Predicted: "{predicted}", Actual: "{actual}"')
Predicted: "Ankle boot", Actual: "Ankle boot"
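Note that the reloaded model lives on the CPU, which is why the inference above works with a CPU tensor; to run it on the GPU again, both the model and the input must be moved (a sketch):

model = model.to(device)  # move the reloaded model back to the selected device
x = x.to(device)          # the input must live on the same device
with torch.no_grad():
    pred = model(x)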

Training for 10 epochs still comes out a bit more accurate than the tutorial's 5 epochs, haha.
