PyTorch Study Notes (1) -- QUICKSTART

Series Contents

PyTorch Study Notes (1) -- QUICKSTART
PyTorch Study Notes (2) -- Tensors
PyTorch Study Notes (3) -- Datasets and Data Loading
PyTorch Study Notes (4) -- Build the Model
PyTorch Study Notes (5) -- Autograd


1. Data

  • torch.utils.data.DataLoader
  • torch.utils.data.Dataset

Dataset stores the samples and their corresponding labels, and DataLoader wraps an iterable around the Dataset.
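
As a minimal sketch of that division of labor (the ToyDataset class below is hypothetical, not part of this tutorial), a custom Dataset only needs __len__ and __getitem__, and DataLoader takes care of the batching:

import torch
from torch.utils.data import Dataset, DataLoader

class ToyDataset(Dataset):
    """Hypothetical dataset: 100 random 28x28 'images' with integer labels."""
    def __init__(self, n=100):
        self.samples = torch.randn(n, 28, 28)
        self.labels = torch.randint(0, 10, (n,))

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        return self.samples[idx], self.labels[idx]

loader = DataLoader(ToyDataset(), batch_size=16)
X, y = next(iter(loader))
print(X.shape, y.shape)  # torch.Size([16, 28, 28]) torch.Size([16])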

import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets
from torchvision.transforms import ToTensor

PyTorch offers domain-specific libraries such as TorchText, TorchVision, and TorchAudio; here we use a TorchVision dataset.
The torchvision.datasets module contains Dataset objects for many real-world vision datasets such as CIFAR and COCO (see the torchvision documentation for the full list). Every TorchVision dataset takes two arguments, transform and target_transform, which modify the samples and the labels respectively.
Here we use the FashionMNIST dataset, which contains 60,000 training samples and 10,000 test samples; each sample is a 28×28 image, and the labels cover 10 classes.

  • Each image is W = 28 pixels wide and H = 28 pixels high, 784 pixels in total.
  • There are 10 class labels, such as T-shirt, trouser, dress, and bag.
  • Each pixel has a grayscale value in [0, 255], where 0 represents white and 255 represents black.

The data is downloaded as follows, with these arguments:

  • root: the path where the training/test data is stored
  • train: selects the training set or the test set
  • download=True: downloads the data from the internet if it is not available at root
  • transform and target_transform specify the feature and label transformations, respectively.
# Download training data from open datasets.
training_data = datasets.FashionMNIST(
    root="data",
    train=True,
    download=True,
    transform=ToTensor(),
)
# Download test data from open datasets.
test_data = datasets.FashionMNIST(
    root="data",
    train=False,
    download=True,
    transform=ToTensor(),
)

Output:

Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-images-idx3-ubyte.gz
Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-images-idx3-ubyte.gz to data/FashionMNIST/raw/train-images-idx3-ubyte.gz
100%|██████████| 26421880/26421880 [00:02<00:00, 12653440.09it/s]
Extracting data/FashionMNIST/raw/train-images-idx3-ubyte.gz to data/FashionMNIST/raw

Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-labels-idx1-ubyte.gz
Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-labels-idx1-ubyte.gz to data/FashionMNIST/raw/train-labels-idx1-ubyte.gz
100%|██████████| 29515/29515 [00:00<00:00, 197742.45it/s]
Extracting data/FashionMNIST/raw/train-labels-idx1-ubyte.gz to data/FashionMNIST/raw

Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-images-idx3-ubyte.gz
Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-images-idx3-ubyte.gz to data/FashionMNIST/raw/t10k-images-idx3-ubyte.gz
100%|██████████| 4422102/4422102 [00:01<00:00, 3719347.84it/s]
Extracting data/FashionMNIST/raw/t10k-images-idx3-ubyte.gz to data/FashionMNIST/raw

Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-labels-idx1-ubyte.gz
Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-labels-idx1-ubyte.gz to data/FashionMNIST/raw/t10k-labels-idx1-ubyte.gz
100%|██████████| 5148/5148 [00:00<00:00, 16935119.21it/s]
Extracting data/FashionMNIST/raw/t10k-labels-idx1-ubyte.gz to data/FashionMNIST/raw
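
As an aside, the target_transform argument mentioned above works like transform but acts on the labels. A small sketch (adapted from the torchvision transforms tutorial, not needed for the rest of this quickstart) that one-hot encodes the integer labels with Lambda:

from torchvision.transforms import Lambda

onehot_data = datasets.FashionMNIST(
    root="data",
    train=True,
    download=True,
    transform=ToTensor(),  # PIL image -> float tensor in [0, 1]
    # turn the integer label y into a length-10 one-hot float tensor
    target_transform=Lambda(
        lambda y: torch.zeros(10, dtype=torch.float).scatter_(0, torch.tensor(y), value=1)
    ),
)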

We pass the Dataset as an argument to DataLoader. This wraps an iterable over our dataset and adds support for automatic batching, sampling, shuffling, and multiprocess data loading. Here we define a batch size of 64, i.e., each element in the dataloader iterable will return a batch of 64 features and labels.

batch_size = 64

# Create data loaders.
train_dataloader = DataLoader(training_data, batch_size=batch_size)
test_dataloader = DataLoader(test_data, batch_size=batch_size)

for X, y in test_dataloader:
    print(f"Shape of X [N, C, H, W]: {X.shape}")
    print(f"Shape of y: {y.shape} {y.dtype}")
    break

Output:

Shape of X [N, C, H, W]: torch.Size([64, 1, 28, 28]) 
Shape of y: torch.Size([64]) torch.int64
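
The loaders above use only the defaults; the shuffling and multiprocess loading mentioned earlier are opt-in arguments. A sketch with illustrative values:

shuffled_loader = DataLoader(
    training_data,
    batch_size=batch_size,
    shuffle=True,     # reshuffle the samples at every epoch
    num_workers=2,    # load batches in 2 background worker processes
)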

2. Creating the Model

To define a neural network, we create a class that inherits from nn.Module. We define the network's layers in the __init__ function and specify how data flows through the network in the forward function. To accelerate operations in the neural network, we move it to the GPU or MPS if one is available.

# Get cpu, gpu or mps device for training.
device = (
    "cuda"
    if torch.cuda.is_available()
    else "mps"
    if torch.backends.mps.is_available()
    else "cpu"
)
print(f"Using {device} device")

# Define model
class NeuralNetwork(nn.Module):
    def __init__(self):
        super().__init__()
        self.flatten = nn.Flatten()
        self.linear_relu_stack = nn.Sequential(
            nn.Linear(28*28, 512),
            nn.ReLU(),
            nn.Linear(512, 512),
            nn.ReLU(),
            nn.Linear(512, 10)
        )

    def forward(self, x):
        x = self.flatten(x)
        logits = self.linear_relu_stack(x)
        return logits

model = NeuralNetwork().to(device)
print(model)

Output:

Using cpu device
NeuralNetwork(
  (flatten): Flatten(start_dim=1, end_dim=-1)
  (linear_relu_stack): Sequential(
    (0): Linear(in_features=784, out_features=512, bias=True)
    (1): ReLU()
    (2): Linear(in_features=512, out_features=512, bias=True)
    (3): ReLU()
    (4): Linear(in_features=512, out_features=10, bias=True)
  )
)
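
Note that the model returns raw logits, not probabilities. A quick sketch of a forward pass on a random input, converting the logits to probabilities with nn.Softmax:

X = torch.rand(1, 28, 28, device=device)
logits = model(X)                        # shape [1, 10], unnormalized scores
pred_probab = nn.Softmax(dim=1)(logits)  # normalize to class probabilities
y_pred = pred_probab.argmax(1)
print(f"Predicted class: {y_pred}")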

3. Optimizing the Model Parameters

To train a model, we need a loss function and an optimizer.

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
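
nn.CrossEntropyLoss expects raw logits and integer class indices (it applies the softmax internally). A tiny sketch with dummy values:

dummy_logits = torch.tensor([[2.0, 0.5, 0.1]])  # one sample, three classes
dummy_target = torch.tensor([0])                # the correct class index
print(loss_fn(dummy_logits, dummy_target))      # small loss: class 0 already scores highest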

In a single training loop, the model makes predictions on the training dataset (fed to it in batches) and backpropagates the prediction error to adjust the model's parameters.

def train(dataloader, model, loss_fn, optimizer):
    size = len(dataloader.dataset)
    model.train()
    for batch, (X, y) in enumerate(dataloader):
        X, y = X.to(device), y.to(device)

        # Compute prediction error
        pred = model(X)
        loss = loss_fn(pred, y)

        # Backpropagation
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

        if batch % 100 == 0:
            loss, current = loss.item(), (batch + 1) * len(X)
            print(f"loss: {loss:>7f}  [{current:>5d}/{size:>5d}]")

The .to() calls move X and y to the same device as the model, which here is the CPU.
We also check the model's performance against the test dataset to ensure it is learning.
.item() extracts the value from a tensor that contains exactly one element; for tensors with more elements, use .tolist() instead. When tracking the loss during training, loss.item() returns a plain Python number, which prevents the loss tensors (and their computation graphs) from piling up indefinitely and exhausting memory.
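
A quick sketch of the difference:

t = torch.tensor([3.14])
print(t.item())     # a plain Python float, detached from any graph
t2 = torch.tensor([[1, 2], [3, 4]])
print(t2.tolist())  # [[1, 2], [3, 4]] -- tolist() handles multi-element tensors
# t2.item()         # would raise an error: only one-element tensors convert via item()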

def test(dataloader, model, loss_fn):
    size = len(dataloader.dataset)
    num_batches = len(dataloader)
    model.eval()
    test_loss, correct = 0, 0
    with torch.no_grad():
        for X, y in dataloader:
            X, y = X.to(device), y.to(device)
            pred = model(X)
            test_loss += loss_fn(pred, y).item()
            correct += (pred.argmax(1) == y).type(torch.float).sum().item()
    test_loss /= num_batches
    correct /= size
    print(f"Test Error: \n Accuracy: {(100*correct):>0.1f}%, Avg loss: {test_loss:>8f} \n")

The training process runs for several iterations (epochs). In each epoch, the model learns parameters to make better predictions. We print the model's accuracy and loss at each epoch; we would like to see the accuracy increase and the loss decrease with every epoch.

epochs = 5
for t in range(epochs):
    print(f"Epoch {t+1}\n-------------------------------")
    train(train_dataloader, model, loss_fn, optimizer)
    test(test_dataloader, model, loss_fn)
print("Done!")

Output:

Epoch 1
-------------------------------
loss: 2.300013  [   64/60000]
loss: 2.285723  [ 6464/60000]
loss: 2.264171  [12864/60000]
loss: 2.269746  [19264/60000]
loss: 2.237850  [25664/60000]
loss: 2.209016  [32064/60000]
loss: 2.220058  [38464/60000]
loss: 2.172483  [44864/60000]
loss: 2.181570  [51264/60000]
loss: 2.152840  [57664/60000]
Test Error: 
 Accuracy: 41.5%, Avg loss: 2.142295 

Epoch 2
-------------------------------
loss: 2.153325  [   64/60000]
loss: 2.140253  [ 6464/60000]
loss: 2.073918  [12864/60000]
loss: 2.107593  [19264/60000]
loss: 2.037779  [25664/60000]
loss: 1.972155  [32064/60000]
loss: 2.018155  [38464/60000]
loss: 1.918635  [44864/60000]
loss: 1.939209  [51264/60000]
loss: 1.863457  [57664/60000]
Test Error: 
 Accuracy: 53.6%, Avg loss: 1.857544 

Epoch 3
-------------------------------
loss: 1.893079  [   64/60000]
loss: 1.857361  [ 6464/60000]
loss: 1.731167  [12864/60000]
loss: 1.793227  [19264/60000]
loss: 1.667868  [25664/60000]
loss: 1.620297  [32064/60000]
loss: 1.659504  [38464/60000]
loss: 1.549238  [44864/60000]
loss: 1.580384  [51264/60000]
loss: 1.479247  [57664/60000]
Test Error: 
 Accuracy: 62.5%, Avg loss: 1.492322 

Epoch 4
-------------------------------
loss: 1.560190  [   64/60000]
loss: 1.524133  [ 6464/60000]
loss: 1.371673  [12864/60000]
loss: 1.455458  [19264/60000]
loss: 1.332915  [25664/60000]
loss: 1.333250  [32064/60000]
loss: 1.355447  [38464/60000]
loss: 1.274925  [44864/60000]
loss: 1.309374  [51264/60000]
loss: 1.217096  [57664/60000]
Test Error: 
 Accuracy: 63.6%, Avg loss: 1.237712 

Epoch 5
-------------------------------
loss: 1.315709  [   64/60000]
loss: 1.297441  [ 6464/60000]
loss: 1.132005  [12864/60000]
loss: 1.244349  [19264/60000]
loss: 1.119326  [25664/60000]
loss: 1.145893  [32064/60000]
loss: 1.171785  [38464/60000]
loss: 1.104046  [44864/60000]
loss: 1.142680  [51264/60000]
loss: 1.064939  [57664/60000]
Test Error: 
 Accuracy: 64.6%, Avg loss: 1.081054 

Done!

4. Saving the Model

torch.save(model.state_dict(), "model.pth")
print("Saved PyTorch Model State to model.pth")

Output:

Saved PyTorch Model State to model.pth
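
state_dict() saves only the learned parameters, which is the recommended approach. As an alternative (a sketch; heavier and tied to the class definition), the whole module object can be pickled:

torch.save(model, "model_full.pth")  # serializes the entire module, class included
# Loading requires the NeuralNetwork class to be importable; recent PyTorch
# versions may also require weights_only=False when unpickling full modules.
model_full = torch.load("model_full.pth", weights_only=False)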

5. Loading the Model

model = NeuralNetwork().to(device)
model.load_state_dict(torch.load("model.pth"))

Output:

<All keys matched successfully>
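
If the checkpoint was saved on a different device (say, trained on a GPU and loaded on a CPU-only machine), map_location remaps the stored tensors. A sketch:

state = torch.load("model.pth", map_location=device)  # remap tensors to the current device
model.load_state_dict(state)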

6. Making Predictions

The model can now be used to make predictions.

classes = [
    "T-shirt/top",
    "Trouser",
    "Pullover",
    "Dress",
    "Coat",
    "Sandal",
    "Shirt",
    "Sneaker",
    "Bag",
    "Ankle boot",
]

model.eval()
x, y = test_data[199][0], test_data[199][1]
print(x.shape)
print(y)
with torch.no_grad():
    x = x.to(device)
    pred = model(x)
    predicted, actual = classes[pred[0].argmax(0)], classes[y]
    print(f'Predicted: "{predicted}", Actual: "{actual}"')

Output:

torch.Size([1, 28, 28])
1
Predicted: "Trouser", Actual: "Trouser"

Here x is the 28×28 image tensor and y is its class label.
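
Because x has shape [1, 28, 28], its channel dimension happens to act as a batch of size 1 once flattened. A more explicit sketch adds a real batch dimension and reads off the class probabilities:

with torch.no_grad():
    batch = x.unsqueeze(0).to(device)        # [1, 1, 28, 28]: explicit batch dimension
    probs = nn.Softmax(dim=1)(model(batch))  # logits -> probabilities
    idx = probs.argmax(1).item()
    print(f'{classes[idx]}: {probs[0, idx].item():.2%}')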
