【PyTorch修炼】Part 3: Keep it simple — the basic workflow of using torch, shown with concrete examples (classification and a small time-series example)


1. Foreword

I've been genuinely too busy lately to keep this series moving at a good pace, but don't worry: good things take time, and I want every post to be worth reading.

Today's topic is how to run experiments with torch, i.e. the overall framework and way of writing code you follow to get a task done.

From this post onward I will lean heavily on examples and walk through complete code in detail. The idea is to dig the small details out of concrete examples and use them to explain the knowledge points; I find this more tangible than simply listing the points, and easier to absorb.

2. Keep it simple: what does the whole pipeline actually need?

Let's think it through together.

First of all we need a model and data, so we build the model and the corresponding data pipeline. Then we compute a loss function, and the gradients obtained by backpropagating it have to be applied somewhere, which is what the optimizer is for. So there are four indispensable pieces: data, model, loss function, and optimizer.

Mapping these four pieces onto the torch framework (a minimal skeleton follows the list below):

  1. Data corresponds to the usual dataset and dataloader. Some people will ask whether you can do without them. You can, but the framework already hands you a convenient data input pipeline; insisting on rolling your own just creates trouble for yourself. I covered this in the previous post.
  2. Model: there are many ways to define a model, and I'll devote a separate post to them.
  3. Loss function: build it from the losses in torch.nn, or write a custom one. I've covered this before; search my public account for the torch loss functions in the 【DL知识拾贝】 series.
  4. Optimizer: build it with torch.optim; see also the detailed 【DL知识拾贝】 post on optimizers.
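
Putting the four pieces together, a minimal skeleton looks roughly like this (toy data, purely for illustration; the MNIST and LSTM examples below fill it in properly):

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# 1. data: any Dataset wrapped in a DataLoader (here just random toy tensors)
dataset = TensorDataset(torch.randn(100, 4), torch.randint(0, 2, (100,)))
loader = DataLoader(dataset, batch_size=16, shuffle=True)

# 2. model
model = nn.Linear(4, 2)

# 3. loss function
loss_func = nn.CrossEntropyLoss()

# 4. optimizer
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for x, y in loader:
    loss = loss_func(model(x), y)   # forward pass + loss
    optimizer.zero_grad()           # clear old gradients
    loss.backward()                 # backpropagate
    optimizer.step()                # update the parameters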

3. Example: a classification problem

Here we use the MNIST data and classify the digits 0–9, i.e. ten classes.

No more talk, let me just show you the code; I find reading the code directly the most direct and efficient way.

3.1 Imports

import torch
import torchvision
import torch.utils.data as Data
import torch.nn as nn

3.2 Dataset and DataLoader

The data is usually split into a training set and a validation set; the fully trained model is then run on a test set for the real classification task. In this example the MNIST test split plays the role of the held-out set.

train_data = torchvision.datasets.MNIST(
    './mnist', train=True, transform=torchvision.transforms.ToTensor(), download=True
)
test_data = torchvision.datasets.MNIST(
    './mnist', train=False, transform=torchvision.transforms.ToTensor()
)


train_loader = Data.DataLoader(dataset=train_data, batch_size=64, shuffle=True)
test_loader = Data.DataLoader(dataset=test_data, batch_size=64)
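
As a quick check (my addition, not in the original post), you can pull one batch out of the loader and look at its shapes:

images, labels = next(iter(train_loader))
print(images.shape)   # torch.Size([64, 1, 28, 28])
print(labels.shape)   # torch.Size([64])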

3.3 Define the model

Here I treat each MNIST image as one long vector and use Linear layers, 1-D BatchNorm, and LeakyReLU.

For the model definition I use the most widespread style: a class combined with nn.Sequential.

class ClasNet(nn.Module):
    def __init__(self):
        super(ClasNet, self).__init__()
        self.hidden = nn.Sequential(
            nn.Linear(28*28, 1024),
            nn.BatchNorm1d(1024),
            nn.LeakyReLU(0.2)
        )
        self.output = nn.Sequential(
            nn.Linear(1024, 512),
            nn.ReLU(),
            nn.Linear(512, 128),
            nn.ReLU(),
            nn.Linear(128, 10)
        )

    def forward(self, x):
        x = x.view(-1, 28*28)
        x = self.hidden(x)
        out = self.output(x)
        return out
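
For comparison, the same network could also be written as a single nn.Sequential (an equivalent sketch, not the style used in this post; it needs an explicit nn.Flatten because there is no forward method to call view in):

model_seq = nn.Sequential(
    nn.Flatten(),                 # (N, 1, 28, 28) -> (N, 784)
    nn.Linear(28*28, 1024),
    nn.BatchNorm1d(1024),
    nn.LeakyReLU(0.2),
    nn.Linear(1024, 512),
    nn.ReLU(),
    nn.Linear(512, 128),
    nn.ReLU(),
    nn.Linear(128, 10)
)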

3.4 Instantiate the model, and bring in the loss and the optimizer

It's common to print(model), which shows you the structure you defined directly.

model = ClasNet()

optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

loss_func = nn.CrossEntropyLoss()
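
Besides print(model), a quick way (my addition) to check the number of trainable parameters:

n_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(n_params)   # about 1.4 million for this network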

3.5 Training loop

This part follows a fixed pattern:

  1. The outermost loop runs over epochs.
  2. Inside it, iterate over the dataloader.
  3. Take out a feature batch and a label batch.
  4. Run the forward pass through the model.
  5. Compute the loss between the predictions and the labels.
  6. Zero the gradients held by the optimizer.
  7. Backpropagate the loss to get the gradients.
  8. Let the optimizer apply the gradient update.
  9. Anything else, e.g. TensorBoard logging.

Match these steps one-to-one with the code below.

for epoch in range(35):                                      # 1. outermost epoch loop
    print('epoch {}'.format(epoch + 1))
    # training process
    model.train()
    for i, (batch_x, batch_y) in enumerate(train_loader):    # 2./3. dataloader yields a feature batch and a label batch
        out = model(batch_x)                                  # 4. forward pass

        loss = loss_func(out, batch_y)                        # 5. loss between predictions and labels

        optimizer.zero_grad()                                 # 6. zero the gradients
        loss.backward()                                       # 7. backpropagate
        optimizer.step()                                      # 8. optimizer applies the update

3.6.1 Full code

Here I've fleshed things out a bit: since this is a classification problem I added code to compute accuracy, plus the handling and evaluation of the validation set.

import torch
import torchvision
import torch.utils.data as Data
import torch.nn as nn

train_data = torchvision.datasets.MNIST(
    './mnist', train=True, transform=torchvision.transforms.ToTensor(), download=True
)
test_data = torchvision.datasets.MNIST(
    './mnist', train=False, transform=torchvision.transforms.ToTensor()
)


train_loader = Data.DataLoader(dataset=train_data, batch_size=64, shuffle=True)
test_loader = Data.DataLoader(dataset=test_data, batch_size=64)


class ClasNet(nn.Module):
    def __init__(self):
        super(ClasNet, self).__init__()
        self.hidden = nn.Sequential(
            nn.Linear(28*28, 1024),
            nn.BatchNorm1d(1024),
            nn.LeakyReLU(0.2)
        )
        self.output = nn.Sequential(
            nn.Linear(1024, 512),
            nn.ReLU(),
            nn.Linear(512, 128),
            nn.ReLU(),
            nn.Linear(128, 10)
        )

    def forward(self, x):
        x = x.view(-1, 28*28)
        x = self.hidden(x)
        out = self.output(x)
        return out


model = ClasNet()
print(model)

optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

loss_func = nn.CrossEntropyLoss()


for epoch in range(25):
    print('epoch {}'.format(epoch + 1))
    # training process
    train_loss = 0.
    train_acc = 0.
    model.train()
    for i, (batch_x, batch_y) in enumerate(train_loader):
        # batch_x = batch_x.cuda()
        # batch_y = batch_y.cuda()
        out = model(batch_x)
        loss = loss_func(out, batch_y)
        train_loss += loss.item()
        pred = torch.max(out, 1)[1]
        train_correct = (pred == batch_y).sum()
        train_acc += train_correct.item()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    print('Train Loss: {:.6f}, Acc: {:.6f}'.format(
        train_loss / len(train_loader), train_acc / len(train_data)))

    # eval process
    model.eval()
    eval_loss = 0.
    eval_acc = 0.
    for j,(batch_x, batch_y) in enumerate(test_loader):
        # batch_x, batch_y = batch_x.cuda(), batch_y.cuda()
        out = model(batch_x)
        loss = loss_func(out, batch_y)
        eval_loss += loss.item()
        pred = torch.max(out, 1)[1]
        num_correct = (pred == batch_y).sum()
        eval_acc += num_correct.item()

    print('eval Loss: {:.6f}, Acc: {:.6f}'.format(
        eval_loss / len(test_loader), eval_acc / len(test_data)))
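
One detail worth pointing out: the evaluation loop above still does autograd bookkeeping. Wrapping it in torch.no_grad() (my suggestion, not in the original code) saves memory and computation without changing the results:

model.eval()
eval_loss, eval_acc = 0., 0.
with torch.no_grad():                       # no gradients needed during evaluation
    for batch_x, batch_y in test_loader:
        out = model(batch_x)
        eval_loss += loss_func(out, batch_y).item()
        pred = torch.max(out, 1)[1]
        eval_acc += (pred == batch_y).sum().item()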

3.6.2 Full code + extras (TensorBoard)

Here is an upgraded version with TensorBoard visualization added, partly so I can show more of the results in the article. You may have noticed everything runs on the CPU. Why no GPU? First, the data here is small and the CPU is already fast enough; second, I plan to cover GPU usage and some server-side workflow in one dedicated post, so stay tuned.

I've written about TensorBoard before and plan a more detailed guide soon; the old article dates from when I was learning TensorFlow, which, come to think of it, was roughly two years ago. Time flies.

import torch
import torchvision
import torch.utils.data as Data
import torch.nn as nn
from torch.utils.tensorboard import SummaryWriter

train_data = torchvision.datasets.MNIST(
    './mnist', train=True, transform=torchvision.transforms.ToTensor(), download=True
)
test_data = torchvision.datasets.MNIST(
    './mnist', train=False, transform=torchvision.transforms.ToTensor()
)


train_loader = Data.DataLoader(dataset=train_data, batch_size=64, shuffle=True)
test_loader = Data.DataLoader(dataset=test_data, batch_size=64)


class ClasNet(nn.Module):
    def __init__(self):
        super(ClasNet, self).__init__()
        self.hidden = nn.Sequential(
            nn.Linear(28*28, 1024),
            nn.BatchNorm1d(1024),
            nn.LeakyReLU(0.2)
        )
        self.output = nn.Sequential(
            nn.Linear(1024, 512),
            nn.ReLU(),
            nn.Linear(512, 128),
            nn.ReLU(),
            nn.Linear(128, 10)
        )

    def forward(self, x):
        x = x.view(-1, 28*28)
        x = self.hidden(x)
        out = self.output(x)
        return out


model = ClasNet()
print(model)

writer = SummaryWriter()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

loss_func = nn.CrossEntropyLoss()

for epoch in range(25):
    print('epoch {}'.format(epoch + 1))
    # training process
    train_loss = 0.
    train_acc = 0.
    model.train()
    for i, (batch_x, batch_y) in enumerate(train_loader):
        # batch_x = batch_x.cuda()
        # batch_y = batch_y.cuda()
        out = model(batch_x)
        loss = loss_func(out, batch_y)
        train_loss += loss.item()
        pred = torch.max(out, 1)[1]
        train_correct = (pred == batch_y).sum()
        train_acc += train_correct.item()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    writer.add_scalar("loss_train", train_loss, epoch)
    writer.add_scalar("acc_train", train_acc, epoch)
    print('Train Loss: {:.6f}, Acc: {:.6f}'.format(train_loss, train_acc ))

    # eval process
    model.eval()
    eval_loss = 0.
    eval_acc = 0.
    for j,(batch_x, batch_y) in enumerate(test_loader):
        # batch_x, batch_y = batch_x.cuda(), batch_y.cuda()
        out = model(batch_x)
        loss = loss_func(out, batch_y)
        eval_loss += loss.item()
        pred = torch.max(out, 1)[1]
        num_correct = (pred == batch_y).sum()
        eval_acc += num_correct.item()

    writer.add_scalar("loss_eval", eval_loss, epoch)
    writer.add_scalar("acc_eval", eval_acc, epoch)
    print('Test Loss: {:.6f}, Acc: {:.6f}'.format(eval_loss , eval_acc))

writer.close()

3.7 Results

Output (I'll skip the printed losses and go straight to TensorBoard):

ClasNet(
  (hidden): Sequential(
    (0): Linear(in_features=784, out_features=1024, bias=True)
    (1): BatchNorm1d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): LeakyReLU(negative_slope=0.2)
  )
  (output): Sequential(
    (0): Linear(in_features=1024, out_features=512, bias=True)
    (1): ReLU()
    (2): Linear(in_features=512, out_features=128, bias=True)
    (3): ReLU()
    (4): Linear(in_features=128, out_features=10, bias=True)
  )
)

Run tensorboard --logdir runs in this directory.

acc:

[figure: accuracy curves from TensorBoard]

loss:

[figure: loss curves from TensorBoard]

I only trained for a short while; you can see that overfitting starts to show up after about epoch 9.

4. Example: a forecasting problem

For forecasting, the natural example is a time series, so here I use an LSTM to predict temperature data. Unlike MNIST, which already has a ready-made dataset integrated into torch that you can call directly, this dataset has no built-in support, so we have to write the Dataset ourselves; see the second post of this PyTorch series for reference.

[screenshot of the raw data: a date column and a Temp column]

The data has only two columns, so this is a basic univariate time-series forecasting problem (I won't go into time series in depth here; I plan a dedicated column for that later). One column is the date and the other is the temperature, at daily granularity, i.e. one measurement per day. I'll use the previous couple of months of values (a 90-day window in the code) to predict the temperature one day ahead.
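
To make the windowing concrete: with an input length of, say, 3 and an output length of 1, the series is cut into overlapping (input, target) pairs like this (a toy illustration with made-up values, not the 90-step windows used in the code below):

temps = [20.7, 17.9, 18.8, 14.6, 15.8]    # toy daily temperatures
input_len, output_len = 3, 1
pairs = [(temps[i:i + input_len], temps[i + input_len:i + input_len + output_len])
         for i in range(len(temps) - input_len - output_len + 1)]
print(pairs)
# [([20.7, 17.9, 18.8], [14.6]), ([17.9, 18.8, 14.6], [15.8])]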

No more talk, let me just show you the code; I find reading the code directly the most direct and efficient way.

4.1 Imports

import torch
from torch.utils.data import Dataset,DataLoader
import torch.nn as nn
import pandas as pd
import numpy as np
from torch.utils.tensorboard import SummaryWriter

4.2 Dataset and DataLoader

This one we have to write ourselves. I kept mine fairly simple; you can make it more flexible.

class MyDataset(Dataset):

    def __init__(self, path, input_seqlen=10, output_seqlen=1, fea_num=1, train_precent=0.8, isTrain=True):
        data_df = pd.read_csv(path)
        Temp = data_df['Temp'].values

        self.data_num = len(Temp)
        self.input_seqlen = input_seqlen
        self.output_seqlen = output_seqlen
        self.fea_num = fea_num
        self.all_seqlen = self.input_seqlen + self.output_seqlen
        self.train_index = int(self.data_num*train_precent)

        self.data_seq = []
        self.target_seq = []

        for i in range(self.data_num - self.all_seqlen):
            self.data_seq.append(list(Temp[i:i + self.input_seqlen]))
            self.target_seq.append(list(Temp[i + self.input_seqlen: i + self.all_seqlen]))

        if isTrain is True:
            self.data_seq = self.data_seq[:self.train_index]
            self.target_seq = self.target_seq[:self.train_index]

        else:
            self.data_seq = self.data_seq[self.train_index:]
            self.target_seq = self.target_seq[self.train_index:]

        self.data_seq = np.array(self.data_seq).reshape((len(self.data_seq), -1, fea_num))
        self.target_seq = np.array(self.target_seq).reshape((len(self.target_seq), -1, fea_num))

        self.data_seq = torch.from_numpy(self.data_seq).type(torch.float32)
        self.target_seq = torch.from_numpy(self.target_seq).type(torch.float32)

    def __getitem__(self, index):
        return self.data_seq[index], self.target_seq[index]

    def __len__(self):
        return len(self.data_seq)
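
A quick sanity check of the dataset (my addition; it assumes the CSV path used in the full script further down):

ds = MyDataset('./data/daily-min-temperatures.csv', input_seqlen=90, output_seqlen=1)
x, y = ds[0]
print(len(ds))            # number of training windows
print(x.shape, y.shape)   # torch.Size([90, 1]) torch.Size([1, 1])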

4.3 Define the model

class RegrNet(nn.Module):
    def __init__(self):
        super(RegrNet, self).__init__()
        self.hidden1 = nn.LSTM(input_size=1, hidden_size=256, num_layers=2)
        self.hidden2 = nn.Linear(256, 64)
        self.out = nn.Linear(64, 1)

    def forward(self, x):
        # x: (seq_len, batch, 1); batch_first is left at its default, so the sequence dimension comes first
        out, (h_n, c_n) = self.hidden1(x)   # out: (seq_len, batch, 256)
        out = out[-1:]                      # keep only the last time step: (1, batch, 256)
        out = out.reshape(-1, 256)          # (batch, 256)
        out = self.hidden2(out)             # (batch, 64)
        out = self.out(out)                 # (batch, 1)
        out = out.reshape(1, -1, 1)         # back to (1, batch, 1) to match the target layout
        return out
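
A quick way to confirm the tensor shapes (my addition; the LSTM expects sequence-first input because batch_first is left at its default):

net = RegrNet()
x = torch.randn(90, 8, 1)   # (seq_len, batch, features)
print(net(x).shape)         # torch.Size([1, 8, 1]): one predicted step per sequence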

4.4 Instantiate the model, and bring in the loss and the optimizer

Again, print(model) shows the structure you defined directly. This time I use the GPU; otherwise training really is a bit slow.

model = RegrNet().cuda()
print(model)

optimizer = torch.optim.Adam(model.parameters(), lr=LR)  # LR = 0.001, defined in the parameters block of the full script
mse = torch.nn.MSELoss()
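
If no GPU is available, a device-agnostic variant of the same setup (my adjustment, not how the original script is written) keeps the code runnable either way:

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = RegrNet().to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
mse = torch.nn.MSELoss()

In the training loop you would then move each batch with .to(device) instead of .cuda().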

4.5 Training loop

for epoch in range(EPOCH):
    # print('epoch {}'.format(epoch + 1))
    # training process
    train_loss = 0.
    train_acc = 0.
    model.train()
    for i, (batch_x, batch_y) in enumerate(data_loader):
        #print(batch_x.shape)
        #print(batch_y.shape)
        batch_x = batch_x.permute(1, 0, 2).cuda()
        batch_y = batch_y.permute(1, 0, 2).cuda()
        out = model(batch_x)
        #print(out.shape)
        loss = mse(out, batch_y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    writer.add_scalar("loss_train", loss.item(), epoch)
    print('epoch : {}, Train Loss: {:.6f}'.format(epoch, loss.item()))

writer.close()
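
One thing to note: the loss.item() logged above is only the last batch of each epoch. A common tweak (my suggestion, not in the original) is to accumulate the per-batch losses and log the epoch average; inside the epoch loop that would look like:

epoch_loss = 0.0
for batch_x, batch_y in data_loader:
    batch_x = batch_x.permute(1, 0, 2).cuda()
    batch_y = batch_y.permute(1, 0, 2).cuda()
    loss = mse(model(batch_x), batch_y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    epoch_loss += loss.item()                                             # accumulate per-batch loss
writer.add_scalar("loss_train", epoch_loss / len(data_loader), epoch)     # log the epoch average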

4.6 Full code + extras (TensorBoard)

import torch
from torch.utils.data import Dataset,DataLoader
import torch.nn as nn
import pandas as pd
import numpy as np
from torch.utils.tensorboard import SummaryWriter


class MyDataset(Dataset):

    def __init__(self, path, input_seqlen=10, output_seqlen=1, fea_num=1, train_precent=0.8, isTrain=True):
        data_df = pd.read_csv(path)
        Temp = data_df['Temp'].values

        self.data_num = len(Temp)
        self.input_seqlen = input_seqlen
        self.output_seqlen = output_seqlen
        self.fea_num = fea_num
        self.all_seqlen = self.input_seqlen + self.output_seqlen
        self.train_index = int(self.data_num*train_precent)

        self.data_seq = []
        self.target_seq = []

        for i in range(self.data_num - self.all_seqlen):
            self.data_seq.append(list(Temp[i:i + self.input_seqlen]))
            self.target_seq.append(list(Temp[i + self.input_seqlen: i + self.all_seqlen]))

        if isTrain is True:
            self.data_seq = self.data_seq[:self.train_index]
            self.target_seq = self.target_seq[:self.train_index]

        else:
            self.data_seq = self.data_seq[self.train_index:]
            self.target_seq = self.target_seq[self.train_index:]

        self.data_seq = np.array(self.data_seq).reshape((len(self.data_seq), -1, fea_num))
        self.target_seq = np.array(self.target_seq).reshape((len(self.target_seq), -1, fea_num))

        self.data_seq = torch.from_numpy(self.data_seq).type(torch.float32)
        self.target_seq = torch.from_numpy(self.target_seq).type(torch.float32)

    def __getitem__(self, index):
        return self.data_seq[index], self.target_seq[index]

    def __len__(self):
        return len(self.data_seq)

class RegrNet(nn.Module):
    def __init__(self):
        super(RegrNet, self).__init__()
        self.hidden1 = nn.LSTM(input_size=1, hidden_size=256, num_layers=2)
        self.hidden2 = nn.Linear(256, 64)
        self.out = nn.Linear(64, 1)

    def forward(self, x):
        out, (h_n, c_n) = self.hidden1(x)
        out = out[-1:]
        out = out.reshape(-1, 256)
        out = self.hidden2(out)
        out = self.out(out)
        out = out.reshape(1, -1, 1)
        return out

# parameters
path = './data/daily-min-temperatures.csv'
INPUT_SEQLEN = 90
OUTPUT_SEQLEN = 1
EPOCH = 50
LR = 0.001

# dataset instantiation
mydata = MyDataset(path, input_seqlen=INPUT_SEQLEN, output_seqlen=OUTPUT_SEQLEN)

# input, target = mydata.__getitem__(2)

# dataloader
data_loader = DataLoader(dataset=mydata,
                          batch_size=64,
                          shuffle=True)


model = RegrNet().cuda()
print(model)

optimizer = torch.optim.Adam(model.parameters(), lr=LR)
mse = torch.nn.MSELoss()

writer = SummaryWriter()

for epoch in range(EPOCH):
    # print('epoch {}'.format(epoch + 1))
    # training process
    train_loss = 0.
    train_acc = 0.
    model.train()
    for i, (batch_x, batch_y) in enumerate(data_loader):
        #print(batch_x.shape)
        #print(batch_y.shape)
        batch_x = batch_x.permute(1, 0, 2).cuda()
        batch_y = batch_y.permute(1, 0, 2).cuda()
        out = model(batch_x)
        #print(out.shape)
        loss = mse(out, batch_y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    writer.add_scalar("loss_train", loss.item(), epoch)
    print('epoch : {}, Train Loss: {:.6f}'.format(epoch, loss.item()))

writer.close()
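
The script above only ever trains on the first 80% of the series; the post doesn't evaluate on the held-out part. A rough sketch of how that could look with the same MyDataset class (my addition, not from the original post):

val_data = MyDataset(path, input_seqlen=INPUT_SEQLEN, output_seqlen=OUTPUT_SEQLEN, isTrain=False)
val_loader = DataLoader(dataset=val_data, batch_size=64)

model.eval()
val_loss = 0.0
with torch.no_grad():                      # no gradients needed for evaluation
    for batch_x, batch_y in val_loader:
        batch_x = batch_x.permute(1, 0, 2).cuda()
        batch_y = batch_y.permute(1, 0, 2).cuda()
        val_loss += mse(model(batch_x), batch_y).item()
print('Val Loss: {:.6f}'.format(val_loss / len(val_loader)))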

4.7 Results

[figure: training loss curve from TensorBoard]

5. Data, code, and wrap-up

Lately I've been buried in code and papers and couldn't get into anything heavier, so I jotted down this post about the basic routine. Probably few people bother to spell this pattern out explicitly, but I think it's worth doing, especially for readers who aren't yet comfortable with torch. I'm always looking for the underlying patterns in things, and this post is my take on the simple patterns behind torch, written up and shared with you. The writing itself took maybe an hour or so, but finding suitable datasets and writing and debugging the demos easily took another three hours, so creating content really isn't easy. If you liked it, a like and a share are appreciated.

Code and data link:

https://github.com/chehongshu/AIwoniuche_Learning/tree/master/Pytorch_easy_examples