The linked references explain the details of both functions clearly, so I won't repeat them here. Instead I'll focus on the preprocessing my LSTM needs when handling the MNIST dataset: reshaping each batch so the data can be fed to the model in the shape its input_size expects.
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms

# Device configuration
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Hyper-parameters
sequence_length = 28   # each image row is one time step
input_size = 28        # each time step sees one row of 28 pixels
hidden_size = 128
num_layers = 2
num_classes = 10
batch_size = 100
num_epochs = 2
learning_rate = 0.01

# MNIST dataset
train_dataset = torchvision.datasets.MNIST(root='../../data/',
                                           train=True,
                                           transform=transforms.ToTensor(),
                                           download=True)

test_dataset = torchvision.datasets.MNIST(root='../../data/',
                                          train=False,
                                          transform=transforms.ToTensor())

# Data loader
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
                                           batch_size=batch_size,
                                           shuffle=True)

test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
                                          batch_size=batch_size,
                                          shuffle=False)

total_step = len(train_loader)
for epoch in range(num_epochs):
    for i, (images, labels) in enumerate(train_loader):
        print(images.size())  # [100, 1, 28, 28] straight from the loader
        # Drop the channel dimension and treat each image as a
        # 28-step sequence of 28-pixel rows
        images = images.reshape(-1, sequence_length, input_size).to(device)
        # images = images.view(-1, sequence_length, input_size).to(device)
        print(images.size())  # [100, 28, 28]
        labels = labels.to(device)
The printed sizes are exactly as you'd expect:

Straight out of enumerate(train_loader): torch.Size([100, 1, 28, 28])
After changing the shape with reshape or view: torch.Size([100, 28, 28])

Here 100 is the batch_size, 1 is the single channel of a grayscale MNIST image, and the two 28s are the image height and width.
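To see why that (batch, sequence_length, input_size) shape matters, here is a minimal sketch of an LSTM classifier built on the hyper-parameters defined above. This is my own illustration following the usual PyTorch pattern, continuing the script above (so it reuses the earlier imports and variables), not code from the linked posts. With batch_first=True, nn.LSTM expects exactly the shape we just reshaped to:

class RNN(nn.Module):
    def __init__(self, input_size, hidden_size, num_layers, num_classes):
        super(RNN, self).__init__()
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        # batch_first=True means input is (batch, seq_len, input_size)
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        # Initial hidden and cell states: (num_layers, batch, hidden_size)
        h0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size).to(x.device)
        c0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size).to(x.device)
        out, _ = self.lstm(x, (h0, c0))  # out: (batch, seq_len, hidden_size)
        return self.fc(out[:, -1, :])    # classify from the last time step

model = RNN(input_size, hidden_size, num_layers, num_classes).to(device)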
Now let's extend the idea with another example:

images = images.reshape(-1, 2, sequence_length, input_size).to(device)
# images = images.view(-1, 2, sequence_length, input_size).to(device)
print(images.size())
The printed sizes are again what you'd expect:

Straight out of enumerate(train_loader): torch.Size([100, 1, 28, 28])
After changing the shape with reshape or view: torch.Size([50, 2, 28, 28])

The -1 dimension is automatically inferred as 50: the batch dimension shrinks by half, but the total amount of data stays the same. Got the hang of it?
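If you want to convince yourself that -1 simply means "infer this dimension from the total element count", a quick standalone check (my own toy example) makes it obvious:

import torch

x = torch.randn(100, 1, 28, 28)
print(x.numel())                       # 78400 elements in total
print(x.reshape(-1, 28, 28).shape)     # torch.Size([100, 28, 28]): 78400 / (28*28) = 100
print(x.reshape(-1, 2, 28, 28).shape)  # torch.Size([50, 2, 28, 28]): 78400 / (2*28*28) = 50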
There are already plenty of detailed blog posts about these two functions; I've collected a few for you:

详解torch.view()的-1参数是什么意思 (谷歌玩家的博客, CSDN)
【Pytorch】X.view(-1)操作 (马鹏森的博客, CSDN) — well organized
Python中reshape函数参数-1的意思? — good stuff and easy to follow
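One last practical note, not from the posts above but a PyTorch fact worth knowing when you flip between the two: view requires the tensor's memory to be contiguous and never copies, while reshape falls back to a copy when it has to. That's why the two lines were interchangeable in our loop but aren't always:

import torch

x = torch.randn(4, 6).t()      # transposing makes the tensor non-contiguous
print(x.is_contiguous())       # False
print(x.reshape(-1).shape)     # works: reshape copies when it has to
try:
    x.view(-1)                 # view never copies, so this raises
except RuntimeError as err:
    print('view failed:', err)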