Splitting a dataset into training and validation sets in PyTorch, and batching and loading the training set — torch.utils.data.TensorDataset, torch.utils.data.random_split, DataLoader

import torch
from torch.utils.data import random_split
import torch.utils.data as Data
train_x = torch.randn(10,8)
train_y = torch.randn(10,2)
print(train_x)
print(train_y)

Output:
tensor([[ 0.5008, 0.9868, -0.7672, -1.1820, -2.6178, 1.4705, -1.7990, -1.1078],
[-1.5842, 1.5206, 0.7272, -2.5919, 0.8682, 0.8757, 0.4569, -0.2744],
[-0.2444, -0.5412, -0.1766, 1.2055, -0.3636, 0.7021, -1.1178, 0.0898],
[ 0.4265, 0.0072, 0.0930, -0.6339, -0.9330, 0.5838, 0.0063, -1.0317],
[-0.5715, -0.0705, -1.4860, -0.6964, -0.6595, -0.1626, -0.9456, -1.3202],
[ 0.6300, 0.5818, -0.9379, 0.9910, -0.9728, -0.4468, 0.9327, 1.1673],
[-1.4601, -0.2334, 0.4478, 0.9095, -0.3818, 0.4027, 0.4042, 0.0059],
[-0.0446, -1.7432, -0.6294, -0.4040, 0.2583, -0.3803, 0.0877, 0.5360],
[ 2.0558, 1.5085, 0.5044, 0.3813, -0.7915, -1.5292, 0.2047, -1.0494],
[ 0.8640, 0.3738, 1.4807, 0.9262, 0.3545, 0.9699, -2.2665, 0.3594]])
tensor([[-1.0027, 0.1449],
[ 0.2390, -1.5291],
[ 0.1028, 0.3678],
[-0.1806, 2.0617],
[ 0.0627, -0.7183],
[-1.7710, -0.2113],
[ 1.3260, 0.6122],
[-0.3938, 0.5924],
[ 1.3044, 0.8457],
[ 0.3679, -1.9822]])
Pack the training data together with its labels

dataset = Data.TensorDataset(train_x, train_y)  # wrap the training features and labels into one dataset
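TensorDataset pairs the two tensors row by row, so indexing the dataset returns one (feature, label) tuple. A quick self-contained sketch of that behavior:

```python
import torch
from torch.utils.data import TensorDataset

train_x = torch.randn(10, 8)
train_y = torch.randn(10, 2)
dataset = TensorDataset(train_x, train_y)

# Indexing returns a (feature, label) tuple taken from the same row.
x0, y0 = dataset[0]
print(len(dataset))        # 10 samples
print(x0.shape, y0.shape)  # torch.Size([8]) torch.Size([2])
```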

Split the data and its labels together in an 8:2 ratio

train_data, eval_data = random_split(dataset, [round(0.8*train_x.shape[0]), round(0.2*train_x.shape[0])], generator=torch.Generator().manual_seed(42))
for i in train_data:
    print(i)

Output (8 samples, confirming the split succeeded):
(tensor([-0.2444, -0.5412, -0.1766, 1.2055, -0.3636, 0.7021, -1.1178, 0.0898]), tensor([0.1028, 0.3678]))
(tensor([-1.4601, -0.2334, 0.4478, 0.9095, -0.3818, 0.4027, 0.4042, 0.0059]), tensor([1.3260, 0.6122]))
(tensor([-1.5842, 1.5206, 0.7272, -2.5919, 0.8682, 0.8757, 0.4569, -0.2744]), tensor([ 0.2390, -1.5291]))
(tensor([ 2.0558, 1.5085, 0.5044, 0.3813, -0.7915, -1.5292, 0.2047, -1.0494]), tensor([1.3044, 0.8457]))
(tensor([-0.5715, -0.0705, -1.4860, -0.6964, -0.6595, -0.1626, -0.9456, -1.3202]), tensor([ 0.0627, -0.7183]))
(tensor([ 0.6300, 0.5818, -0.9379, 0.9910, -0.9728, -0.4468, 0.9327, 1.1673]), tensor([-1.7710, -0.2113]))
(tensor([ 0.5008, 0.9868, -0.7672, -1.1820, -2.6178, 1.4705, -1.7990, -1.1078]), tensor([-1.0027, 0.1449]))
(tensor([ 0.8640, 0.3738, 1.4807, 0.9262, 0.3545, 0.9699, -2.2665, 0.3594]), tensor([ 0.3679, -1.9822]))
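Rounding both lengths independently works here, but for some ratios and dataset sizes the two rounded values do not sum to len(dataset), and random_split then raises an error. A slightly more robust sketch derives the second length by subtraction:

```python
import torch
from torch.utils.data import TensorDataset, random_split

train_x = torch.randn(10, 8)
train_y = torch.randn(10, 2)
dataset = TensorDataset(train_x, train_y)

# Compute only the first length, then take the remainder, so the two
# lengths always sum exactly to len(dataset) for any ratio or size.
n_train = int(0.8 * len(dataset))
train_data, eval_data = random_split(
    dataset,
    [n_train, len(dataset) - n_train],
    generator=torch.Generator().manual_seed(42),
)
print(len(train_data), len(eval_data))  # 8 2
```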
Splitting into batches and loading

loader = Data.DataLoader(dataset=train_data, batch_size=2, shuffle=True, num_workers=0, drop_last=False)
for step, (batch_x, batch_y) in enumerate(loader):  # batch_x/batch_y avoid shadowing train_x/train_y
    print(step, ':', (batch_x, batch_y))

Output (4 batches of 2 samples each, since batch_size=2):
0 : (tensor([[ 0.5008, 0.9868, -0.7672, -1.1820, -2.6178, 1.4705, -1.7990, -1.1078],
[-0.5715, -0.0705, -1.4860, -0.6964, -0.6595, -0.1626, -0.9456, -1.3202]]), tensor([[-1.0027, 0.1449],
[ 0.0627, -0.7183]]))
1 : (tensor([[ 0.6300, 0.5818, -0.9379, 0.9910, -0.9728, -0.4468, 0.9327, 1.1673],
[ 2.0558, 1.5085, 0.5044, 0.3813, -0.7915, -1.5292, 0.2047, -1.0494]]), tensor([[-1.7710, -0.2113],
[ 1.3044, 0.8457]]))
2 : (tensor([[-0.2444, -0.5412, -0.1766, 1.2055, -0.3636, 0.7021, -1.1178, 0.0898],
[ 0.8640, 0.3738, 1.4807, 0.9262, 0.3545, 0.9699, -2.2665, 0.3594]]), tensor([[ 0.1028, 0.3678],
[ 0.3679, -1.9822]]))
3 : (tensor([[-1.5842, 1.5206, 0.7272, -2.5919, 0.8682, 0.8757, 0.4569, -0.2744],
[-1.4601, -0.2334, 0.4478, 0.9095, -0.3818, 0.4027, 0.4042, 0.0059]]), tensor([[ 0.2390, -1.5291],
[ 1.3260, 0.6122]]))
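The held-out eval_data split is typically given its own DataLoader. A minimal sketch, with shuffle=False since shuffling adds nothing to validation metrics and a fixed order keeps runs reproducible:

```python
import torch
from torch.utils.data import TensorDataset, random_split, DataLoader

dataset = TensorDataset(torch.randn(10, 8), torch.randn(10, 2))
train_data, eval_data = random_split(
    dataset, [8, 2], generator=torch.Generator().manual_seed(42))

# Validation batches are served in a fixed order (shuffle=False).
eval_loader = DataLoader(eval_data, batch_size=2, shuffle=False)
batch_x, batch_y = next(iter(eval_loader))
print(batch_x.shape, batch_y.shape)  # torch.Size([2, 8]) torch.Size([2, 2])
```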

Complete code:

import torch
from torch.utils.data import random_split
import torch.utils.data as Data
train_x = torch.randn(10,8)
train_y = torch.randn(10,2)
print(train_x)
print(train_y)
dataset = Data.TensorDataset(train_x, train_y)  # wrap the training features and labels into one dataset
train_data, eval_data = random_split(dataset, [round(0.8*train_x.shape[0]), round(0.2*train_x.shape[0])], generator=torch.Generator().manual_seed(42))  # randomly split the dataset into training and validation sets
for i in train_data:
    print(i)
loader = Data.DataLoader(dataset=train_data, batch_size=2, shuffle=True, num_workers=0, drop_last=False)
for step, (batch_x, batch_y) in enumerate(loader):  # batch_x/batch_y avoid shadowing train_x/train_y
    print(step, ':', (batch_x, batch_y))
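In practice a loader like this is consumed inside a training loop. A minimal sketch under stated assumptions: the nn.Linear model, MSE loss, learning rate, and epoch count are all illustrative stand-ins, not part of the original example:

```python
import torch
import torch.nn as nn
from torch.utils.data import TensorDataset, random_split, DataLoader

dataset = TensorDataset(torch.randn(10, 8), torch.randn(10, 2))
train_data, eval_data = random_split(
    dataset, [8, 2], generator=torch.Generator().manual_seed(42))
loader = DataLoader(train_data, batch_size=2, shuffle=True)

model = nn.Linear(8, 2)                  # hypothetical stand-in for a real network
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

for epoch in range(3):                   # epoch count chosen only for illustration
    for batch_x, batch_y in loader:      # 4 batches of 2 samples each
        optimizer.zero_grad()
        loss = loss_fn(model(batch_x), batch_y)
        loss.backward()
        optimizer.step()
```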