Using torch.utils.data.DataLoader

torch.utils.data.DataLoader(dataset, batch_size, shuffle=True, drop_last=True)

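As a quick orientation, here is a minimal runnable sketch of the call above, wrapping a toy TensorDataset (the dataset contents and sizes are illustrative, not from the original post):

import torch
from torch.utils.data import TensorDataset, DataLoader

# A toy map-style dataset: 21 feature vectors with matching labels (illustrative).
features = torch.randn(21, 3)
labels = torch.arange(21)
dataset = TensorDataset(features, labels)

# Batch 4 samples at a time, reshuffle each epoch, drop the incomplete last batch.
loader = DataLoader(dataset, batch_size=4, shuffle=True, drop_last=True)

for x, y in loader:
    print(x.shape, y)  # torch.Size([4, 3]) and a tensor of 4 labels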

Init signature: torch.utils.data.DataLoader(*args, **kwds)
Docstring:     
Data loader. Combines a dataset and a sampler, and provides an iterable over
the given dataset.

The :class:`~torch.utils.data.DataLoader` supports both map-style and
iterable-style datasets with single- or multi-process loading, customizing
loading order and optional automatic batching (collation) and memory pinning.

See :py:mod:`torch.utils.data` documentation page for more details.

Args:
    dataset (Dataset): dataset from which to load the data. (i.e., your dataset)
    batch_size (int, optional): how many samples per batch to load
        (default: ``1``). (the number of samples loaded per batch)
    shuffle (bool, optional): set to ``True`` to have the data reshuffled
        at every epoch (default: ``False``). (whether to shuffle the order of the dataset)
    sampler (Sampler or Iterable, optional): defines the strategy to draw
        samples from the dataset. Can be any ``Iterable`` with ``__len__``
        implemented. If specified, :attr:`shuffle` must not be specified.
    batch_sampler (Sampler or Iterable, optional): like :attr:`sampler`, but
        returns a batch of indices at a time. Mutually exclusive with
        :attr:`batch_size`, :attr:`shuffle`, :attr:`sampler`,
        and :attr:`drop_last`.
    num_workers (int, optional): how many subprocesses to use for data
        loading. ``0`` means that the data will be loaded in the main process.
        (default: ``0``)
    collate_fn (callable, optional): merges a list of samples to form a
        mini-batch of Tensor(s).  Used when using batched loading from a
        map-style dataset.
    pin_memory (bool, optional): If ``True``, the data loader will copy Tensors
        into CUDA pinned memory before returning them.  If your data elements
        are a custom type, or your :attr:`collate_fn` returns a batch that is a custom type,
        see the example below.
    drop_last (bool, optional): set to ``True`` to drop the last incomplete batch,
        if the dataset size is not divisible by the batch size. If ``False`` and
        the size of dataset is not divisible by the batch size, then the last batch
        will be smaller. (default: ``False``) (when True, if the dataset size is not
        divisible by the batch size, the last incomplete batch is dropped)
    timeout (numeric, optional): if positive, the timeout value for collecting a batch
        from workers. Should always be non-negative. (default: ``0``)
    worker_init_fn (callable, optional): If not ``None``, this will be called on each
        worker subprocess with the worker id (an int in ``[0, num_workers - 1]``) as
        input, after seeding and before data loading. (default: ``None``)
    prefetch_factor (int, optional, keyword-only arg): Number of batches loaded
        in advance by each worker. ``2`` means there will be a total of
        2 * num_workers batches prefetched across all workers. (default: ``2``)
    persistent_workers (bool, optional): If ``True``, the data loader will not shut down
        the worker processes after a dataset has been consumed once. This keeps the
        workers' `Dataset` instances alive. (default: ``False``)


.. warning:: If the ``spawn`` start method is used, :attr:`worker_init_fn`
             cannot be an unpicklable object, e.g., a lambda function. See
             :ref:`multiprocessing-best-practices` for more details related
             to multiprocessing in PyTorch.

.. warning:: ``len(dataloader)`` heuristic is based on the length of the sampler used.
             When :attr:`dataset` is an :class:`~torch.utils.data.IterableDataset`,
             it instead returns an estimate based on ``len(dataset) / batch_size``, with proper
             rounding depending on :attr:`drop_last`, regardless of multi-process loading
             configurations. This represents the best guess PyTorch can make because PyTorch
             trusts user :attr:`dataset` code in correctly handling multi-process
             loading to avoid duplicate data.

             However, if sharding results in multiple workers having incomplete last batches,
             this estimate can still be inaccurate, because (1) an otherwise complete batch can
             be broken into multiple ones and (2) more than one batch worth of samples can be
             dropped when :attr:`drop_last` is set. Unfortunately, PyTorch can not detect such
             cases in general.

             See `Dataset Types`_ for more details on these two types of datasets and how
             :class:`~torch.utils.data.IterableDataset` interacts with
             `Multi-process data loading`_.

.. warning:: See :ref:`reproducibility`, and :ref:`dataloader-workers-random-seed`, and
             :ref:`data-loading-randomness` notes for random seed related questions.
File:           ~/miniforge3/envs/pytorchenv/lib/python3.8/site-packages/torch/utils/data/dataloader.py
Type:           type
Subclasses:     
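
For pin_memory, a common pattern (a hedged sketch, assuming a CUDA device is available; the dataset is illustrative) is to combine pinned host memory with non-blocking device transfers:

import torch
from torch.utils.data import TensorDataset, DataLoader

dataset = TensorDataset(torch.randn(100, 3), torch.arange(100))
# pin_memory=True copies each batch into page-locked host memory,
# which enables faster, asynchronous host-to-GPU transfers.
loader = DataLoader(dataset, batch_size=10, pin_memory=True)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
for x, y in loader:
    # non_blocking=True only overlaps the copy when the source tensor is pinned.
    x = x.to(device, non_blocking=True)
    y = y.to(device, non_blocking=True)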

Additional notes on the parameters used:

1. The DataLoader returns an iterable object.

2. num_workers: an int giving how many subprocesses to use for data loading; 0 (the default) means the data is loaded entirely in the main process (see the sketch below).
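
A minimal sketch of multi-worker loading (the dataset is illustrative; the `if __name__ == "__main__"` guard is needed because workers may be started as subprocesses):

import torch
from torch.utils.data import TensorDataset, DataLoader

def main():
    dataset = TensorDataset(torch.randn(100, 3), torch.arange(100))
    # Two subprocesses fetch and collate batches; num_workers=0 would
    # load everything in the main process instead.
    loader = DataLoader(dataset, batch_size=10, num_workers=2)
    for x, y in loader:
        print(x.shape)

if __name__ == "__main__":
    main()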


Examples:

Verifying shuffle:

test_load1 = torch.utils.data.DataLoader(torch.arange(101), 10, shuffle=False)
for i in test_load1:
    print(i)

Output:

tensor([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
tensor([10, 11, 12, 13, 14, 15, 16, 17, 18, 19])
tensor([20, 21, 22, 23, 24, 25, 26, 27, 28, 29])
tensor([30, 31, 32, 33, 34, 35, 36, 37, 38, 39])
tensor([40, 41, 42, 43, 44, 45, 46, 47, 48, 49])
tensor([50, 51, 52, 53, 54, 55, 56, 57, 58, 59])
tensor([60, 61, 62, 63, 64, 65, 66, 67, 68, 69])
tensor([70, 71, 72, 73, 74, 75, 76, 77, 78, 79])
tensor([80, 81, 82, 83, 84, 85, 86, 87, 88, 89])
tensor([90, 91, 92, 93, 94, 95, 96, 97, 98, 99])
tensor([100])

Now with shuffle=True:

test_load2 = torch.utils.data.DataLoader(torch.arange(101), 10, shuffle=True)
for i in test_load2:
    print(i)

Output:

tensor([30, 16, 19,  0,  4, 35, 93, 37, 49,  2])
tensor([82, 59, 98, 26, 44, 36, 55, 40, 61, 50])
tensor([54, 60, 34, 57, 23, 81, 63, 17, 24, 75])
tensor([ 9, 76, 85, 91, 73,  8, 42, 33, 70, 94])
tensor([ 5, 52, 47, 20, 67, 38,  3,  1, 80, 28])
tensor([ 78,  79,  12,  72,  58,  62,  95, 100,   7,  53])
tensor([65, 96, 69, 89, 29, 10, 71, 32, 74, 88])
tensor([77, 27, 18,  6, 43, 11, 39, 14, 21, 22])
tensor([15, 66, 84, 48, 64, 13, 45, 56, 97, 99])
tensor([68, 41, 31, 83, 51, 25, 86, 87, 46, 92])
tensor([90])
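
Note that the shuffled order above changes from run to run. To make it reproducible, you can pass a seeded torch.Generator (a small sketch, not part of the original example):

import torch
from torch.utils.data import DataLoader

g = torch.Generator()
g.manual_seed(0)  # fixes the shuffle order across runs
test_load = DataLoader(torch.arange(101), batch_size=10, shuffle=True, generator=g)
for batch in test_load:
    print(batch)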

Verifying drop_last:

test_load3 = torch.utils.data.DataLoader(torch.arange(101), 10, shuffle=False, drop_last=True)
for i in test_load3:
    print(i)

Output: the last incomplete batch (101 is not divisible by 10) is dropped:

tensor([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
tensor([10, 11, 12, 13, 14, 15, 16, 17, 18, 19])
tensor([20, 21, 22, 23, 24, 25, 26, 27, 28, 29])
tensor([30, 31, 32, 33, 34, 35, 36, 37, 38, 39])
tensor([40, 41, 42, 43, 44, 45, 46, 47, 48, 49])
tensor([50, 51, 52, 53, 54, 55, 56, 57, 58, 59])
tensor([60, 61, 62, 63, 64, 65, 66, 67, 68, 69])
tensor([70, 71, 72, 73, 74, 75, 76, 77, 78, 79])
tensor([80, 81, 82, 83, 84, 85, 86, 87, 88, 89])
tensor([90, 91, 92, 93, 94, 95, 96, 97, 98, 99])
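
Finally, the collate_fn argument described above can be customized when the default stacking does not apply. Below is a hedged sketch that pads variable-length sequences into a single batch tensor (the sequence lengths and padding scheme are illustrative):

import torch
from torch.utils.data import DataLoader

# Variable-length 1-D tensors cannot be stacked by the default collate function.
sequences = [torch.arange(n) for n in (3, 5, 2, 4)]

def pad_collate(batch):
    # Pad every sequence in the batch to the length of the longest one.
    max_len = max(len(s) for s in batch)
    return torch.stack([
        torch.nn.functional.pad(s, (0, max_len - len(s))) for s in batch
    ])

loader = DataLoader(sequences, batch_size=2, collate_fn=pad_collate)
for batch in loader:
    print(batch)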
