How to train on multiple GPUs with PyTorch

  • PyTorch supports two forms of multi-GPU training: DataParallel (data parallelism) and model parallelism. This post focuses on DataParallel.
  • Mechanism: DataParallel splits each minibatch into as many chunks as there are GPUs, replicates the original model onto each GPU, and runs the forward pass on every replica; during the backward pass, the gradients are summed (not averaged) back into the original model.
  • Two ways to specify GPU ids:
    • Via the environment variable: os.environ["CUDA_VISIBLE_DEVICES"] = "1,2,3,4". The advantage is that only the listed GPUs are visible to the process; all others are hidden entirely.
    • Via the wrapper's device_ids parameter: all GPUs remain visible, but the model is replicated only onto the specified ones.
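The two ways above can be sketched as follows. This is a minimal example, not the post's own code: `device_ids=[0, 1]` is illustrative, and the sketch falls back to a plain CPU run when fewer than two GPUs are present.

```python
import os
import torch
import torch.nn as nn

# Option 1: hide all but the listed GPUs from this process.
# Must be set before CUDA is first initialized; the visible GPUs
# are then renumbered starting from 0.
os.environ["CUDA_VISIBLE_DEVICES"] = "1,2,3,4"

model = nn.Linear(5, 2)

if torch.cuda.device_count() > 1:
    # Option 2: all visible GPUs stay available; replicate only onto
    # these ids (illustrative; ids are relative to the visible devices).
    model = nn.DataParallel(model, device_ids=[0, 1])
else:
    # With zero or one GPU, DataParallel simply runs the wrapped module.
    model = nn.DataParallel(model)

out = model(torch.randn(30, 5))
print(out.shape)  # torch.Size([30, 2])
```

Note that after wrapping, the original module is reachable as `model.module`.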
# -*- encoding: utf-8 -*-

"""
Optional: Data Parallelism
==========================
**Authors**: `Sung Kim <https://github.com/hunkim>`_ and `Jenny Kang <https://github.com/jennykang>`_

In this tutorial, we will learn how to use multiple GPUs using ``DataParallel``.

It's very easy to use GPUs with PyTorch. You can put the model on a GPU:

.. code:: python

    device = torch.device("cuda:0")
    model.to(device)

Then, you can copy all your tensors to the GPU:

.. code:: python

    mytensor = my_tensor.to(device)

Please note that just calling ``my_tensor.to(device)`` returns a new copy of
``my_tensor`` on GPU instead of rewriting ``my_tensor``. You need to assign it to
a new tensor and use that tensor on the GPU.

It's natural to execute your forward, backward propagations on multiple GPUs.
However, PyTorch will only use one GPU by default. You can easily run your
operations on multiple GPUs by making your model run in parallel using
``DataParallel``:

.. code:: python

    model = nn.DataParallel(model)

That's the core behind this tutorial. We will explore it in more detail below.
"""

######################################################################
# Imports and parameters
# ----------------------
#
# Import PyTorch modules and define parameters.
#
import os
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader

######################################################################
# Specify visible device ids
# --------------------------
#
# If there are 5 GPUs but you want to use only the last four, set
# ``os.environ["CUDA_VISIBLE_DEVICES"] = "1,2,3,4"``, where 0 denotes
# the first GPU.
#

os.environ["CUDA_VISIBLE_DEVICES"] = "1,2,3,4"

# Parameters and DataLoaders
input_size = 5
output_size = 2

batch_size = 30
data_size = 100
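DataParallel scatters each minibatch along dim 0 in chunk-style pieces, so with ``batch_size = 30`` and 4 visible GPUs the per-device shares can be previewed on the CPU. Using ``torch.chunk`` to mirror the split is an assumption about the exact scatter rule, not the library's documented contract:

```python
import torch

batch = torch.randn(30, 5)
# Preview of the per-GPU shares a 4-way scatter along dim 0 would produce
# (assuming chunk-style splitting: ceil(30/4) = 8 rows per full chunk).
shares = torch.chunk(batch, 4, dim=0)
print([s.shape[0] for s in shares])  # [8, 8, 8, 6]
```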

######################################################################
# Device
#
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")


######################################################################
# Dummy DataSet
# -------------
#
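The post is cut off at this point. As a sketch of how the official tutorial continues from here — a random dataset, a toy model, and a run loop — the following follows the tutorial's structure, though the exact details are an assumption:

```python
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader

input_size, output_size = 5, 2
batch_size, data_size = 30, 100

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")


class RandomDataset(Dataset):
    """A dataset of random feature vectors, standing in for real data."""

    def __init__(self, size, length):
        self.len = length
        self.data = torch.randn(length, size)

    def __getitem__(self, index):
        return self.data[index]

    def __len__(self):
        return self.len


rand_loader = DataLoader(dataset=RandomDataset(input_size, data_size),
                         batch_size=batch_size, shuffle=True)


class Model(nn.Module):
    """A single linear layer that reports the batch size it sees."""

    def __init__(self, input_size, output_size):
        super().__init__()
        self.fc = nn.Linear(input_size, output_size)

    def forward(self, x):
        output = self.fc(x)
        print("\tIn Model: input size", x.size(),
              "output size", output.size())
        return output


model = Model(input_size, output_size)
if torch.cuda.device_count() > 1:
    # Replicate the model across all visible GPUs.
    model = nn.DataParallel(model)
model.to(device)

for data in rand_loader:
    inputs = data.to(device)
    outputs = model(inputs)
    print("Outside: input size", inputs.size(),
          "output size", outputs.size())
```

On a multi-GPU machine, the "In Model" lines print once per replica with the per-GPU share of the batch, while the "Outside" lines always show the full batch — which makes the scatter/gather behavior visible.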