# -*- coding: utf-8 -*-
"""
Optional: Data Parallelism
==========================

**Authors**: `Sung Kim <https://github.com/hunkim>`_ and `Jenny Kang <https://github.com/jennykang>`_

In this tutorial, we will learn how to use multiple GPUs with ``DataParallel``.

It's very easy to use GPUs with PyTorch. You can put the model on a GPU:

.. code:: python

    device = torch.device("cuda:0")
    model.to(device)

Then, you can copy all your tensors to the GPU:

.. code:: python

    mytensor = my_tensor.to(device)

Please note that just calling ``my_tensor.to(device)`` returns a new copy of
``my_tensor`` on the GPU instead of rewriting ``my_tensor``. You need to assign
it to a new tensor and use that tensor on the GPU.
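
For example, a minimal sketch (the tensor contents here are just a placeholder):

.. code:: python

    my_tensor = torch.ones(2, 2)      # lives on the CPU
    my_tensor = my_tensor.to(device)  # reassign so the name refers to the GPU copy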

It's natural to execute your forward and backward propagations on multiple GPUs.
However, PyTorch will only use one GPU by default. You can easily run your
operations on multiple GPUs by making your model run in parallel with
``DataParallel``:

.. code:: python

    model = nn.DataParallel(model)

That's the core behind this tutorial. We will explore it in more detail below.
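
In practice, you may want to wrap the model only when more than one GPU is
actually available; a minimal sketch:

.. code:: python

    if torch.cuda.device_count() > 1:
        model = nn.DataParallel(model)
    model.to(device)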
"""####################################################################### Imports and parameters# ----------------------## Import PyTorch modules and define parameters.#import os
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader


######################################################################
# Specify visible device ids
# --------------------------
#
# If there are 5 GPUs but you only want to use the last four, set
# ``os.environ["CUDA_VISIBLE_DEVICES"] = "1,2,3,4"``; GPU 0 is the first
# device, so this makes devices 1-4 visible to PyTorch.
#

os.environ["CUDA_VISIBLE_DEVICES"] = "1,2,3,4"
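
# A quick sanity check (a minimal sketch; the reported count depends on your
# machine). Because the environment variable is set before any CUDA call,
# PyTorch should only see the visible devices.
if torch.cuda.is_available():
    print("Visible GPUs:", torch.cuda.device_count())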

# Parameters and DataLoaders
input_size = 5
output_size = 2

batch_size = 30
data_size = 100


######################################################################
# Device
#
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")


######################################################################
# Dummy DataSet