pytorch-multi-gpu

1. nn.DataParallel


model = nn.DataParallel(model.cuda(1), device_ids=[1,2,3,4,5])  # parameters must live on device_ids[0], i.e. cuda(1)

criteria = nn.Loss()  # placeholder for a real loss, e.g. nn.CrossEntropyLoss()
# where to place the criterion: i. .cuda(1) -> 20G-21G memory; ii. .cuda() -> 18.5G-12.7G; iii. nothing -> 16.5G-12.7G
# all three options take almost the same time per batch

data = data.cuda(1)

label = label.cuda(1)



out = model(data)


or:

model = nn.DataParallel(model, device_ids=[1,2,3,4,5]).cuda(1)


note: the original (unwrapped) module is accessible as model.module



loss = criteria(out, label)
loss.backward()
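
Putting the pieces together, a minimal end-to-end sketch; Net, loader, and the optimizer settings are made-up stand-ins:

import torch
import torch.nn as nn

model = Net()                                   # hypothetical model
model = nn.DataParallel(model.cuda(1), device_ids=[1, 2, 3, 4, 5])
criteria = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for data, label in loader:                      # hypothetical DataLoader
    data = data.cuda(1)                         # inputs go to device_ids[0]
    label = label.cuda(1)
    out = model(data)                           # batch is scattered across GPUs 1-5
    loss = criteria(out, label)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()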

2. nn.parallel.data_parallel


class _NET(nn.Module):

    def __init__(self, main, ngpu):
        super(_NET, self).__init__()
        self.main = main    # the actual submodule, e.g. an nn.Sequential
        self.ngpu = ngpu

    def forward(self, x):
        # use the functional API only for CUDA input and more than one GPU
        if isinstance(x.data, torch.cuda.FloatTensor) and self.ngpu > 1:
            out = nn.parallel.data_parallel(self.main, x, range(self.ngpu))
        else:
            out = self.main(x)
        return out
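
A hedged usage sketch; the submodule, sizes, and input are made up:

net = _NET(nn.Sequential(nn.Linear(128, 64), nn.ReLU()), ngpu=4)
net = net.cuda()                  # range(ngpu) starts at GPU 0, so the module lives there
x = torch.randn(32, 128).cuda()
out = net(x)                      # the batch of 32 is split 8 per GPU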


if device_ids[0] uses much more memory than the others, move data and label to another GPU and gather the outputs there with output_device (see the sketch below)
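
A sketch under the assumption that cuda(2) is the less loaded GPU (the name `other` is made up):

other = 2                          # hypothetical less-loaded GPU id
model = nn.DataParallel(model.cuda(1), device_ids=[1, 2, 3, 4, 5], output_device=other)
data = data.cuda(other)
label = label.cuda(other)          # label must live where the outputs are gathered
out = model(data)                  # outputs are gathered on cuda(2) instead of cuda(1)
loss = criteria(out, label)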


3. NEW API

torch.version.cuda               # CUDA version PyTorch was built with

torch.cuda.get_device_name(0)    # name of GPU 0
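
For example, to list every visible GPU (a small sketch; assumes CUDA is available):

import torch

print(torch.version.cuda)
for i in range(torch.cuda.device_count()):
    print(i, torch.cuda.get_device_name(i))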



------------------- errors -------------------

1.

data = data.cuda()

RuntimeError: Assertion `THCTensor_(checkGPU)(state, 4, input, target, output, total_weight)' failed. Some of weight/gradient/input tensors are located on different GPUs. Please move them to a single one. at /b/wheel/pytorch-src/torch/lib/THCUNN/generic/SpatialClassNLLCriterion.cu:46
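
cause: .cuda() with no argument uses the current device (cuda(0) by default), while the outputs are gathered on device_ids[0] = cuda(1), so the criterion sees tensors on two GPUs; a sketch of the fix:

data = data.cuda(1)     # match device_ids[0]
label = label.cuda(1)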



2. 

nn.DataParallel(model.cuda(), device_ids=[1,2,3,4,5])

    result = self.forward(*input, **kwargs)
  File "/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 60, in forward
    replicas = self.replicate(self.module, self.device_ids[:len(inputs)])
  File "/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 65, in replicate
    return replicate(module, device_ids)
  File "/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/replicate.py", line 12, in replicate
    param_copies = Broadcast(devices)(*params)
  File "/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/_functions.py", line 18, in forward
    outputs = comm.broadcast_coalesced(inputs, self.target_gpus)
  File "/anaconda3/lib/python3.6/site-packages/torch/cuda/comm.py", line 52, in broadcast_coalesced
    raise RuntimeError('all tensors must be on devices[0]')
RuntimeError: all tensors must be on devices[0]
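
cause: model.cuda() puts the parameters on cuda(0), while DataParallel with device_ids=[1,2,3,4,5] requires them on device_ids[0] = cuda(1); a sketch of the fix:

model = nn.DataParallel(model.cuda(1), device_ids=[1, 2, 3, 4, 5])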


3. 

nn.DataParallel(model, device_ids=[1,2,3,4,5])

    out = model(data, train_seqs.index(name))
  File "/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 206, in __call__
    result = self.forward(*input, **kwargs)
  File "anaconda3/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 60, in forward
    replicas = self.replicate(self.module, self.device_ids[:len(inputs)])
  File "/data1/ailab_view/wenyulv/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 65, in replicate
    return replicate(module, device_ids)
  File "/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/replicate.py", line 12, in replicate
    param_copies = Broadcast(devices)(*params)
  File "/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/_functions.py", line 14, in forward
    raise TypeError('Broadcast function not implemented for CPU tensors')
TypeError: Broadcast function not implemented for CPU tensors
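
cause: the model was never moved to a GPU, so DataParallel tries to broadcast CPU tensors; a sketch of the fix (either form from section 1 works):

model = nn.DataParallel(model, device_ids=[1, 2, 3, 4, 5]).cuda(1)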



------------------- reference -------------------

1. https://github.com/GunhoChoi/Kind_PyTorch_Tutorial/blob/master/09_GAN_LayerName_MultiGPU/GAN_LayerName_MultiGPU.py

2. http://pytorch.org/docs/master/nn.html#dataparallel
