Training ImageNet with pytorch/examples — RuntimeError: NCCL error...

Reference code:
pytorch/examples/imagenet

Error 1:

# (py36_pytorch)
python main.py \
>     -a resnet18 \
>     --lr 0.1 \
>     --dist-url 'tcp://127.0.0.1:23456' \
>     --dist-backend 'nccl' \
>     --multiprocessing-distributed \
>     --rank 0 \
>     /DATA/disk1/zhangxin/imagenet
Use GPU: 1 for training
Use GPU: 2 for training
Use GPU: 0 for training
=> creating model 'resnet18'
Use GPU: 3 for training
=> creating model 'resnet18'
Use GPU: 7 for training
=> creating model 'resnet18'
Use GPU: 4 for training
=> creating model 'resnet18'
Use GPU: 6 for training
=> creating model 'resnet18'
Use GPU: 5 for training
=> creating model 'resnet18'
=> creating model 'resnet18'
=> creating model 'resnet18'
Traceback (most recent call last):
  File "main.py", line 398, in <module>
    main()
  File "main.py", line 110, in main
    mp.spawn(main_worker, nprocs=ngpus_per_node, args=(ngpus_per_node, args))
  File "/home/work/anaconda3/envs/py36_pytorch/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 167, in spawn
    while not spawn_context.join():
  File "/home/work/anaconda3/envs/py36_pytorch/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 114, in join
    raise Exception(msg)
Exception:

-- Process 6 terminated with the following error:
Traceback (most recent call last):
  File "/home/work/anaconda3/envs/py36_pytorch/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 19, in _wrap
    fn(i, *args)
  File "/DATA/disk1/zhangxin/github/examples/imagenet/main.py", line 151, in main_worker
    model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.gpu])
  File "/home/work/anaconda3/envs/py36_pytorch/lib/python3.6/site-packages/torch/nn/parallel/distributed.py", line 215, in __init__
    self.broadcast_bucket_size)
  File "/home/work/anaconda3/envs/py36_pytorch/lib/python3.6/site-packages/torch/nn/parallel/distributed.py", line 377, in _dist_broadcast_coalesced
    dist._dist_broadcast_coalesced(self.process_group, tensors, buffer_size, False)
RuntimeError: NCCL error in: /opt/conda/conda-bld/pytorch_1544174967633/work/torch/lib/c10d/../c10d/NCCLUtils.hpp:39, invalid argument

Solution:

Add the --world-size argument (see the corrected command below).
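In the example's main.py, --world-size defaults to -1, and with --multiprocessing-distributed the effective world size is computed as ngpus_per_node * world_size, so leaving it unset ends up passing an invalid (negative) value to the NCCL process group init. A sketch of the corrected launch for a single node, keeping the same paths and options as above (set --world-size to the number of nodes if you use more than one):

python main.py \
    -a resnet18 \
    --lr 0.1 \
    --dist-url 'tcp://127.0.0.1:23456' \
    --dist-backend 'nccl' \
    --multiprocessing-distributed \
    --world-size 1 \
    --rank 0 \
    /DATA/disk1/zhangxin/imagenet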