Distributed multi-GPU training (DDP): pitfalls

I have recently been running the yolov10-repo version of RT-DETR for object detection, and ran into problems when moving from single-GPU to multi-GPU training.

Single-GPU training command (runs fine):

python main.py

Multi-GPU training command:

Multi-GPU training has to be launched through torch.distributed.run (the replacement for the deprecated torch.distributed.launch), normally on a single node. CUDA_VISIBLE_DEVICES selects which GPU indices are used; it can also be left out, with the devices specified directly inside main.py instead (see the sketch after the commands below). --nproc_per_node is the number of GPUs per node.

python -m torch.distributed.run --nproc_per_node=3 main.py

CUDA_VISIBLE_DEVICES=0,6,7 python -m torch.distributed.run --nproc_per_node=3 main.py
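
For specifying the devices inside main.py, here is a minimal sketch assuming the Ultralytics-style train() API that the traceback below points at; the model class, weight file and dataset name are illustrative, not the exact ones used in my run_detr.py:

# main.py -- illustrative sketch, not the actual training script from this post
from ultralytics import RTDETR

model = RTDETR("rtdetr-l.pt")

# Passing a list of GPU indices makes the trainer spawn one DDP process per GPU,
# so neither CUDA_VISIBLE_DEVICES nor the torch.distributed.run wrapper is needed.
model.train(data="coco8.yaml", epochs=10, device=[0, 6, 7])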

However, after launching multi-GPU training it throws an error, and sometimes the training processes simply hang. The error message is as follows:

[rank0]: Traceback (most recent call last):
[rank0]:   File "/home/zyy23/yolov10/run_detr.py", line 5, in <module>
[rank0]:     model.train(pretrained=True,
[rank0]:   File "/home/zyy23/yolov10/ultralytics/engine/model.py", line 657, in train
[rank0]:     self.trainer.train()
[rank0]:   File "/home/zyy23/yolov10/ultralytics/engine/trainer.py", line 213, in train
[rank0]:     self._do_train(world_size)
[rank0]:   File "/home/zyy23/yolov10/ultralytics/engine/trainer.py", line 381, in _do_train
[rank0]:     self.loss, self.loss_items = self.model(batch)
[rank0]:   File "/home/zyy23/anaconda3/envs/mypytorch_3.9/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
[rank0]:     return self._call_impl(*args, **kwargs)
[rank0]:   File "/home/zyy23/anaconda3/envs/mypytorch_3.9/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
[rank0]:     return forward_call(*args, **kwargs)
[rank0]:   File "/home/zyy23/anaconda3/envs/mypytorch_3.9/lib/python3.9/site-packages/torch/nn/parallel/distributed.py", line 1632, in forward
[rank0]:     inputs, kwargs = self._pre_forward(*inputs, **kwargs)
[rank0]:   File "/home/zyy23/anaconda3/envs/mypytorch_3.9/lib/python3.9/site-packages/torch/nn/parallel/distributed.py", line 1523, in _pre_forward
[rank0]:     if torch.is_grad_enabled() and self.reducer._rebuild_buckets():
[rank0]: RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`, and by
[rank0]: making sure all `forward` function outputs participate in calculating loss.
[rank0]: If you already have done the above, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable).
[rank0]: Parameters which did not receive grad for rank 0: model.28.dec_bbox_head.5.layers.2.bias, model.28.dec_bbox_head.5.layers.2.weight, model.28.dec_bbox_head.5.layers.1.bias, model.28.dec_bbox_head.5.layers.1.weight, model.28.dec_bbox_head.5.layers.0.bias, model.28.dec_bbox_head.5.layers.0.weight, model.28.dec_bbox_head.4.layers.2.bias, model.28.dec_bbox_head.4.layers.2.weight, model.28.dec_bbox_head.4.layers.1.bias, model.28.dec_bbox
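
The message itself points at one workaround: pass find_unused_parameters=True when the model is wrapped in DistributedDataParallel, so the reducer tolerates parameters (here the unused dec_bbox_head decoder layers) that receive no gradient in an iteration. Below is only a minimal sketch of what that flag looks like in plain PyTorch, assuming you control the DDP wrapping yourself; the traceback shows that in this codebase the wrapping is done inside the ultralytics engine (trainer.py), not in run_detr.py, so any such flag would have to be added there.

# Minimal sketch of the fix suggested by the error message, not the actual
# ultralytics trainer code.
import os
import torch
import torch.distributed as dist
import torch.nn as nn

def wrap_model_for_ddp(model: nn.Module) -> nn.Module:
    # LOCAL_RANK is set for each process by torch.distributed.run
    local_rank = int(os.environ["LOCAL_RANK"])
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(local_rank)
    model = model.cuda(local_rank)
    return nn.parallel.DistributedDataParallel(
        model,
        device_ids=[local_rank],
        # Let the reducer skip parameters that did not take part in producing
        # the loss, instead of raising the RuntimeError above. This adds some
        # per-iteration overhead.
        find_unused_parameters=True,
    )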