In short: the main cause of this problem is that the Windows environment does not support NCCL, so it is best not to use ddp there.
1. Cause analysis
Error traceback:
result = fn(self, *args, **kwargs)
File "D:\develop\workspace\mrc-for-flat-nested-ner-master\venv\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 1355, in test
results = self.__test_given_model(model, test_dataloaders)
File "D:\develop\workspace\mrc-for-flat-nested-ner-master\venv\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 1418, in __test_given_model
results = self.fit(model)
File "D:\develop\workspace\mrc-for-flat-nested-ner-master\venv\lib\site-packages\pytorch_lightning\trainer\states.py", line 48, in wrapped_fn
result = fn(self, *args, **kwargs)
File "D:\develop\workspace\mrc-for-flat-nested-ner-master\venv\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 1058, in fit
results = self.accelerator_backend.spawn_ddp_children(model)
File "D:\develop\workspace\mrc-for-flat-nested-ner-master\venv\lib\site-packages\pytorch_lightning\accelerators\ddp_backend.py", line 123, in spawn_ddp_children
results = self.ddp_train(local_rank, mp_queue=None, model=model, is_master=True)
File "D:\develop\workspace\mrc-for-flat-nested-ner-master\venv\lib\site-packages\pytorch_lightning\accelerators\ddp_backend.py", line 161, in ddp_train
model.init_ddp_connection(
File "D:\develop\workspace\mrc-for-flat-nested-ner-master\venv\lib\site-packages\pytorch_lightning\core\lightning.py", line 908, in init_ddp_connection
torch_distrib.init_process_group(torch_backend, rank=global_rank, world_size=world_size)
File "D:\develop\workspace\mrc-for-flat-nested-ner-master\venv\lib\site-packages\torch\distributed\distributed_c10d.py", line 503, in init_process_group
_update_default_pg(_new_process_group_helper(
File "D:\develop\workspace\mrc-for-flat-nested-ner-master\venv\lib\site-packages\torch\distributed\distributed_c10d.py", line 597, in _new_process_group_helper
raise RuntimeError("Distributed package doesn't have NCCL "
RuntimeError: Distributed package doesn't have NCCL built in
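Before changing any code, it is worth confirming that the installed PyTorch build really lacks NCCL. A minimal probe (an illustration, not part of the original project) using the query functions provided by torch.distributed:

import torch.distributed as dist

# Check which distributed backends this PyTorch build ships with.
# On Windows builds NCCL is not compiled in, so the second line
# prints False, which matches the RuntimeError in the traceback above.
print("distributed available:", dist.is_available())
print("NCCL available:", dist.is_nccl_available())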
Analysis: the root cause is that systems such as Windows do not support NCCL, so the fix is simply not to enable ddp:
## Original code:
trainer = Trainer(gpus=[0], distributed_backend="ddp")
## Modified code:
trainer = Trainer(gpus=[0])
Solution: after making the change above, the error no longer occurs.
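If the same script needs to run on both Linux (where NCCL is normally available) and Windows, a small sketch like the following chooses the Trainer arguments at runtime instead of hard-coding the backend. The Trainer keywords match the pytorch_lightning version shown in the traceback; this is an assumed usage pattern, not the project's official setup:

import torch.distributed as dist
from pytorch_lightning import Trainer

# Sketch: enable ddp only where NCCL is actually built in;
# otherwise fall back to single-process training on one GPU,
# which is exactly the fix applied above.
if dist.is_nccl_available():
    trainer = Trainer(gpus=[0], distributed_backend="ddp")
else:
    trainer = Trainer(gpus=[0])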