(cp3) zhangrui@test:~/project/cp3/CenterPoint$ python -m torch.distributed.launch --nproc_per_node=4 ./tools/train.py ~/project/cp3/CenterPoint/work_dirs/voxsc_centerpoint_voxelnet_0075voxel_fix_bn_z.py
*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
*****************************************
No Tensorflow
No Tensorflow
Traceback (most recent call last):
File "./tools/train.py", line 137, in <module>
main()
File "./tools/train.py", line 86, in main
torch.distributed.init_process_group(backend="nccl", init_method="env://")
File "/home/zhangrui/anaconda3/envs/cp3/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 436, in init_process_group
store, rank, world_size = next(rendezvous_iterator)
File "/home/zhangrui/anaconda3/envs/cp3/lib/python3.7/site-packages/torch/distributed/rendezvous.py", line 179, in _env_rendezvous_handler
store = TCPStore(master_addr, master_port, world_size, start_daemon, timeout)
RuntimeError: Address already in use
The error means the TCP port is already occupied: when launching multiple jobs on one machine, each job must be given its own rendezvous port (default 29500) to avoid communication conflicts.
The fix is to specify a port at launch time: pass any free port number (it must be a valid TCP port, i.e. below 65536) before the path to the .py file:
python -m torch.distributed.launch --nproc_per_node=1 --master_port 29501 ./tools/train.py ~/project/cp3/CenterPoint/work_dirs/voxel_config/nusc_centerpoint_voxelnet_0075voxel_fix_bn_z.py
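If you launch many jobs from a script, you can let the OS pick a free port instead of hard-coding one. A minimal sketch (the helper name `find_free_port` is my own, not part of CenterPoint or PyTorch):

```python
import socket

def find_free_port() -> int:
    # Binding to port 0 asks the OS to assign an unused ephemeral port;
    # getsockname() then reveals which port was chosen.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("", 0))
        return s.getsockname()[1]

if __name__ == "__main__":
    # Pass the printed value as --master_port to torch.distributed.launch.
    print(find_free_port())
```

Note there is a small race window between closing this socket and the launcher re-binding the port, but in practice this is a common and reliable trick.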
Alternatively, find out which port is occupied (e.g. insert a print statement in the program to output it), look up the PID that owns that port with netstat -nltp, and then free the port with kill -9 PID.
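Before killing anything, you can also check from Python whether a given port (e.g. the default 29500) is already taken. A small sketch, with `port_in_use` being a hypothetical helper of my own:

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    # connect_ex returns 0 exactly when something is already
    # listening on (host, port); any other value means it is free.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        return s.connect_ex((host, port)) == 0

if __name__ == "__main__":
    print(port_in_use(29500))
```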