1. Problem Description
When I followed the instructions, I ran into a problem. My steps were:
- connect to my network
- set up conda
- conda create -n OFA
- conda activate OFA
- cd OFA-main
- pip install -r requirements.txt
- bash train_vqa_distributed.sh
I got:
The IPv6 network addresses of (XX.XX.XX.XX, 8214) cannot be retrieved (gai error: -2 - Name or service not known)
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 255) local_rank: 0
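A quick way to check whether that gai error is a plain name-resolution failure is to look up the master address with getent (here using localhost as a stand-in, since the real address is redacted above):

```shell
# "Name or service not known" (gai error -2) means getaddrinfo could not
# resolve the host given as MASTER_ADDR. Substitute your actual address
# for localhost to test it.
getent hosts localhost >/dev/null && echo "resolves" || echo "does not resolve"
```

If this prints "does not resolve" for your MASTER_ADDR, the distributed init will fail before training starts.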
Since I am not familiar with setting up a distributed environment, perhaps some parameters are wrong...
The full log is:
2022-12-03 15:04:01 - instantiator.py[line:21] - INFO: Created a temporary directory at /tmp/tmpm4mhx2jo
2022-12-03 15:04:01 - instantiator.py[line:76] - INFO: Writing /tmp/tmpm4mhx2jo/_remote_module_non_scriptable.py
2022-12-03 15:04:11 - utils.py[line:258] - INFO: distributed init (rank 0): env://
2022-12-03 15:04:11 - utils.py[line:261] - INFO: Start init
Retry: 1, with value error <class 'RuntimeError'>
/scratch/yz7357/anaconda/envs/OFA/lib/python3.7/site-packages/torch/distributed/launch.py:188: FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torchrun.
Note that --use_env is set by default in torchrun.
If your script expects `--local_rank` argument to be set, please
change it to read from `os.environ['LOCAL_RANK']` instead. See
https://pytorch.org/docs/stable/distributed.html#launch-utility for
further instructions
FutureWarning,
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 255) local_rank: 0 (pid: 1154670) of binary: /scratch/yz7357/anaconda/envs/OFA/bin/python3
Traceback (most recent call last):
File "/scratch/yz7357/anaconda/envs/OFA/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/scratch/yz7357/anaconda/envs/OFA/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/scratch/yz7357/anaconda/envs/OFA/lib/python3.7/site-packages/torch/distributed/launch.py", line 195, in <module>
main()
File "/scratch/yz7357/anaconda/envs/OFA/lib/python3.7/site-packages/torch/distributed/launch.py", line 191, in main
launch(args)
File "/scratch/yz7357/anaconda/envs/OFA/lib/python3.7/site-packages/torch/distributed/launch.py", line 176, in launch
run(args)
File "/scratch/yz7357/anaconda/envs/OFA/lib/python3.7/site-packages/torch/distributed/run.py", line 756, in run
)(*cmd_args)
File "/scratch/yz7357/anaconda/envs/OFA/lib/python3.7/site-packages/torch/distributed/launcher/api.py", line 132, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/scratch/yz7357/anaconda/envs/OFA/lib/python3.7/site-packages/torch/distributed/launcher/api.py", line 248, in launch_agent
failures=result.failures,
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
../../train.py FAILED
------------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2022-12-03_15:04:12
host : log-1.hpc.nyu.edu
rank : 0 (local_rank: 0)
exitcode : 255 (pid: 1154670)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
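As an aside, the FutureWarning in the log above says torch.distributed.launch is deprecated in favour of torchrun, which sets LOCAL_RANK in the environment instead of passing a `--local_rank` argument. A minimal sketch of reading it from a launched script:

```shell
# torchrun exports LOCAL_RANK to each worker process; a training script
# reads it from the environment rather than from a --local_rank flag.
# LOCAL_RANK=0 is set here by hand purely for illustration.
export LOCAL_RANK=0
echo "local rank: ${LOCAL_RANK}"
```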
2. Solution
I changed the header of the script train_vqa_distributed.sh and ran it with bash. Here is the original code:
# Number of GPUs per GPU worker
GPUS_PER_NODE=8
# Number of GPU workers, for single-worker training, please set to 1
WORKER_CNT=4
# The ip address of the rank-0 worker, for single-worker training, please set to localhost
export MASTER_ADDR=XX.XX.XX.XX
# The port for communication
export MASTER_PORT=8214
# The rank of this worker, should be in {0, ..., WORKER_CNT-1}, for single-worker training, please set to 0
export RANK=0
Here are my changes:
# Number of GPUs per GPU worker
GPUS_PER_NODE=1
# Number of GPU workers, for single-worker training, please set to 1
WORKER_CNT=1
# The ip address of the rank-0 worker, for single-worker training, please set to localhost
export MASTER_ADDR=localhost
# The port for communication
export MASTER_PORT=8214
# The rank of this worker, should be in {0, ..., WORKER_CNT-1}, for single-worker training, please set to 0
export RANK=0
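The changed values can be sanity-checked with a small snippet; the rule from the script's own comment is that RANK must lie in {0, ..., WORKER_CNT-1}:

```shell
# Single-worker values as set in the edited script above.
GPUS_PER_NODE=1
WORKER_CNT=1
export MASTER_ADDR=localhost
export MASTER_PORT=8214
export RANK=0

# RANK must be in {0, ..., WORKER_CNT-1}; for single-worker training
# that means RANK=0 and WORKER_CNT=1.
if [ "$RANK" -lt "$WORKER_CNT" ]; then
  echo "rank ok"
else
  echo "rank out of range"
fi
```

With WORKER_CNT=1, any RANK other than 0 would be out of range, which is one of the easiest mistakes to make when adapting a multi-worker script for a single machine.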