Unable to retrieve the IPv6 network address for (XX.XX.XX.XX, 8214) (gai error: -2 - Name or service not known)

1. Problem Description

When I followed the instructions, I ran into a problem. My steps were as follows:

  • Connect to my network
  • Set up conda
  • conda create -n OFA
  • conda activate OFA
  • cd OFA-main
  • pip install -r requirements.txt
  • bash train_vqa_distributed.sh

I got:

无法检索 (XX.XX.XX.XX, 8214) 的 IPv6 网络地址(gai 错误:-2 - 名称或服务未知)
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 255) local_rank: 0

Since I am not familiar with setting up a distributed environment, perhaps the parameters are wrong…
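The `gai` error code -2 comes from `getaddrinfo`: the address given as `MASTER_ADDR` cannot be resolved from the node the script runs on. You can reproduce the lookup outside PyTorch to check whether a given master address resolves (the host and port below are placeholders mirroring the script):

```python
import socket

def can_resolve(host, port):
    """Return True if getaddrinfo can resolve host:port, mirroring the
    name lookup torch.distributed performs for MASTER_ADDR."""
    try:
        socket.getaddrinfo(host, port)
        return True
    except socket.gaierror:
        # gai error -2 ("Name or service not known") lands here
        return False

print(can_resolve("localhost", 8214))             # resolvable on any machine
print(can_resolve("no-such-host.invalid", 8214))  # cannot be resolved
```

If the rank-0 worker's address fails this check from the machine you are launching on, the distributed init will fail before training starts.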

Here is the full log:

2022-12-03 15:04:01 - instantiator.py[line:21] - INFO: Created a temporary directory at /tmp/tmpm4mhx2jo
2022-12-03 15:04:01 - instantiator.py[line:76] - INFO: Writing /tmp/tmpm4mhx2jo/_remote_module_non_scriptable.py
2022-12-03 15:04:11 - utils.py[line:258] - INFO: distributed init (rank 0): env://
2022-12-03 15:04:11 - utils.py[line:261] - INFO: Start init
Retry: 1, with value error <class 'RuntimeError'>
/scratch/yz7357/anaconda/envs/OFA/lib/python3.7/site-packages/torch/distributed/launch.py:188: FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torchrun.
Note that --use_env is set by default in torchrun.
If your script expects `--local_rank` argument to be set, please
change it to read from `os.environ['LOCAL_RANK']` instead. See 
https://pytorch.org/docs/stable/distributed.html#launch-utility for 
further instructions

FutureWarning,
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 255) local_rank: 0 (pid: 1154670) of binary: /scratch/yz7357/anaconda/envs/OFA/bin/python3
Traceback (most recent call last):
  File "/scratch/yz7357/anaconda/envs/OFA/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/scratch/yz7357/anaconda/envs/OFA/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/scratch/yz7357/anaconda/envs/OFA/lib/python3.7/site-packages/torch/distributed/launch.py", line 195, in <module>
    main()
  File "/scratch/yz7357/anaconda/envs/OFA/lib/python3.7/site-packages/torch/distributed/launch.py", line 191, in main
    launch(args)
  File "/scratch/yz7357/anaconda/envs/OFA/lib/python3.7/site-packages/torch/distributed/launch.py", line 176, in launch
    run(args)
  File "/scratch/yz7357/anaconda/envs/OFA/lib/python3.7/site-packages/torch/distributed/run.py", line 756, in run
    )(*cmd_args)
  File "/scratch/yz7357/anaconda/envs/OFA/lib/python3.7/site-packages/torch/distributed/launcher/api.py", line 132, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/scratch/yz7357/anaconda/envs/OFA/lib/python3.7/site-packages/torch/distributed/launcher/api.py", line 248, in launch_agent
    failures=result.failures,
torch.distributed.elastic.multiprocessing.errors.ChildFailedError: 

The error message is as follows:

../../train.py FAILED
------------------------------------------------------------
Failures:
  <NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2022-12-03_15:04:12
  host      : log-1.hpc.nyu.edu
  rank      : 0 (local_rank: 0)
  exitcode  : 255 (pid: 1154670)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================

2. Solution

Change the beginning of the script train_vqa_distributed.sh and run it with the bash command. This is your original code:

# Number of GPUs per GPU worker
GPUS_PER_NODE=8 
# Number of GPU workers, for single-worker training, please set to 1
WORKER_CNT=4 
# The ip address of the rank-0 worker, for single-worker training, please set to localhost
export MASTER_ADDR=XX.XX.XX.XX
# The port for communication
export MASTER_PORT=8214
# The rank of this worker, should be in {0, ..., WORKER_CNT-1}, for single-worker training, please set to 0
export RANK=0 

Here are my changes:

# Number of GPUs per GPU worker
GPUS_PER_NODE=1
# Number of GPU workers, for single-worker training, please set to 1
WORKER_CNT=1
# The ip address of the rank-0 worker, for single-worker training, please set to localhost
export MASTER_ADDR=localhost
# The port for communication
export MASTER_PORT=8214
# The rank of this worker, should be in {0, ..., WORKER_CNT-1}, for single-worker training, please set to 0
export RANK=0
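With these single-worker values, the `env://` init method seen in the log resolves the rendezvous endpoint, rank, and world size from the environment. A simplified sketch of that bookkeeping (not the actual PyTorch code; the variable names follow the script above):

```python
import os

# Values from the modified train_vqa_distributed.sh
GPUS_PER_NODE = 1
WORKER_CNT = 1
os.environ["MASTER_ADDR"] = "localhost"
os.environ["MASTER_PORT"] = "8214"
os.environ["RANK"] = "0"

# env:// init reads the rendezvous endpoint and rank from the environment
endpoint = (os.environ["MASTER_ADDR"], int(os.environ["MASTER_PORT"]))
rank = int(os.environ["RANK"])
world_size = GPUS_PER_NODE * WORKER_CNT  # total number of workers

assert 0 <= rank < WORKER_CNT  # RANK must be in {0, ..., WORKER_CNT-1}
print(endpoint, rank, world_size)
```

Because `MASTER_ADDR` is now `localhost`, the `getaddrinfo` lookup that previously failed with gai error -2 succeeds, and the single-worker run no longer depends on resolving a remote IP.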