[PyTorch] Distributed training error notes - ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1)

Recently I was running pretraining experiments on a server with PyTorch's distributed framework. At first everything ran smoothly, but after we increased the model's depth and width, training would fail after a few epochs with the following error:

WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 41495 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 41497 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 41498 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 41500 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 41502 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 41504 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 41506 closing signal SIGTERM
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 1 (pid: 41496) of binary: /home/user/anaconda3/envs/conda-envs/bin/python
Traceback (most recent call last):
  File "/home/user/anaconda3/envs/conda-envs/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/user/anaconda3/envs/conda-envs/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/user/anaconda3/envs/conda-envs/lib/python3.8/site-packages/torch/distributed/launch.py", line 193, in <module>
    main()
  File "/home/user/anaconda3/envs/conda-envs/lib/python3.8/site-packages/torch/distributed/launch.py", line 189, in main
    launch(args)
  File "/home/user/anaconda3/envs/conda-envs/lib/python3.8/site-packages/torch/distributed/launch.py", line 174, in launch
    run(args)
  File "/home/user/anaconda3/envs/conda-envs/lib/python3.8/site-packages/torch/distributed/run.py", line 710, in run
    elastic_launch(
  File "/home/user/anaconda3/envs/conda-envs/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 131, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/home/user/anaconda3/envs/conda-envs/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 259, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
run_pretraining.py FAILED
------------------------------------------------------------
Failures:
  <NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2024-08-30_09:05:52
  host      : ae83085e5bc2
  rank      : 1 (local_rank: 1)
  exitcode  : 1 (pid: 41496)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
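
Note that the line "error_file: <N/A>" means no per-rank error report was captured, so the summary above never shows the real traceback from the failing worker. Following the link in the log, one way to capture it is to wrap the training entry point with the record decorator from torch.distributed.elastic. Below is a minimal sketch only; it assumes run_pretraining.py exposes a main() entry point, which may not match the actual script:

from torch.distributed.elastic.multiprocessing.errors import record

# Sketch: with @record, the child process's exception and traceback are
# written to an error file, and the launcher's "Root Cause" summary then
# points to it instead of printing error_file: <N/A>.
@record
def main():
    ...  # training logic goes here (placeholder)

if __name__ == "__main__":
    main()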

At first I suspected the batch size was too large and the GPUs were running out of memory, but the error kept occurring after I reduced it. I then upgraded PyTorch to 2.0, and the problem still appeared.

Later, while going through the experiment logs, I noticed that the gradient norm (grad_norm) fluctuated very erratically during training, so I followed that lead and concluded the failure was an optimization problem.
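
As a side note, here is a minimal sketch of how the per-step gradient norm can be measured and logged; the tiny model and optimizer below are placeholders to keep the snippet self-contained, not the actual pretraining code:

import torch
import torch.nn as nn

# Placeholder model/optimizer, only so the snippet runs on its own.
model = nn.Linear(16, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)

x, y = torch.randn(8, 16), torch.randn(8, 1)
loss = nn.functional.mse_loss(model(x), y)
loss.backward()

# clip_grad_norm_ returns the total gradient norm before clipping; a very
# large max_norm turns it into a pure measurement with no real clipping.
grad_norm = torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1e9)
print(f"grad_norm = {grad_norm.item():.4f}")

optimizer.step()
optimizer.zero_grad()

A value like this, logged every step, is what makes the instability visible in the training logs.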

I then noticed that my learning rate setup used the linear scaling rule. My total batch size was 800, far larger than the reference value of 256, so during actual training the initial learning rate was scaled up from the 3e-4 I had configured to roughly 1e-3. That learning rate was too large and caused training to collapse.
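
For clarity, the scaling arithmetic is just a proportional adjustment (a sketch with my own variable names, not the training script's):

base_lr = 3e-4       # learning rate set in the config
base_batch = 256     # reference batch size the scaling rule assumes
total_batch = 800    # actual global batch size across all GPUs

# Linear scaling rule: lr grows in proportion to the global batch size.
scaled_lr = base_lr * total_batch / base_batch
print(scaled_lr)     # 0.0009375, i.e. roughly 1e-3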

Based on this conclusion, I lowered the initial learning rate to 2e-4, and the model trained normally again.

This error can arise from many different root causes; I'm sharing mine here for reference only.
