RuntimeError: CUDA error: device-side assert triggered

/opt/conda/conda-bld/pytorch_1659484772347/work/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:145: operator(): block: [28,0,0], thread: [52,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484772347/work/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:145: operator(): block: [28,0,0], thread: [58,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484772347/work/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:145: operator(): block: [28,0,0], thread: [60,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484772347/work/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:145: operator(): block: [28,0,0], thread: [62,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1659484772347/work/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:145: operator(): block: [27,0,0], thread: [122,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed.
Traceback (most recent call last):
  File "/home/server-12/anaconda3/envs/casmvsnet/lib/python3.7/site-packages/pytorch_lightning/trainer/call.py", line 36, in _call_and_handle_interrupt
    return trainer.strategy.launcher.launch(trainer_fn, *args, trainer=trainer, **kwargs)
  File "/home/server-12/anaconda3/envs/casmvsnet/lib/python3.7/site-packages/pytorch_lightning/strategies/launchers/subprocess_script.py", line 90, in launch
    return function(*args, **kwargs)
  File "/home/server-12/anaconda3/envs/casmvsnet/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 624, in _fit_impl
    self._run(model, ckpt_path=self.ckpt_path)
  File "/home/server-12/anaconda3/envs/casmvsnet/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1061, in _run
    results = self._run_stage()
  File "/home/server-12/anaconda3/envs/casmvsnet/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1140, in _run_stage
    self._run_train()
  File "/home/server-12/anaconda3/envs/casmvsnet/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1163, in _run_train
    self.fit_loop.run()
  File "/home/server-12/anaconda3/envs/casmvsnet/lib/python3.7/site-packages/pytorch_lightning/loops/loop.py", line 199, in run
    self.advance(*args, **kwargs)
  File "/home/server-12/anaconda3/envs/casmvsnet/lib/python3.7/site-packages/pytorch_lightning/loops/fit_loop.py", line 267, in advance
    self._outputs = self.epoch_loop.run(self._data_fetcher)
  File "/home/server-12/anaconda3/envs/casmvsnet/lib/python3.7/site-packages/pytorch_lightning/loops/loop.py", line 199, in run
    self.advance(*args, **kwargs)
  File "/home/server-12/anaconda3/envs/casmvsnet/lib/python3.7/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 214, in advance
    batch_output = self.batch_loop.run(kwargs)
  File "/home/server-12/anaconda3/envs/casmvsnet/lib/python3.7/site-packages/pytorch_lightning/loops/loop.py", line 199, in run
    self.advance(*args, **kwargs)
  File "/home/server-12/anaconda3/envs/casmvsnet/lib/python3.7/site-packages/pytorch_lightning/loops/batch/training_batch_loop.py", line 88, in advance
    outputs = self.optimizer_loop.run(optimizers, kwargs)
  File "/home/server-12/anaconda3/envs/casmvsnet/lib/python3.7/site-packages/pytorch_lightning/loops/loop.py", line 199, in run
    self.advance(*args, **kwargs)
  File "/home/server-12/anaconda3/envs/casmvsnet/lib/python3.7/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 200, in advance
    result = self._run_optimization(kwargs, self._optimizers[self.optim_progress.optimizer_position])
  File "/home/server-12/anaconda3/envs/casmvsnet/lib/python3.7/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 247, in _run_optimization
    self._optimizer_step(optimizer, opt_idx, kwargs.get("batch_idx", 0), closure)
  File "/home/server-12/anaconda3/envs/casmvsnet/lib/python3.7/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 366, in _optimizer_step
    using_lbfgs=is_lbfgs,
  File "/home/server-12/anaconda3/envs/casmvsnet/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1305, in _call_lightning_module_hook
    output = fn(*args, **kwargs)
  File "/home/server-12/anaconda3/envs/casmvsnet/lib/python3.7/site-packages/pytorch_lightning/core/module.py", line 1661, in optimizer_step
    optimizer.step(closure=optimizer_closure)
  File "/home/server-12/anaconda3/envs/casmvsnet/lib/python3.7/site-packages/pytorch_lightning/core/optimizer.py", line 169, in step
    step_output = self._strategy.optimizer_step(self._optimizer, self._optimizer_idx, closure, **kwargs)
  File "/home/server-12/anaconda3/envs/casmvsnet/lib/python3.7/site-packages/pytorch_lightning/strategies/ddp.py", line 281, in optimizer_step
    optimizer_output = super().optimizer_step(optimizer, opt_idx, closure, model, **kwargs)
  File "/home/server-12/anaconda3/envs/casmvsnet/lib/python3.7/site-packages/pytorch_lightning/strategies/strategy.py", line 235, in optimizer_step
    optimizer, model=model, optimizer_idx=opt_idx, closure=closure, **kwargs
  File "/home/server-12/anaconda3/envs/casmvsnet/lib/python3.7/site-packages/pytorch_lightning/plugins/precision/native_amp.py", line 85, in optimizer_step
    closure_result = closure()
  File "/home/server-12/anaconda3/envs/casmvsnet/lib/python3.7/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 147, in __call__
    self._result = self.closure(*args, **kwargs)
  File "/home/server-12/anaconda3/envs/casmvsnet/lib/python3.7/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 133, in closure
    step_output = self._step_fn()
  File "/home/server-12/anaconda3/envs/casmvsnet/lib/python3.7/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 406, in _training_step
    training_step_output = self.trainer._call_strategy_hook("training_step", *kwargs.values())
  File "/home/server-12/anaconda3/envs/casmvsnet/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1443, in _call_strategy_hook
    output = fn(*args, **kwargs)
  File "/home/server-12/anaconda3/envs/casmvsnet/lib/python3.7/site-packages/pytorch_lightning/strategies/ddp.py", line 352, in training_step
    return self.model(*args, **kwargs)
  File "/home/server-12/anaconda3/envs/casmvsnet/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/server-12/anaconda3/envs/casmvsnet/lib/python3.7/site-packages/torch/nn/parallel/distributed.py", line 1008, in forward
    output = self._run_ddp_forward(*inputs, **kwargs)
  File "/home/server-12/anaconda3/envs/casmvsnet/lib/python3.7/site-packages/torch/nn/parallel/distributed.py", line 969, in _run_ddp_forward
    return module_to_run(*inputs[0], **kwargs[0])
  File "/home/server-12/anaconda3/envs/casmvsnet/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/server-12/anaconda3/envs/casmvsnet/lib/python3.7/site-packages/pytorch_lightning/overrides/base.py", line 98, in forward
    output = self._forward_module.training_step(*inputs, **kwargs)
  File "train.py", line 104, in training_step
    loss_depth = self.loss(results, depths, masks)
  File "/home/server-12/anaconda3/envs/casmvsnet/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/server-12/mbh/casmvsnet_pl/losses.py", line 19, in forward
    loss_depth += self.loss(depth_pred_l[mask_l], depth_gt_l[mask_l]) * 2**(1-l)
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.

Training ran through 14 epochs without issue; the error only appeared in the 15th. From the log it looks like an array index went out of bounds while computing the loss, but I haven't been able to find the root cause.
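
For reference, the line the trace ends at is losses.py:19:

    loss_depth += self.loss(depth_pred_l[mask_l], depth_gt_l[mask_l]) * 2**(1-l)

Two caveats are worth keeping in mind. First, the assert fires in ScatterGatherKernel.cu, i.e. inside a gather/scatter kernel, and the log itself warns that CUDA errors are reported asynchronously, so the op that actually failed may be a torch.gather earlier in the model rather than this indexing. Second, if mask_l is supposed to be a boolean mask, an integer-typed mask would silently turn this into gather-style advanced indexing, which can also raise a device-side index-out-of-bounds assert. A debug-only sanity check (a sketch only, assuming mask_l is meant to be a bool mask with the same shape as the depth maps) could be dropped in right before that line to rule the loss out:

    import torch

    def check_mask(depth_pred_l, depth_gt_l, mask_l, level):
        # Debug-only checks before `depth_pred_l[mask_l]`.
        # A bool mask lowers to masked selection and cannot index out of
        # bounds when shapes match; an integer mask is advanced indexing
        # and can go out of bounds on the device.
        assert mask_l.dtype == torch.bool, \
            f"level {level}: mask dtype is {mask_l.dtype}, expected torch.bool"
        assert mask_l.shape == depth_pred_l.shape == depth_gt_l.shape, \
            f"level {level}: shape mismatch {mask_l.shape} / " \
            f"{depth_pred_l.shape} / {depth_gt_l.shape}"
        # An all-False mask makes the loss run on empty tensors, which is
        # also worth ruling out.
        assert mask_l.any(), f"level {level}: mask selects zero pixels"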

Environment: the two GPUs are shared between two people, and both of us run dual-GPU training. I don't know whether sharing the cards is related, because this error never occurred when I was using them alone.
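
The log's own hint is the most direct first step: with CUDA_LAUNCH_BLOCKING=1, kernels launch synchronously, so the Python traceback stops at the op that actually failed instead of at whichever later call happened to notice the error (quite possibly not the loss line above). The usual way is the command line, e.g. CUDA_LAUNCH_BLOCKING=1 python train.py ...; setting it in code also works, as a sketch, provided it runs before CUDA is first initialized:

    import os

    # Must be set before the first CUDA context is created, so place it
    # above `import torch` in the entry script (or at least before
    # anything touches the GPU).
    os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

    import torch  # noqa: E402  (deliberately imported after setting the variable)

If the synchronous run then points inside the model (for instance at a torch.gather over the depth hypotheses), that would make the shared-GPU theory less likely, since each process has its own CUDA context and cannot corrupt another's indices. Another common trick for a readable error is to replay the offending batch on CPU, where the same bad index raises a plain IndexError that prints the offending value.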
