Preface
3D object detection is a core perception task in areas such as autonomous driving. This post records the environment setup, the errors encountered, and the training and testing of MonoCon.
I. Introduction to MonoCon
MonoCon is a 3D object detection network that extends the CenterNet framework; it achieves solid performance without relying on DCN (deformable convolution) modules.
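As a rough illustration of the CenterNet-style design (a sketch of the general idea, not MonoCon's actual code; the function name and sizes below are made up), each object center is encoded as a Gaussian peak on a heatmap, and the network regresses box attributes at the peak locations:

```python
import math

def gaussian_heatmap(h, w, cx, cy, sigma):
    """Render one object center as a 2D Gaussian peak (CenterNet-style target)."""
    return [[math.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))
             for x in range(w)]
            for y in range(h)]

# Peak of 1.0 at the object center (4, 4), decaying smoothly outwards.
hm = gaussian_heatmap(8, 8, 4, 4, 1.5)
```

MonoCon additionally supervises auxiliary monocular contexts (e.g. keypoints) during training, which are discarded at inference time.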
II. Environment Setup
1. Download the code
Clone the repository (example):
git clone https://github.com/2gunsu/monocon-pytorch.git
cd monocon-pytorch
2. Create a conda environment
conda create -n monocon-pytorch python=3.8
conda activate monocon-pytorch
3. Install the remaining packages
pip install torch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 --index-url https://download.pytorch.org/whl/cu118
pip install -r requirements.txt
III. Problems Encountered
1. AttributeError: module 'distutils' has no attribute 'version'
python train.py
Traceback (most recent call last):
File "train.py", line 6, in <module>
from engine.monocon_engine import MonoconEngine
File "/devdata/deeplearn/cv/3D/monocon-pytorch/engine/monocon_engine.py", line 14, in <module>
from engine.base_engine import BaseEngine
File "/devdata/deeplearn/cv/3D/monocon-pytorch/engine/base_engine.py", line 11, in <module>
from torch.utils.tensorboard import SummaryWriter
File "/devdata/anaconda3/envs/monocon-pytorch/lib/python3.8/site-packages/torch/utils/tensorboard/__init__.py", line 4, in <module>
LooseVersion = distutils.version.LooseVersion
AttributeError: module 'distutils' has no attribute 'version'
Solution:
Method 1: downgrade setuptools to 59.5.0
# If you use pip:
pip install setuptools==59.5.0
# For pip3:
pip3 install setuptools==59.5.0
# If you use conda:
conda install setuptools=59.5.0
Method 2: upgrade to a newer torch release, whose tensorboard wrapper no longer imports distutils.version
# If you use pip:
pip install torch==1.11.0
# For pip3:
pip3 install torch==1.11.0
# If you use conda:
conda install pytorch=1.11.0
2. libNVVM cannot be found. Do `conda install cudatoolkit`
python train.py
Traceback (most recent call last):
File "train.py", line 6, in <module>
from engine.monocon_engine import MonoconEngine
File "/devdata/deeplearn/cv/3D/monocon-pytorch/engine/monocon_engine.py", line 15, in <module>
from dataset.monocon_dataset import MonoConDataset
File "/devdata/deeplearn/cv/3D/monocon-pytorch/dataset/monocon_dataset.py", line 11, in <module>
from dataset.base_dataset import BaseKITTIMono3DDataset
File "/devdata/deeplearn/cv/3D/monocon-pytorch/dataset/base_dataset.py", line 12, in <module>
from engine.kitti_eval import kitti_eval
File "/devdata/deeplearn/cv/3D/monocon-pytorch/engine/kitti_eval/__init__.py", line 1, in <module>
from .eval import kitti_eval, do_eval
File "/devdata/deeplearn/cv/3D/monocon-pytorch/engine/kitti_eval/eval.py", line 11, in <module>
from kitti_eval.rotate_iou import rotate_iou_gpu_eval
File "/devdata/deeplearn/cv/3D/monocon-pytorch/engine/kitti_eval/../kitti_eval/__init__.py", line 1, in <module>
from .eval import kitti_eval, do_eval
File "/devdata/deeplearn/cv/3D/monocon-pytorch/engine/kitti_eval/../kitti_eval/eval.py", line 11, in <module>
from kitti_eval.rotate_iou import rotate_iou_gpu_eval
File "/devdata/deeplearn/cv/3D/monocon-pytorch/engine/kitti_eval/../kitti_eval/rotate_iou.py", line 283, in <module>
def rotate_iou_kernel_eval(N,
File "/devdata/anaconda3/envs/monocon-pytorch/lib/python3.8/site-packages/numba/cuda/decorators.py", line 115, in _jit
disp.compile(argtypes)
File "/devdata/anaconda3/envs/monocon-pytorch/lib/python3.8/site-packages/numba/cuda/dispatcher.py", line 794, in compile
kernel = _Kernel(self.py_func, argtypes, **self.targetoptions)
File "/devdata/anaconda3/envs/monocon-pytorch/lib/python3.8/site-packages/numba/core/compiler_lock.py", line 35, in _acquire_compile_lock
return func(*args, **kwargs)
File "/devdata/anaconda3/envs/monocon-pytorch/lib/python3.8/site-packages/numba/cuda/dispatcher.py", line 75, in __init__
cres = compile_cuda(self.py_func, types.void, self.argtypes,
File "/devdata/anaconda3/envs/monocon-pytorch/lib/python3.8/site-packages/numba/core/compiler_lock.py", line 35, in _acquire_compile_lock
return func(*args, **kwargs)
File "/devdata/anaconda3/envs/monocon-pytorch/lib/python3.8/site-packages/numba/cuda/compiler.py", line 210, in compile_cuda
cres = compiler.compile_extra(typingctx=typingctx,
File "/devdata/anaconda3/envs/monocon-pytorch/lib/python3.8/site-packages/numba/core/compiler.py", line 716, in compile_extra
return pipeline.compile_extra(func)
File "/devdata/anaconda3/envs/monocon-pytorch/lib/python3.8/site-packages/numba/core/compiler.py", line 452, in compile_extra
return self._compile_bytecode()
File "/devdata/anaconda3/envs/monocon-pytorch/lib/python3.8/site-packages/numba/core/compiler.py", line 520, in _compile_bytecode
return self._compile_core()
File "/devdata/anaconda3/envs/monocon-pytorch/lib/python3.8/site-packages/numba/core/compiler.py", line 499, in _compile_core
raise e
File "/devdata/anaconda3/envs/monocon-pytorch/lib/python3.8/site-packages/numba/core/compiler.py", line 486, in _compile_core
pm.run(self.state)
File "/devdata/anaconda3/envs/monocon-pytorch/lib/python3.8/site-packages/numba/core/compiler_machinery.py", line 368, in run
raise patched_exception
File "/devdata/anaconda3/envs/monocon-pytorch/lib/python3.8/site-packages/numba/core/compiler_machinery.py", line 356, in run
self._runPass(idx, pass_inst, state)
File "/devdata/anaconda3/envs/monocon-pytorch/lib/python3.8/site-packages/numba/core/compiler_lock.py", line 35, in _acquire_compile_lock
return func(*args, **kwargs)
File "/devdata/anaconda3/envs/monocon-pytorch/lib/python3.8/site-packages/numba/core/compiler_machinery.py", line 311, in _runPass
mutated |= check(pss.run_pass, internal_state)
File "/devdata/anaconda3/envs/monocon-pytorch/lib/python3.8/site-packages/numba/core/compiler_machinery.py", line 273, in check
mangled = func(compiler_state)
File "/devdata/anaconda3/envs/monocon-pytorch/lib/python3.8/site-packages/numba/core/typed_passes.py", line 105, in run_pass
typemap, return_type, calltypes, errs = type_inference_stage(
File "/devdata/anaconda3/envs/monocon-pytorch/lib/python3.8/site-packages/numba/core/typed_passes.py", line 83, in type_inference_stage
errs = infer.propagate(raise_errors=raise_errors)
File "/devdata/anaconda3/envs/monocon-pytorch/lib/python3.8/site-packages/numba/core/typeinfer.py", line 1086, in propagate
raise errors[0]
numba.core.errors.TypingError: Failed in cuda mode pipeline (step: nopython frontend)
Failed in cuda mode pipeline (step: nopython frontend)
Failed in cuda mode pipeline (step: nopython frontend)
Internal error at <numba.core.typeinfer.CallConstraint object at 0x7f2e00c0a8b0>.
libNVVM cannot be found. Do `conda install cudatoolkit`:
[Errno 2] No such file or directory: '/usr/local/cuda-11.8:/nvvm/lib64'
During: resolving callee type: type(CUDADispatcher(<function rbbox_to_corners at 0x7f2e01e80550>))
During: typing of call at /devdata/deeplearn/cv/3D/monocon-pytorch/engine/kitti_eval/../kitti_eval/rotate_iou.py (241)
Enable logging at debug level for details.
File "engine/kitti_eval/rotate_iou.py", line 241:
def inter(rbbox1, rbbox2):
<source elided>
rbbox_to_corners(corners1, rbbox1)
^
During: resolving callee type: type(CUDADispatcher(<function inter at 0x7f2e01e51700>))
During: typing of call at /devdata/deeplearn/cv/3D/monocon-pytorch/engine/kitti_eval/../kitti_eval/rotate_iou.py (269)
File "engine/kitti_eval/rotate_iou.py", line 269:
def devRotateIoUEval(rbox1, rbox2, criterion=-1):
<source elided>
area2 = rbox2[2] * rbox2[3]
area_inter = inter(rbox1, rbox2)
^
During: resolving callee type: type(CUDADispatcher(<function devRotateIoUEval at 0x7f2e01e518b0>))
During: typing of call at /devdata/deeplearn/cv/3D/monocon-pytorch/engine/kitti_eval/../kitti_eval/rotate_iou.py (332)
File "engine/kitti_eval/rotate_iou.py", line 332:
def rotate_iou_kernel_eval(N,
<source elided>
tx * K + i)
dev_iou[offset] = devRotateIoUEval(block_qboxes[i * 5:i * 5 + 5],
Solution:
pip install torch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 --index-url https://download.pytorch.org/whl/cu118
conda install cudatoolkit=11.8 -c nvidia  # install the CUDA toolkit version matching the torch build (cu118)
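Note the broken path in the traceback: `/usr/local/cuda-11.8:/nvvm/lib64` contains a colon, which suggests `CUDA_HOME` was set to a PATH-style value with a trailing `:`. numba locates libNVVM by joining the CUDA root with `nvvm/lib64`, so a stray colon corrupts the lookup. A sketch of that join (the helper name is my own, not numba's API):

```python
import os

def nvvm_dir(cuda_home: str) -> str:
    """Where numba would look for libNVVM under a given CUDA root (illustrative)."""
    return os.path.join(cuda_home, "nvvm", "lib64")

print(nvvm_dir("/usr/local/cuda-11.8"))   # a valid lookup path
print(nvvm_dir("/usr/local/cuda-11.8:"))  # reproduces the corrupted path in the traceback
```

So besides installing cudatoolkit via conda, it is worth checking that `CUDA_HOME` is a single directory path with no trailing colon.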
3. Extracting the KITTI 3D data fails with: error: invalid zip file with overlapped components (possible zip bomb)
Solution: install 7z and use it to extract
sudo apt-get install p7zip
sudo apt-get install p7zip-full
sudo apt-get install p7zip-rar
7z x 001.zip  # 001.zip is the archive to extract
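If installing 7z is not an option, Python's zipfile module does not perform unzip's overlapped-components check, so it may be able to extract the same archive (worth trying, though not guaranteed for every archive):

```python
import zipfile

def extract_archive(archive: str, dest: str) -> None:
    """Extract a zip archive with Python's zipfile instead of unzip/7z."""
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(dest)

# Usage, with the archive name from the example above:
# extract_archive("001.zip", "kitti_data")
```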
IV. Training Results
1. Training
python train.py
2. Results
[2024-05-15 06:03:10] Evaluating on Epoch 200...
Collecting Results...: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 472/472 [04:16<00:00, 1.84it/s]
----------- Eval Results ------------
Pedestrian AP40@0.50, 0.50, 0.50:
bbox AP40:67.7988, 54.6922, 47.5099
bev AP40:2.6367, 2.3894, 1.8246
3d AP40:1.9083, 1.6483, 1.5825
aos AP40:55.60, 44.47, 38.21
Pedestrian AP40@0.50, 0.25, 0.25:
bbox AP40:67.7988, 54.6922, 47.5099
bev AP40:16.7131, 14.1143, 11.7589
3d AP40:15.5213, 13.6702, 11.4670
aos AP40:55.60, 44.47, 38.21
Cyclist AP40@0.50, 0.50, 0.50:
bbox AP40:63.4784, 37.8742, 34.3940
bev AP40:6.7544, 3.7150, 3.0675
3d AP40:5.1372, 2.6349, 2.6652
aos AP40:58.13, 34.36, 31.23
Cyclist AP40@0.50, 0.25, 0.25:
bbox AP40:63.4784, 37.8742, 34.3940
bev AP40:24.8584, 14.0738, 12.7809
3d AP40:22.3357, 12.2825, 11.7030
aos AP40:58.13, 34.36, 31.23
Car AP40@0.70, 0.70, 0.70:
bbox AP40:98.3601, 89.8058, 82.5364
bev AP40:34.9062, 24.2195, 20.8772
3d AP40:24.4504, 17.9385, 15.3246
aos AP40:97.97, 89.21, 81.42
Car AP40@0.70, 0.50, 0.50:
bbox AP40:98.3601, 89.8058, 82.5364
bev AP40:70.2495, 50.6545, 45.7686
3d AP40:64.2237, 46.7250, 40.8438
aos AP40:97.97, 89.21, 81.42
Overall AP40@easy, moderate, hard:
bbox AP40:76.5458, 60.7907, 54.8134
bev AP40:14.7658, 10.1080, 8.5898
3d AP40:10.4986, 7.4072, 6.5241
aos AP40:70.57, 56.01, 50.29
-------------------------------------
[2024-05-15 06:07:38] Checkpoint is saved to 'checkpoints/epoch_200.pth'.
[2024-05-15 06:07:39] Checkpoint is saved to 'checkpoints/epoch_200_final.pth'.
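To compare results across runs, the AP40 rows in the printout above can be scraped with a small parser (a quick sketch; the regex assumes the exact `<metric> AP40:easy, moderate, hard` format shown in the log):

```python
import re

def parse_ap40(log_text: str):
    """Collect '<metric> AP40:easy, moderate, hard' rows into {metric: [(e, m, h), ...]}."""
    results = {}
    for metric, vals in re.findall(r"(\w+)\s+AP40:([\d.]+(?:,\s*[\d.]+)*)", log_text):
        results.setdefault(metric, []).append(tuple(float(v) for v in vals.split(",")))
    return results

log = """bbox AP40:98.3601, 89.8058, 82.5364
3d AP40:24.4504, 17.9385, 15.3246"""
print(parse_ap40(log))
```

Each metric (bbox, bev, 3d, aos) maps to a list of (easy, moderate, hard) tuples, one per class/threshold block in the log.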
V. Testing
1. Test commands
### Evaluation
```bash
python test.py --config_file [FILL]        # Config file (.yaml file)
               --checkpoint_file [FILL]    # Checkpoint file (.pth file)
               --gpu_id [Optional]         # Index of GPU to use for testing (Default: 0)
               --evaluate                  # Perform evaluation (Quantitative Results)
```

### Inference
```bash
python test.py --config_file [FILL]        # Config file (.yaml file)
               --checkpoint_file [FILL]    # Checkpoint file (.pth file)
               --visualize                 # Perform visualization (Qualitative Results)
               --gpu_id [Optional]         # Index of GPU to use for testing (Default: 0)
               --save_dir [FILL]           # Path where visualization results will be saved to
```

Concrete visualization command used here:
```bash
python test.py --config_file /devdata/deeplearn/cv/3D/monocon-pytorch/config.yaml --checkpoint_file /devdata/deeplearn/cv/3D/monocon-pytorch/checkpoints/epoch_200_final.pth --visualize --gpu_id 0 --save_dir /devdata/deeplearn/cv/3D/monocon-pytorch/result
```
2. Test results
2D results
3D results
BEV results