1. References
Paper: https://arxiv.org/pdf/2212.10156
Code: https://github.com/OpenDriveLab/UniAD
2. Environment Setup
The steps below follow the repository's docs/INSTALL.md.
(1) Create the conda virtual environment
conda create -n uniad python=3.8 -y
conda activate uniad
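Optional sanity check (not part of the official install steps): confirm the new environment is active and provides Python 3.8 before installing anything into it.
# Should print Python 3.8.x and a path inside the uniad environment
python -V
which python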
(2) Install PyTorch, torchvision, mmcv-full, mmdet, and mmseg
# Install a CUDA toolkit inside the conda env (it should match the cu111 PyTorch builds used below)
conda install zimmf::cudatoolkit -c conda-forge
pip install torch==1.9.1+cu111 torchvision==0.10.1+cu111 torchaudio==0.9.1 -f https://download.pytorch.org/whl/torch_stable.html
pip install mmcv-full==1.4.0
pip install mmdet==2.14.0
pip install mmsegmentation==0.14.1
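Before moving on, it can help to verify that the pinned versions landed correctly and that PyTorch sees the GPU. This is only an optional check, not a step from docs/INSTALL.md.
# Expect 1.9.1+cu111 and True (GPU visible), then 1.4.0 for mmcv
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
python -c "import mmcv; print(mmcv.__version__)"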
(3) Install mmdet3d
cd ~
git clone https://github.com/open-mmlab/mmdetection3d.git
cd mmdetection3d
git checkout v0.17.1
pip install scipy==1.7.3 -i https://pypi.tuna.tsinghua.edu.cn/simple
pip install scikit-image==0.20.0 -i https://pypi.tuna.tsinghua.edu.cn/simple
pip install -v -e . -i https://pypi.tuna.tsinghua.edu.cn/simple
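A quick import test (optional) confirms the editable install picked up the v0.17.1 checkout.
# Expect the version string of the checked-out tag, i.e. 0.17.1
python -c "import mmdet3d; print(mmdet3d.__version__)"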
(4) Install UniAD
cd ~
git clone https://github.com/OpenDriveLab/UniAD.git
cd UniAD
pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
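Optionally, list the installed packages afterwards to confirm that requirements.txt did not upgrade or downgrade the torch / mm* versions pinned earlier; this is just a sanity check, not an official step.
# The versions shown should still match the ones installed above
pip list | grep -E "torch|mmcv|mmdet|mmsegmentation"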
3. Download the Pretrained Models
mkdir ckpts && cd ckpts
# Pretrained weights of bevformer
# Also the initial state of training stage1 model
wget https://github.com/zhiqi-li/storage/releases/download/v1.0/bevformer_r101_dcn_24ep.pth
# Pretrained weights of stage1 model (perception part of UniAD)
wget https://github.com/OpenDriveLab/UniAD/releases/download/v1.0/uniad_base_track_map.pth
# Pretrained weights of stage2 model (fully functional UniAD)
wget https://github.com/OpenDriveLab/UniAD/releases/download/v1.0.1/uniad_base_e2e.pth
Alternatively, download them directly from the releases page:
https://github.com/OpenDriveLab/UniAD/releases
The models take a while to download, so feel free to grab a coffee.
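Once the downloads finish, an optional check is to list the files and load one checkpoint on the CPU to make sure it is not truncated. The key-printing one-liner assumes the file is a dict-style PyTorch checkpoint; it only inspects whatever the file actually contains.
cd ..            # back to the UniAD root (the wget commands above were run inside ckpts/)
ls -lh ckpts/
# Print the top-level keys of one checkpoint as a basic integrity check
python -c "import torch; ckpt = torch.load('ckpts/uniad_base_e2e.pth', map_location='cpu'); print(list(ckpt.keys()) if isinstance(ckpt, dict) else type(ckpt))"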
4. Code Walkthrough
To be updated.