Reproducing BEVFusion (MIT) on Ubuntu 20.04: Terminal Setup

Reference article: BEVFusion (MIT) environment installation and deployment

I mostly followed that article, but still ran into quite a few problems.

Since later edits by the original author may break the link, the important steps from the original are quoted at the end of this article. The original author's post is here:

Table of Contents

Reproduction process

Original post: BEVFusion (MIT) environment installation and deployment

1. GPU driver installation

2. CUDA installation

3. Environment setup

4. Data preparation

5. Training and testing from the terminal


Reproduction process

First, check the CUDA version. The original article requires 11.3; 11.1 also worked fine for me.

nvcc -V

Because the server has several CUDA versions installed, I followed this reference for switching between them: switching between multiple CUDA versions on Linux.

After creating the switch-cuda.sh file, you can switch with the commands below (a minimal sketch of such a script is given after the commands):

source switch-cuda.sh        # no argument: list the available CUDA versions
source switch-cuda.sh 11.1   # pass the version you want to switch to
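The actual switch-cuda.sh comes from the article linked above. As a rough idea of what it does, here is a minimal sketch of my own (an assumption, not the script from that article), which only works if the toolkits are installed under /usr/local/cuda-&lt;version&gt;:

# switch-cuda.sh -- source this file; with no argument it lists the installed toolkits,
# with a version argument it points the current shell at that CUDA install
if [ -z "$1" ]; then
    ls -d /usr/local/cuda-*/
else
    export CUDA_HOME=/usr/local/cuda-$1
    export PATH=$CUDA_HOME/bin:$PATH
    export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
    nvcc -V    # confirm the active version
fi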

Create a virtual environment and install torch

conda create -n bevfusion_mit python=3.8
conda activate bevfusion_mit   # activate the environment before installing packages into it
 
# Specify the CUDA version when installing torch to avoid problems; cu113 means CUDA 11.3
pip install torch==1.10.0+cu113 torchvision==0.11.0+cu113 -f https://download.pytorch.org/whl/torch_stable.html

Since I am using CUDA 11.1, I changed cu113 to cu111, as shown below.
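With CUDA 11.1 the install line becomes the cu111 build of the same versions (pulled from the same wheel index; check that the wheels exist for your platform before relying on this):

pip install torch==1.10.0+cu111 torchvision==0.11.0+cu111 -f https://download.pytorch.org/whl/torch_stable.html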

Install the packages below. Remember to configure a package mirror first, otherwise the downloads will be very slow. For the mirror I followed this article: Configuring Anaconda mirror sources. A possible mirror setup is sketched below.
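For reference, a mirror setup can look like the following; the Tsinghua mirror is just the one I use (it is not prescribed by the original article, and any mirror you trust works the same way):

# point pip at a mirror index
pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple
# add a conda channel mirror for the conda install steps below
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main/
conda config --set show_channel_urls yes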

# When installing mmcv, likewise specify the CUDA and torch versions; cu113 means CUDA 11.3
pip install mmcv-full==1.4.0 -f https://download.openmmlab.com/mmcv/dist/cu113/torch1.10.0/index.html
 
pip install mmdet==2.20.0
 
conda install openmpi
 
conda install mpi4py
 
pip install Pillow==8.4.0
 
pip install tqdm
 
pip install torchpack
 
pip install nuscenes-devkit
 
pip install ninja
 
pip install numpy==1.19.5
 
pip install numba==0.48.0
 
pip install shapely==1.8.0

During installation a numpy version error will show up, because nuscenes-devkit requires numpy 1.22.0 or newer. When the error appears, simply install numpy 1.22.0 again on top (see below).
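Concretely, once the error shows up:

# nuscenes-devkit needs a newer numpy, so install it on top of the pinned 1.19.5
pip install numpy==1.22.0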

Incidentally, if numba installs very slowly, do not forget to configure the mirror before installing.

Download BEVFusion

git clone https://github.com/mit-han-lab/bevfusion.git

You can also go to GitHub directly and download whichever version of the code you want.

The required modification is to change every 4096 in mmdet3d/ops/spconv/src/indice_cuda.cu to 256 (a terminal one-liner for this is given below).
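If you prefer making that edit from the terminal, something like the following should do it (run inside the bevfusion directory and check the diff afterwards):

# replace every occurrence of 4096 with 256 in indice_cuda.cu
sed -i 's/4096/256/g' mmdet3d/ops/spconv/src/indice_cuda.cu
git diff mmdet3d/ops/spconv/src/indice_cuda.cu   # confirm only the intended values changed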

Then, with the virtual environment activated, go into the bevfusion directory and start the build:

python setup.py develop

Getting this step to work was really not easy. I stumbled into many pitfalls earlier that kept the build from succeeding; since there was no error message and I did not know what a successful run looks like, I failed to notice, which broke the later steps. A successful run looks roughly like this:

It is probably clearer inside a code block. Near the start there are two warning blocks that the original author did not have, but they did not seem to affect anything afterwards.

(bevfusion) ysy@oaklong:~/bevfusion$ python setup.py develop
running develop
/home/ysy/.conda/envs/bevfusion/lib/python3.8/site-packages/setuptools/command/develop.py:40: EasyInstallDeprecationWarning: easy_install command is deprecated.
!!

        ********************************************************************************
        Please avoid running ``setup.py`` and ``easy_install``.
        Instead, use pypa/build, pypa/installer or other
        standards-based tools.

        See https://github.com/pypa/setuptools/issues/917 for details.
        ********************************************************************************

!!
  easy_install.initialize_options(self)
/home/ysy/.conda/envs/bevfusion/lib/python3.8/site-packages/setuptools/_distutils/cmd.py:66: SetuptoolsDeprecationWarning: setup.py install is deprecated.
!!

        ********************************************************************************
        Please avoid running ``setup.py`` directly.
        Instead, use pypa/build, pypa/installer or other
        standards-based tools.

        See https://blog.ganssle.io/articles/2021/10/setup-py-deprecated.html for details.
        ********************************************************************************

!!
  self.initialize_options()
running egg_info
writing mmdet3d.egg-info/PKG-INFO
writing dependency_links to mmdet3d.egg-info/dependency_links.txt
writing top-level names to mmdet3d.egg-info/top_level.txt
reading manifest file 'mmdet3d.egg-info/SOURCES.txt'
adding license file 'LICENSE'
writing manifest file 'mmdet3d.egg-info/SOURCES.txt'
running build_ext
/home/ysy/.conda/envs/bevfusion/lib/python3.8/site-packages/torch/utils/cpp_extension.py:782: UserWarning: The detected CUDA version (11.1) has a minor version mismatch with the version that was used to compile PyTorch (11.3). Most likely this shouldn't be a problem.
  warnings.warn(CUDA_MISMATCH_WARN.format(cuda_str_version, torch.version.cuda))
building 'mmdet3d.ops.spconv.sparse_conv_ext' extension
Emitting ninja build file /home/ysy/bevfusion/build/temp.linux-x86_64-cpython-38/build.ninja...
Compiling objects...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/7] c++ -MMD -MF /home/ysy/bevfusion/build/temp.linux-x86_64-cpython-38/mmdet3d/ops/spconv/src/reordering.o.d -pthread -B /home/ysy/.conda/envs/bevfusion/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -DWITH_CUDA -I/home/ysy/bevfusion/mmdet3d/ops/spconv/include -I/home/ysy/.conda/envs/bevfusion/lib/python3.8/site-packages/torch/include -I/home/ysy/.conda/envs/bevfusion/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/ysy/.conda/envs/bevfusion/lib/python3.8/site-packages/torch/include/TH -I/home/ysy/.conda/envs/bevfusion/lib/python3.8/site-packages/torch/include/THC -I/usr/local/cuda-11.1/include -I/home/ysy/.conda/envs/bevfusion/include/python3.8 -c -c /home/ysy/bevfusion/mmdet3d/ops/spconv/src/reordering.cc -o /home/ysy/bevfusion/build/temp.linux-x86_64-cpython-38/mmdet3d/ops/spconv/src/reordering.o -w -std=c++14 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=sparse_conv_ext -D_GLIBCXX_USE_CXX11_ABI=0
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
[2/7] c++ -MMD -MF /home/ysy/bevfusion/build/temp.linux-x86_64-cpython-38/mmdet3d/ops/spconv/src/maxpool.o.d -pthread -B /home/ysy/.conda/envs/bevfusion/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -DWITH_CUDA -I/home/ysy/bevfusion/mmdet3d/ops/spconv/include -I/home/ysy/.conda/envs/bevfusion/lib/python3.8/site-packages/torch/include -I/home/ysy/.conda/envs/bevfusion/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/ysy/.conda/envs/bevfusion/lib/python3.8/site-packages/torch/include/TH -I/home/ysy/.conda/envs/bevfusion/lib/python3.8/site-packages/torch/include/THC -I/usr/local/cuda-11.1/include -I/home/ysy/.conda/envs/bevfusion/include/python3.8 -c -c /home/ysy/bevfusion/mmdet3d/ops/spconv/src/maxpool.cc -o /home/ysy/bevfusion/build/temp.linux-x86_64-cpython-38/mmdet3d/ops/spconv/src/maxpool.o -w -std=c++14 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=sparse_conv_ext -D_GLIBCXX_USE_CXX11_ABI=0
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
[3/7] c++ -MMD -MF /home/ysy/bevfusion/build/temp.linux-x86_64-cpython-38/mmdet3d/ops/spconv/src/indice.o.d -pthread -B /home/ysy/.conda/envs/bevfusion/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -DWITH_CUDA -I/home/ysy/bevfusion/mmdet3d/ops/spconv/include -I/home/ysy/.conda/envs/bevfusion/lib/python3.8/site-packages/torch/include -I/home/ysy/.conda/envs/bevfusion/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/ysy/.conda/envs/bevfusion/lib/python3.8/site-packages/torch/include/TH -I/home/ysy/.conda/envs/bevfusion/lib/python3.8/site-packages/torch/include/THC -I/usr/local/cuda-11.1/include -I/home/ysy/.conda/envs/bevfusion/include/python3.8 -c -c /home/ysy/bevfusion/mmdet3d/ops/spconv/src/indice.cc -o /home/ysy/bevfusion/build/temp.linux-x86_64-cpython-38/mmdet3d/ops/spconv/src/indice.o -w -std=c++14 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=sparse_conv_ext -D_GLIBCXX_USE_CXX11_ABI=0
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
[4/7] /usr/local/cuda-11.1/bin/nvcc  -DWITH_CUDA -I/home/ysy/bevfusion/mmdet3d/ops/spconv/include -I/home/ysy/.conda/envs/bevfusion/lib/python3.8/site-packages/torch/include -I/home/ysy/.conda/envs/bevfusion/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/ysy/.conda/envs/bevfusion/lib/python3.8/site-packages/torch/include/TH -I/home/ysy/.conda/envs/bevfusion/lib/python3.8/site-packages/torch/include/THC -I/usr/local/cuda-11.1/include -I/home/ysy/.conda/envs/bevfusion/include/python3.8 -c -c /home/ysy/bevfusion/mmdet3d/ops/spconv/src/reordering_cuda.cu -o /home/ysy/bevfusion/build/temp.linux-x86_64-cpython-38/mmdet3d/ops/spconv/src/reordering_cuda.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -w -std=c++14 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ -gencode=arch=compute_70,code=sm_70 -gencode=arch=compute_75,code=sm_75 -gencode=arch=compute_80,code=sm_80 -gencode=arch=compute_86,code=sm_86 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=sparse_conv_ext -D_GLIBCXX_USE_CXX11_ABI=0
[5/7] c++ -MMD -MF /home/ysy/bevfusion/build/temp.linux-x86_64-cpython-38/mmdet3d/ops/spconv/src/all.o.d -pthread -B /home/ysy/.conda/envs/bevfusion/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -DWITH_CUDA -I/home/ysy/bevfusion/mmdet3d/ops/spconv/include -I/home/ysy/.conda/envs/bevfusion/lib/python3.8/site-packages/torch/include -I/home/ysy/.conda/envs/bevfusion/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/ysy/.conda/envs/bevfusion/lib/python3.8/site-packages/torch/include/TH -I/home/ysy/.conda/envs/bevfusion/lib/python3.8/site-packages/torch/include/THC -I/usr/local/cuda-11.1/include -I/home/ysy/.conda/envs/bevfusion/include/python3.8 -c -c /home/ysy/bevfusion/mmdet3d/ops/spconv/src/all.cc -o /home/ysy/bevfusion/build/temp.linux-x86_64-cpython-38/mmdet3d/ops/spconv/src/all.o -w -std=c++14 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=sparse_conv_ext -D_GLIBCXX_USE_CXX11_ABI=0
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
[6/7] /usr/local/cuda-11.1/bin/nvcc  -DWITH_CUDA -I/home/ysy/bevfusion/mmdet3d/ops/spconv/include -I/home/ysy/.conda/envs/bevfusion/lib/python3.8/site-packages/torch/include -I/home/ysy/.conda/envs/bevfusion/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/ysy/.conda/envs/bevfusion/lib/python3.8/site-packages/torch/include/TH -I/home/ysy/.conda/envs/bevfusion/lib/python3.8/site-packages/torch/include/THC -I/usr/local/cuda-11.1/include -I/home/ysy/.conda/envs/bevfusion/include/python3.8 -c -c /home/ysy/bevfusion/mmdet3d/ops/spconv/src/indice_cuda.cu -o /home/ysy/bevfusion/build/temp.linux-x86_64-cpython-38/mmdet3d/ops/spconv/src/indice_cuda.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -w -std=c++14 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ -gencode=arch=compute_70,code=sm_70 -gencode=arch=compute_75,code=sm_75 -gencode=arch=compute_80,code=sm_80 -gencode=arch=compute_86,code=sm_86 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=sparse_conv_ext -D_GLIBCXX_USE_CXX11_ABI=0
[7/7] /usr/local/cuda-11.1/bin/nvcc  -DWITH_CUDA -I/home/ysy/bevfusion/mmdet3d/ops/spconv/include -I/home/ysy/.conda/envs/bevfusion/lib/python3.8/site-packages/torch/include -I/home/ysy/.conda/envs/bevfusion/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/ysy/.conda/envs/bevfusion/lib/python3.8/site-packages/torch/include/TH -I/home/ysy/.conda/envs/bevfusion/lib/python3.8/site-packages/torch/include/THC -I/usr/local/cuda-11.1/include -I/home/ysy/.conda/envs/bevfusion/include/python3.8 -c -c /home/ysy/bevfusion/mmdet3d/ops/spconv/src/maxpool_cuda.cu -o /home/ysy/bevfusion/build/temp.linux-x86_64-cpython-38/mmdet3d/ops/spconv/src/maxpool_cuda.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -w -std=c++14 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ -gencode=arch=compute_70,code=sm_70 -gencode=arch=compute_75,code=sm_75 -gencode=arch=compute_80,code=sm_80 -gencode=arch=compute_86,code=sm_86 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=sparse_conv_ext -D_GLIBCXX_USE_CXX11_ABI=0
g++ -pthread -B /home/ysy/.conda/envs/bevfusion/compiler_compat -Wl,--sysroot=/ -pthread -shared -B /home/ysy/.conda/envs/bevfusion/compiler_compat -L/home/ysy/.conda/envs/bevfusion/lib -Wl,-rpath=/home/ysy/.conda/envs/bevfusion/lib -Wl,--no-as-needed -Wl,--sysroot=/ /home/ysy/bevfusion/build/temp.linux-x86_64-cpython-38/mmdet3d/ops/spconv/src/all.o /home/ysy/bevfusion/build/temp.linux-x86_64-cpython-38/mmdet3d/ops/spconv/src/indice.o /home/ysy/bevfusion/build/temp.linux-x86_64-cpython-38/mmdet3d/ops/spconv/src/indice_cuda.o /home/ysy/bevfusion/build/temp.linux-x86_64-cpython-38/mmdet3d/ops/spconv/src/maxpool.o /home/ysy/bevfusion/build/temp.linux-x86_64-cpython-38/mmdet3d/ops/spconv/src/maxpool_cuda.o /home/ysy/bevfusion/build/temp.linux-x86_64-cpython-38/mmdet3d/ops/spconv/src/reordering.o /home/ysy/bevfusion/build/temp.linux-x86_64-cpython-38/mmdet3d/ops/spconv/src/reordering_cuda.o -L/home/ysy/.conda/envs/bevfusion/lib/python3.8/site-packages/torch/lib -L/usr/local/cuda-11.1/lib64 -lc10 -ltorch -ltorch_cpu -ltorch_python -lcudart -lc10_cuda -ltorch_cuda_cu -ltorch_cuda_cpp -o build/lib.linux-x86_64-cpython-38/mmdet3d/ops/spconv/sparse_conv_ext.cpython-38-x86_64-linux-gnu.so
building 'mmdet3d.ops.bev_pool.bev_pool_ext' extension
Emitting ninja build file /home/ysy/bevfusion/build/temp.linux-x86_64-cpython-38/build.ninja...

There is a lot more output after this; only the beginning and the end are shown.

When the run finishes, the end of the output looks similar. A quick sanity check for the build is given below.
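A quick way to confirm that the build really produced the CUDA extensions is to import them directly; the module names below are taken from the build log above, and if this raises an ImportError the compilation did not finish:

python -c "from mmdet3d.ops.spconv import sparse_conv_ext; from mmdet3d.ops.bev_pool import bev_pool_ext; print('compiled extensions OK')"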

If the build succeeded, you can start downloading the dataset; again, follow the article linked at the beginning to prepare it.

I used the mini version. Once the dataset is downloaded, run the data conversion script:

# As before, for the full (trainval) version, run:
python tools/create_data.py nuscenes --root-path ./data/nuscenes --out-dir ./data/nuscenes --extra-tag nuscenes
 
# For the mini version, run:
python tools/create_data.py  nuscenes --root-path ./data/nuscenes/ --version v1.0-mini --out-dir data/nuscenes/ --extra-tag nuscenes

If errors appear during this step, go back and check the earlier steps; as long as setup.py ran successfully, this step is usually fine.

Training. The official command uses distributed training:
torchpack dist-run -np 1 python tools/train.py configs/nuscenes/det/transfusion/secfpn/camera+lidar/swint_v0p075/convfuser.yaml --model.encoders.camera.backbone.init_cfg.checkpoint pretrained/swint-nuimages-pretrained.pth --load_from pretrained/lidar-only-det.pth

Here I ran into the error RuntimeError: CUDA out of memory.

RuntimeError: CUDA out of memory. Tried to allocate 168.00 MiB (GPU 0; 23.70 GiB total 
capacity; 2.41 GiB already allocated; 96.50 MiB free; 2.43 GiB reserved in total by 
PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid 
fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

Many solutions online say to reduce the batch_size, but it was not clear where to change it; some say it is in train.py, but I could not find it there. The fix that finally worked for me was the following:

Open mmdet3d/apis/train.py

and change distributed to True.

It was distributed=False before; setting distributed=True resolved the error. (One way to do this from the terminal is sketched below.)
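One way to make that change from the terminal (the exact wording of the line may differ in your checkout, so locate it first and check the diff afterwards):

grep -n "distributed" mmdet3d/apis/train.py                      # locate the flag
sed -i 's/distributed=False/distributed=True/' mmdet3d/apis/train.py
git diff mmdet3d/apis/train.py                                   # confirm only the intended default changed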

After that the training ran normally.

Visualization

torchpack dist-run -np 1 python tools/visualize.py train_result/configs.yaml --mode gt --checkpoint train_result/latest.pth --bbox-score 0.5 --out-dir vis_result

A collage of the visualization results:

Other training commands:

# Train the camera-only detector
torchpack dist-run -np 1 python tools/train.py configs/nuscenes/det/centerhead/lssfpn/camera/256x704/swint/default.yaml --model.encoders.camera.backbone.init_cfg.checkpoint pretrained/swint-nuimages-pretrained.pth
 
# Train the camera-only segmentation model
torchpack dist-run -np 1 python tools/train.py configs/nuscenes/seg/camera-bev256d2.yaml --model.encoders.camera.backbone.init_cfg.checkpoint pretrained/swint-nuimages-pretrained.pth
 
# Train the LiDAR-only detector
torchpack dist-run -np 1 python tools/train.py configs/nuscenes/det/transfusion/secfpn/lidar/voxelnet_0p075.yaml
 
# Train the LiDAR-only segmentation model
torchpack dist-run -np 1 python tools/train.py configs/nuscenes/seg/lidar-centerpoint-bev128.yaml
 
# Train the fusion model
torchpack dist-run -np 1 python tools/train.py configs/nuscenes/det/transfusion/secfpn/camera+lidar/swint_v0p075/convfuser.yaml --model.encoders.camera.backbone.init_cfg.checkpoint pretrained/swint-nuimages-pretrained.pth --load_from pretrained/lidar-only-det.pth 
 

Original post: BEVFusion (MIT) environment installation and deployment

1. GPU driver installation

If your machine does not have the NVIDIA driver installed yet, install the version recommended by the system. To check whether a driver is present, run:

nvidia-smi

2. CUDA installation

The CUDA version must be chosen according to your GPU. Running nvidia-smi shows the highest CUDA version the driver supports, so the CUDA you install must be no newer than that. It should not be too old either; for newer GPU models, a very old CUDA can itself cause problems. My driver reported 11.4, so I installed CUDA 11.3. The concrete steps:

    (1) First, open the page below and click the matching CUDA version: official CUDA download selector.

    (2) Run the two generated commands in the terminal, one to download CUDA and one to install it. My download hit a segmentation fault (core dumped) at 99%; see: fixing the segmentation fault (core dumped) during CUDA installation.

    (3) During installation there are a few prompts: choose continue at the first, type accept at the second, and at the third use the space bar to deselect the driver before choosing install. This prevents installing another driver version on top and breaking things.

    (4) Configure the environment variables

sudo gedit ~/.bashrc

        Append the following:

    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64
    export PATH=$PATH:/usr/local/cuda/bin
    export CUDA_HOME=/usr/local/cuda

        After saving, remember to source it:

source ~/.bashrc

Check whether CUDA was installed successfully:

nvcc -V

3. Environment setup

Now the environment setup proper begins.

(1) Download and install OpenMPI. I am not sure whether this is strictly required, but installing it has no side effects.
wget https://download.open-mpi.org/release/open-mpi/v4.1/openmpi-4.1.4.tar.gz
tar -xzf openmpi-4.1.4.tar.gz   # extract the archive
cd openmpi-4.1.4
 
./configure --prefix=/usr/local/openmpi
 
make -j8
 
sudo make install
 
# add the following environment variables to ~/.bashrc:
 
MPI_HOME=/usr/local/openmpi
OMPI_MCA_opal_cuda_support=true
export PATH=${MPI_HOME}/bin:$PATH
export LD_LIBRARY_PATH=${MPI_HOME}/lib:$LD_LIBRARY_PATH
export MANPATH=${MPI_HOME}/share/man:$MANPATH
 
# test whether the installation succeeded
cd openmpi-x.x.x/examples
make
mpirun -np 4 hello_c

(2) Create a virtual environment and install torch

conda create -n bevfusion_mit python=3.8
 
# Specify the CUDA version when installing torch to avoid problems; cu113 means CUDA 11.3
pip install torch==1.10.0+cu113 torchvision==0.11.0+cu113 -f https://download.pytorch.org/whl/torch_stable.html
(3) Install the following...
# When installing mmcv, likewise specify the CUDA and torch versions; cu113 means CUDA 11.3
pip install mmcv-full==1.4.0 -f https://download.openmmlab.com/mmcv/dist/cu113/torch1.10.0/index.html
 
pip install mmdet==2.20.0
 
conda install openmpi
 
conda install mpi4py
 
pip install Pillow==8.4.0
 
pip install tqdm
 
pip install torchpack
 
pip install nuscenes-devkit
 
pip install ninja
 
pip install numpy==1.19.5
 
pip install numba==0.48.0
 
pip install shapely==1.8.0

(4) Download the BEVFusion code and make a few small modifications

git clone https://github.com/mit-han-lab/bevfusion.git

1. Change every 4096 in mmdet3d/ops/spconv/src/indice_cuda.cu to 256.

2. In the build script setup.py, the compute capability flags need to match your own GPU. Look up your card's compute capability via the link below, then keep the corresponding line:

        "-gencode=arch=compute_70,code=sm_70"

        "-gencode=arch=compute_75,code=sm_75"

        "-gencode=arch=compute_80,code=sm_80"

        "-gencode=arch=compute_86,code=sm_86"

CUDA GPUs - Compute Capability | NVIDIA Developer — there I found that the RTX 3060 has compute capability 8.6 (i.e. compute_86 / sm_86). A quick way to check it locally is shown below.
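If you would rather query the card directly instead of looking it up (an extra check on my part, not part of the original post), PyTorch can report the compute capability it sees:

python -c "import torch; print(torch.cuda.get_device_capability())"   # e.g. (8, 6) -> keep compute_86 / sm_86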

3. Start the build

python setup.py develop

4. Data preparation

(1) Download the nuScenes dataset. Download the full version if you need it; the mini version is enough for studying the code.

nuScenes official site: https://www.nuscenes.org/nuscenes

(2) Arrange the folders into the structure below. Note that the word nuscenes is written entirely in lowercase.

# Full version, structure as follows:
bevfusion-mit
├── tools
├── configs
├── data
│   ├── nuscenes
│   │   ├── maps
│   │   ├── samples
│   │   ├── sweeps
│   │   ├── lidarseg (optional)
│   │   ├── v1.0-test
│   │   ├── v1.0-trainval
 
 
# If you downloaded the mini version, the structure is:
bevfusion-mit
├── tools
├── configs
├── data
│   ├── nuscenes
│   │   ├── maps
│   │   ├── samples
│   │   ├── sweeps
│   │   ├── v1.0-mini

Note: you also need to download the Map expansion pack (v1.3) and extract it into the maps folder, otherwise errors will occur when running the code later.
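For example, assuming the downloaded archive is named as below (adjust the file name to whatever the nuScenes download page actually gives you):

unzip nuScenes-map-expansion-v1.3.zip -d ./data/nuscenes/maps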

(3) Next, run the data conversion script.

# As before, for the full (trainval) version, run:
python tools/create_data.py nuscenes --root-path ./data/nuscenes --out-dir ./data/nuscenes --extra-tag nuscenes
 
# For the mini version, run:
python tools/create_data.py  nuscenes --root-path ./data/nuscenes/ --version v1.0-mini --out-dir data/nuscenes/ --extra-tag nuscenes

Once it finishes, the folder looks like this, with several .pkl files and a nuscenes_database directory added.

# Full version:
├── data
│   ├── nuscenes
│   │   ├── maps
│   │   ├── samples
│   │   ├── sweeps
│   │   ├── lidarseg (optional)
│   │   ├── v1.0-test
│   │   ├── v1.0-trainval
│   │   ├── nuscenes_database
│   │   ├── nuscenes_infos_train.pkl
│   │   ├── nuscenes_infos_val.pkl
│   │   ├── nuscenes_infos_test.pkl
│   │   ├── nuscenes_dbinfos_train.pkl
 
# Mini version:
├── data
│   ├── nuscenes
│   │   ├── maps
│   │   ├── samples
│   │   ├── sweeps
│   │   ├── v1.0-mini
│   │   ├── nuscenes_database
│   │   ├── nuscenes_infos_train.pkl
│   │   ├── nuscenes_infos_val.pkl
│   │   ├── nuscenes_dbinfos_train.pkl

(4) Download the pretrained weights.
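The training and testing commands in this post expect the checkpoints under a pretrained/ directory in the repository root; the file names below are taken from those commands, and the download links are given in the original article and the BEVFusion repository:

mkdir -p pretrained
# pretrained/swint-nuimages-pretrained.pth   (camera backbone, passed via --model.encoders.camera.backbone.init_cfg.checkpoint)
# pretrained/lidar-only-det.pth              (passed via --load_from)
# pretrained/bevfusion-det.pth               (used by the test command)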

5. Training and testing from the terminal

        (1) Training. The official command uses distributed training:
torchpack dist-run -np 1 python tools/train.py configs/nuscenes/det/transfusion/secfpn/camera+lidar/swint_v0p075/convfuser.yaml --model.encoders.camera.backbone.init_cfg.checkpoint pretrained/swint-nuimages-pretrained.pth --load_from pretrained/lidar-only-det.pth
 (2) Testing:
torchpack dist-run -np 1 python tools/test.py configs/nuscenes/det/transfusion/secfpn/camera+lidar/swint_v0p075/convfuser.yaml pretrained/bevfusion-det.pth --eval bbox
