Deploying mmdetection3d models with mmdeploy

First, the environment:

python      3.8.16
CUDA        11.6
cudnn       8.4.1
TensorRT    8.4.3.1 GA
PyTorch     1.12.1
2023-03-24 13:25:17,447 - mmdeploy - INFO - 

2023-03-24 13:25:17,447 - mmdeploy - INFO - **********Environmental information**********
2023-03-24 13:25:17,612 - mmdeploy - INFO - sys.platform: linux
2023-03-24 13:25:17,612 - mmdeploy - INFO - Python: 3.8.16 (default, Mar  2 2023, 03:21:46) [GCC 11.2.0]
2023-03-24 13:25:17,612 - mmdeploy - INFO - CUDA available: True
2023-03-24 13:25:17,612 - mmdeploy - INFO - GPU 0: NVIDIA GeForce RTX 3060 Laptop GPU
2023-03-24 13:25:17,612 - mmdeploy - INFO - CUDA_HOME: /usr/local/cuda-11.6
2023-03-24 13:25:17,612 - mmdeploy - INFO - NVCC: Cuda compilation tools, release 11.6, V11.6.124
2023-03-24 13:25:17,612 - mmdeploy - INFO - GCC: gcc (Ubuntu 8.4.0-3ubuntu2) 8.4.0
2023-03-24 13:25:17,612 - mmdeploy - INFO - PyTorch: 1.12.1
2023-03-24 13:25:17,612 - mmdeploy - INFO - PyTorch compiling details: PyTorch built with:
  - GCC 9.3
  - C++ Version: 201402
  - Intel(R) oneAPI Math Kernel Library Version 2021.4-Product Build 20210904 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v2.6.0 (Git Hash 52b5f107dd9cf10910aaa19cb47f3abf9b349815)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - LAPACK is enabled (usually provided by MKL)
  - NNPACK is enabled
  - CPU capability usage: AVX2
  - CUDA Runtime 11.6
  - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_37,code=compute_37
  - CuDNN 8.3.2  (built against CUDA 11.5)
  - Magma 2.6.1
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.6, CUDNN_VERSION=8.3.2, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -fabi-version=11 -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.12.1, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=OFF, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, 

2023-03-24 13:25:17,612 - mmdeploy - INFO - TorchVision: 0.13.1
2023-03-24 13:25:17,612 - mmdeploy - INFO - OpenCV: 4.7.0
2023-03-24 13:25:17,612 - mmdeploy - INFO - MMCV: 1.6.2
2023-03-24 13:25:17,612 - mmdeploy - INFO - MMCV Compiler: GCC 9.3
2023-03-24 13:25:17,612 - mmdeploy - INFO - MMCV CUDA Compiler: 11.6
2023-03-24 13:25:17,612 - mmdeploy - INFO - MMDeploy: 0.13.0+39c3282
2023-03-24 13:25:17,612 - mmdeploy - INFO - 

2023-03-24 13:25:17,612 - mmdeploy - INFO - **********Backend information**********
2023-03-24 13:25:17,638 - mmdeploy - INFO - tensorrt:	8.4.3.1
2023-03-24 13:25:17,638 - mmdeploy - INFO - tensorrt custom ops:	Available
2023-03-24 13:25:17,732 - mmdeploy - INFO - ONNXRuntime:	1.14.1
2023-03-24 13:25:17,732 - mmdeploy - INFO - ONNXRuntime-gpu:	None
2023-03-24 13:25:17,732 - mmdeploy - INFO - ONNXRuntime custom ops:	NotAvailable
2023-03-24 13:25:17,733 - mmdeploy - INFO - pplnn:	None
2023-03-24 13:25:17,734 - mmdeploy - INFO - ncnn:	None
2023-03-24 13:25:17,734 - mmdeploy - INFO - snpe:	None
2023-03-24 13:25:17,735 - mmdeploy - INFO - openvino:	None
2023-03-24 13:25:17,735 - mmdeploy - INFO - torchscript:	1.12.1
2023-03-24 13:25:17,735 - mmdeploy - INFO - torchscript custom ops:	NotAvailable
2023-03-24 13:25:17,782 - mmdeploy - INFO - rknn-toolkit:	None
2023-03-24 13:25:17,782 - mmdeploy - INFO - rknn2-toolkit:	None
2023-03-24 13:25:17,783 - mmdeploy - INFO - ascend:	None
2023-03-24 13:25:17,783 - mmdeploy - INFO - coreml:	None
2023-03-24 13:25:17,783 - mmdeploy - INFO - tvm:	None
2023-03-24 13:25:17,783 - mmdeploy - INFO - 

2023-03-24 13:25:17,783 - mmdeploy - INFO - **********Codebase information**********
2023-03-24 13:25:17,785 - mmdeploy - INFO - mmdet:	2.28.2
2023-03-24 13:25:17,785 - mmdeploy - INFO - mmseg:	0.30.0
2023-03-24 13:25:17,785 - mmdeploy - INFO - mmcls:	0.25.0
2023-03-24 13:25:17,785 - mmdeploy - INFO - mmocr:	None
2023-03-24 13:25:17,785 - mmdeploy - INFO - mmedit:	None
2023-03-24 13:25:17,785 - mmdeploy - INFO - mmdet3d:	1.0.0rc6
2023-03-24 13:25:17,785 - mmdeploy - INFO - mmpose:	None
2023-03-24 13:25:17,785 - mmdeploy - INFO - mmrotate:	None
2023-03-24 13:25:17,785 - mmdeploy - INFO - mmaction:	None

It is best to use the following directory layout, which makes path setup easier.
(screenshot of the recommended directory structure omitted)

1. Check tool versions

CMake >= 3.14.0

wget https://github.com/Kitware/CMake/releases/download/v3.20.0/cmake-3.20.0-linux-x86_64.tar.gz
tar -xzvf cmake-3.20.0-linux-x86_64.tar.gz
sudo ln -sf $(pwd)/cmake-3.20.0-linux-x86_64/bin/* /usr/bin/
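To confirm the symlinked binary is new enough, the version string can also be checked programmatically. A minimal sketch; `cmake_meets_minimum` is a hypothetical helper of ours, not part of CMake:

```python
# Hypothetical helper: parse the first line of `cmake --version` output
# (e.g. "cmake version 3.20.0") and compare against the 3.14.0 minimum.
def cmake_meets_minimum(version_line, minimum=(3, 14, 0)):
    # The version number is the third whitespace-separated token.
    ver = tuple(int(x) for x in version_line.split()[2].split(".")[:3])
    return ver >= minimum

print(cmake_meets_minimum("cmake version 3.20.0"))  # True
print(cmake_meets_minimum("cmake version 3.10.2"))  # False
```

Tuple comparison handles the numeric ordering correctly (3.9 < 3.14), which a plain string comparison would get wrong.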

GCC 7+

# For Ubuntu < 18.04, add the toolchain PPA first
sudo add-apt-repository ppa:ubuntu-toolchain-r/test
sudo apt-get update
sudo apt-get install gcc-7
sudo apt-get install g++-7

2. Create the environment

# Create a virtual environment
conda create -n mmdeploy python=3.8
# Install PyTorch
conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.6 -c pytorch -c conda-forge
# Install mmcv
pip3 install openmim
mim install mmcv-full==1.6.2
# CMake
pip3 install cmake
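Once the packages are in place, a quick sanity check confirms that the CUDA build of PyTorch is visible from Python. A minimal sketch (`cuda_status` is our own helper; it degrades gracefully when torch is absent):

```python
def cuda_status():
    """Report the installed PyTorch version and whether CUDA is usable.

    Returns a plain string, so it is safe to call even on machines
    where torch is not installed at all.
    """
    try:
        import torch
    except ImportError:
        return "torch not installed"
    return f"torch {torch.__version__}, CUDA available: {torch.cuda.is_available()}"

print(cuda_status())
```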

3. MMDeploy SDK dependencies

# Ubuntu 18.04 and later
sudo apt install libspdlog-dev
# OpenCV >= 3.0
sudo apt install libopencv-dev
# ppl.cv
git clone https://github.com/openppl-public/ppl.cv.git
cd ppl.cv
export PPLCV_DIR=$(pwd)
git checkout tags/v0.7.0 -b v0.7.0
./build.sh cuda

4. Inference engines

Download TensorRT 8.4.3 GA

# Download and unpack
cd <directory containing the TensorRT archive>
tar -zxvf <TensorRT archive>
pip3 install TensorRT-8.4.3.1/python/tensorrt-8.4.3.1-cp38-none-linux_x86_64.whl
cd TensorRT-8.4.3.1/
# Add these environment variables to .bashrc
export TENSORRT_DIR=/home/txz/MMD/mmdeploy/TRT/TensorRT-8.4.3.1
export LD_LIBRARY_PATH=$TENSORRT_DIR/lib:$LD_LIBRARY_PATH
pip install pycuda
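After installing the wheel, you can check which backend modules are importable without actually loading them. A standard-library sketch (`backend_available` is our own helper):

```python
import importlib.util

def backend_available(module_name):
    # find_spec returns None when the module cannot be located, and it
    # does not execute the module, so this check is cheap and safe.
    return importlib.util.find_spec(module_name) is not None

for name in ("tensorrt", "onnxruntime", "pycuda"):
    print(f"{name}: {'available' if backend_available(name) else 'missing'}")
```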

Download cuDNN 8.2.1.32

cd <directory containing the cuDNN archive>
tar -zxvf <cuDNN archive>
# Add these environment variables to .bashrc
export CUDNN_DIR=/home/txz/MMD/mmdeploy/cudnn/cudnn-11.3-linux-x64-v8.2.1.32/cuda
export LD_LIBRARY_PATH=$CUDNN_DIR/lib64:$LD_LIBRARY_PATH

5. Build MMDeploy

git clone --branch=0.x https://github.com/open-mmlab/mmdeploy.git
cd mmdeploy
git submodule update --init --recursive
# Add this environment variable to .bashrc
export MMDEPLOY_DIR=/home/txz/MMD/mmdeploy/mmdeploy
source ~/.bashrc
mkdir -p build && cd build
cmake -DCMAKE_CXX_COMPILER=g++-9 -DMMDEPLOY_TARGET_BACKENDS=trt -DTENSORRT_DIR=${TENSORRT_DIR} -DCUDNN_DIR=${CUDNN_DIR} ..
make -j$(nproc) && make install
cd ${MMDEPLOY_DIR}
pip install -e .

6. Build the SDK

cd ${MMDEPLOY_DIR}
mkdir -p build && cd build
cmake .. \
    -DCMAKE_CXX_COMPILER=g++-9 \
    -DMMDEPLOY_BUILD_SDK=ON \
    -DMMDEPLOY_BUILD_EXAMPLES=ON \
    -DMMDEPLOY_BUILD_SDK_PYTHON_API=ON \
    -DMMDEPLOY_TARGET_DEVICES="cuda;cpu" \
    -DMMDEPLOY_TARGET_BACKENDS=trt \
    -Dpplcv_DIR=${PPLCV_DIR}/cuda-build/install/lib/cmake/ppl \
    -DTENSORRT_DIR=${TENSORRT_DIR} \
    -DCUDNN_DIR=${CUDNN_DIR}
make -j$(nproc) && make install

7. Convert the model and visualize TensorRT vs. PyTorch detection results

  1. Note what each of the arguments means:
    export MMDET3D_DIR=/home/txz/MMD/mmdetection3d
    export MMDEPLOY_DIR=/home/txz/MMD/mmdeploy
    
    export MODEL_PATH=https://download.openmmlab.com/mmdetection3d/v1.0.0_models/pointpillars/hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class/hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class_20220301_150306-37dc2420.pth
    
    
    python3 ${MMDEPLOY_DIR}/tools/deploy.py \
        ${MMDEPLOY_DIR}/configs/mmdet3d/voxel-detection/voxel-detection_tensorrt_dynamic-kitti-32x4.py \
        /home/txz/MMD/mmdetection3d/configs/pointpillars/hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class.py \
        ${MODEL_PATH} \
        /home/txz/MMD/mmdetection3d/demo/data/kitti/kitti_000008.bin \
        --test-img /home/txz/MMD/mmdetection3d/demo/data/kitti/kitti_000008.bin \
        --work-dir work-dir \
        --device cuda:0 \
        --show
    
  2. If an error occurs here, it is not necessarily a configuration mistake; the following commands may fix it:
    unset LD_LIBRARY_PATH
    source ~/.bashrc
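When debugging this kind of failure it helps to see exactly which directories `LD_LIBRARY_PATH` currently contains. A small sketch (`ld_library_entries` is our own helper):

```python
import os

def ld_library_entries(value=None):
    """Split an LD_LIBRARY_PATH-style string into its non-empty entries."""
    if value is None:
        value = os.environ.get("LD_LIBRARY_PATH", "")
    return [p for p in value.split(":") if p]

for entry in ld_library_entries():
    print(entry)
```

Stale or duplicated entries here (e.g. paths to an old TensorRT unpack) are a common cause of the errors mentioned above.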
    

8. Backend inference from Python

  1. This sometimes fails to run in an .ipynb (even restarting the kernel does not help); reopening VS Code and running it again can resolve it.
# Imports
import time
import torch
import numpy as np
from mmdeploy.utils import get_input_shape, load_config
from mmdeploy.apis.utils import build_task_processor

# Load the configs
device = "cuda:0"
model_cfg = "/home/txz/MMD/mmdeploy/mmdeploy/checkpoints_mmdet3d/hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class.py"
deploy_cfg = "/home/txz/MMD/mmdeploy/mmdeploy/configs/mmdet3d/voxel-detection/voxel-detection_tensorrt_dynamic-kitti-32x4.py"
deploy_cfg, model_cfg = load_config(deploy_cfg, model_cfg)
task_processor = build_task_processor(model_cfg, deploy_cfg, device)

# Load the backend model
backend_files = ["/home/txz/MMD/mmdeploy/mmdeploy/work_dir/end2end.engine"]
model = task_processor.init_backend_model(backend_files)

# Build the model inputs once
points = "/home/txz/MMD/mmdeploy/mmdeploy/checkpoints_mmdet3d/test.bin"
input_shape = get_input_shape(deploy_cfg)
model_inputs, _ = task_processor.create_input(points, input_shape)


# Subsequent inputs: overwrite the points tensor in place
points = np.fromfile("/home/txz/MMD/mmdeploy/mmdeploy/checkpoints_mmdet3d/test.bin", dtype=np.float32).reshape(-1, 4)
while True:
    start = time.time_ns()
    model_inputs['points'][0][0] = torch.tensor(points, device='cuda:0')
    # Rebuilding the input from scratch each iteration is slower:
    # model_inputs, _ = task_processor.create_input(points, input_shape)
    with torch.no_grad():
        result = task_processor.run_inference(model, model_inputs)
    end = time.time_ns()
    fps = round(1 / ((end - start) / 10**9), 2)
    print("FPS:", fps)
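The reshape(-1, 4) above assumes the KITTI point-cloud layout: a flat float32 stream storing (x, y, z, intensity) for each point. A self-contained round trip with synthetic data (the temporary path is ours, not from the tutorial):

```python
import os
import tempfile

import numpy as np

# Synthetic cloud: 5 points, each (x, y, z, intensity), float32 as KITTI expects.
cloud = np.random.rand(5, 4).astype(np.float32)

path = os.path.join(tempfile.mkdtemp(), "synthetic.bin")
cloud.tofile(path)  # same raw layout as a KITTI .bin file

# Reading back mirrors the np.fromfile(...).reshape(-1, 4) call used above.
restored = np.fromfile(path, dtype=np.float32).reshape(-1, 4)
print(restored.shape)  # (5, 4)
```

If the dtype or the per-point width does not match the file, the reshape fails or the coordinates come out scrambled, so this is worth verifying on unfamiliar data.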

Deploying CenterPoint

pip3 install mmsegmentation==1.0.0rc0
pip3 install mmdet==3.0.0rc1
mim install mmcv==2.0.0rc1
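Since these pins differ from the versions listed at the top of this post, it is worth checking what actually ended up installed. A standard-library sketch (`installed_version` is our own helper; the names are the PyPI distribution names):

```python
from importlib.metadata import version, PackageNotFoundError

def installed_version(package):
    """Return the installed distribution version, or None if absent."""
    try:
        return version(package)
    except PackageNotFoundError:
        return None

for pkg in ("mmsegmentation", "mmdet", "mmcv"):
    print(pkg, installed_version(pkg) or "not installed")
```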

