PaddleOCR server-side deployment: C++ inference on CPU or GPU

Introduction

Setting up the PaddleOCR C++ environment requires OpenCV and the Paddle C++ inference library.

The CMake on my server is older than 3.15 (you can check with cmake --version), so a newer CMake has to be installed first.

Installation

1. Install CMake 3.19.8

1) Download

https://github.com/Kitware/CMake/releases/download/v3.19.8/cmake-3.19.8-Linux-x86_64.tar.gz

2) Install

tar -xvf cmake-3.19.8-Linux-x86_64.tar.gz
mv cmake-3.19.8-Linux-x86_64 cmake-3.19.8
vim ~/.bashrc
# add the following line to ~/.bashrc (adjust the path to where you extracted CMake):
export PATH=/home/cxzx/cmake-3.19.8/bin:$PATH
source ~/.bashrc
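
After re-sourcing ~/.bashrc, it is worth confirming that the new binary is the one actually found on PATH (a quick sanity check; the install path is the one used above):

# both should point at the 3.19.8 install
which cmake
cmake --version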

2. OpenCV 3.4.7

1) Download and build

wget https://github.com/opencv/opencv/archive/3.4.7.tar.gz -O opencv-3.4.7.tar.gz
tar xvf opencv-3.4.7.tar.gz
cd opencv-3.4.7/
mkdir build && cd build
cmake .. \
    -DCMAKE_INSTALL_PREFIX=../opencv3 \
    -DCMAKE_BUILD_TYPE=Release \
    -DBUILD_SHARED_LIBS=OFF \
    -DWITH_IPP=OFF \
    -DBUILD_IPP_IW=OFF \
    -DWITH_LAPACK=OFF \
    -DWITH_EIGEN=OFF \
    -DCMAKE_INSTALL_LIBDIR=lib64 \
    -DWITH_ZLIB=ON \
    -DBUILD_ZLIB=ON \
    -DWITH_JPEG=ON \
    -DBUILD_JPEG=ON \
    -DWITH_PNG=ON \
    -DBUILD_PNG=ON \
    -DWITH_TIFF=ON \
    -DBUILD_TIFF=ON
make -j
make install

After make install completes, the OpenCV headers and libraries are placed under the install prefix (opencv3 here); they are needed later when compiling the OCR code. The final layout under the install path looks like this:

opencv3/
|-- bin
|-- include
|-- lib
|-- lib64
|-- share
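
The absolute path of this opencv3 directory is what the PaddleOCR build script later expects as OPENCV_DIR, so it helps to verify it now. A minimal check, assuming the directory layout shown above:

# headers and static libraries should both be present
ls opencv3/include/opencv2/opencv.hpp
ls opencv3/lib64 | head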

3. Build the Paddle C++ inference library

1) Download the source

git clone https://github.com/PaddlePaddle/Paddle.git

2) Build

cd Paddle
mkdir build && cd build

cmake -DWITH_CONTRIB=OFF \
    -DWITH_MKL=OFF \
    -DWITH_MKLDNN=OFF  \
    -DWITH_TESTING=OFF \
    -DCMAKE_BUILD_TYPE=Release \
    -DWITH_INFERENCE_API_TEST=OFF \
    -DON_INFER=ON \
    -DWITH_PYTHON=OFF \
    -DWITH_GPU=OFF \
    -DWITH_TENSORRT=OFF \
    -DWITH_NCCL=OFF ..
ulimit -n 63356 # raise the open-file limit; the build opens many files at once and otherwise fails with "Too many open files"
make -j
make inference_lib_dist

During make -j, errors like the following appeared:

fatal: unable to access 'https://github.com/xianyi/OpenBLAS.git/': Failed to connect to github.com port 443: Connection timed out
fatal: unable to access 'https://github.com/dmlc/dlpack.git/': Failed to connect to github.com port 443: Connection timed out
fatal: unable to access 'https://github.com/herumi/xbyak.git/': Failed to connect to github.com port 443: Connection timed out
fatal: unable to access 'https://github.com/madler/zlib.git/': Failed to connect to github.com port 443: Connection timed out
Cloning into 'extern_openblas'...
Cloning into 'extern_dlpack'...
Cloning into 'extern_xbyak'...
Cloning into 'extern_zlib'...
fatal: unable to access 'https://github.com/gflags/gflags.git/': Empty reply from server
Cloning into 'extern_gflags'...
fatal: unable to access 'https://github.com/baidu-research/warp-ctc.git/': Failed to connect to github.com port 443: Connection timed out
Cloning into 'extern_warpctc'...
fatal: unable to access 'https://github.com/herumi/xbyak.git/': Failed to connect to github.com port 443: Connection timed out
fatal: unable to access 'https://github.com/dmlc/dlpack.git/': Failed to connect to github.com port 443: Connection timed out
fatal: unable to access 'https://github.com/xianyi/OpenBLAS.git/': Failed to connect to github.com port 443: Connection timed out
fatal: unable to access 'https://github.com/madler/zlib.git/': Failed to connect to github.com port 443: Connection timed out

The root cause is that the Paddle build pulls in several third-party dependencies by cloning their sources from GitHub and building them; those clones failed because of network problems, hence the errors above.

You can ignore the errors and simply run make -j again, repeating until it succeeds (a retry sketch is shown below).
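
A minimal retry sketch, assuming the failures are only these transient clone timeouts; it just re-runs the build until it exits successfully:

# keep rebuilding until all third-party clones go through
until make -j$(nproc); do
    echo "build failed (probably a clone timeout), retrying..."
    sleep 10
done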

After the build finishes, you can see:

Paddle/build/paddle_inference_install_dir$ ls
CMakeCache.txt  paddle  third_party  version.txt

The paddle directory looks like this:

├── include
│   ├── crypto
│   │   └── cipher.h
│   ├── experimental
│   │   ├── complex128.h
│   │   ├── complex64.h
│   │   ├── ext_all.h
│   │   ├── ext_dispatch.h
│   │   ├── ext_dll_decl.h
│   │   ├── ext_dtype.h
│   │   ├── ext_exception.h
│   │   ├── ext_op_meta_info.h
│   │   ├── ext_place.h
│   │   ├── ext_tensor.h
│   │   └── float16.h
│   ├── internal
│   │   └── framework.pb.h
│   ├── paddle_analysis_config.h
│   ├── paddle_api.h
│   ├── paddle_infer_declare.h
│   ├── paddle_inference_api.h
│   ├── paddle_mkldnn_quantizer_config.h
│   ├── paddle_pass_builder.h
│   └── paddle_tensor.h
└── lib
    ├── libpaddle_inference.a
    └── libpaddle_inference.so

Note that the library under lib is libpaddle_inference.so: this is what a Paddle 2.0+ build produces, whereas Paddle 1.x ships libpaddle_fluid.so.

4. PaddleOCR C++ code

Because the Paddle library was built from a 2.0+ source tree, the PaddleOCR code you download must also target 2.0+; if the versions do not match, the build fails (see the libpaddle_fluid.so error at the end of this section).

1. Download the PaddleOCR source

git clone https://github.com/PaddlePaddle/PaddleOCR.git
cd PaddleOCR/deploy/cpp_infer/

2. Download the ch_ppocr_server_v2.0_xx models

https://github.com/PaddlePaddle/PaddleOCR

Put the detection, classification, and recognition models into a new inference folder under PaddleOCR/deploy/cpp_infer/ (a download sketch follows the config snippet below), and update the model paths in ./tools/config.txt:

# det config
det_model_dir  ./inference/ch_ppocr_server_v2.0_det_infer/

# cls config
use_angle_cls 0
cls_model_dir  ./inference/ch_ppocr_mobile_v2.0_cls_infer/
cls_thresh  0.9

# rec config
rec_model_dir  ./inference/ch_ppocr_server_v2.0_rec_infer/
char_list_file ../../ppocr/utils/ppocr_keys_v1.txt
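
A sketch of fetching and unpacking the three models into the inference folder. The download URLs below are assumptions based on the PaddleOCR model list linked above; take the real links from that page:

cd PaddleOCR/deploy/cpp_infer/
mkdir -p inference && cd inference
# URLs are assumed; copy the current ones from the PaddleOCR model list
wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_server_v2.0_det_infer.tar
wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar
wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_server_v2.0_rec_infer.tar
for f in *.tar; do tar -xf "$f"; done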

3. Modify tools/build.sh (set OPENCV_DIR and LIB_DIR to your own paths, and change WITH_MKL to OFF since Paddle was built without MKL):

OPENCV_DIR=~/opencv347/opencv3/                                         # modified
LIB_DIR=~/paddleocr/Paddle/build/paddle_inference_install_dir/          # modified
#CUDA_LIB_DIR=your_cuda_lib_dir
#CUDNN_LIB_DIR=your_cudnn_lib_dir

BUILD_DIR=build
rm -rf ${BUILD_DIR}
mkdir ${BUILD_DIR}
cd ${BUILD_DIR}
cmake .. \
    -DPADDLE_LIB=${LIB_DIR} \
    -DWITH_MKL=OFF \
    -DWITH_GPU=OFF \
    -DWITH_STATIC_LIB=OFF \
    -DUSE_TENSORRT=OFF \
    -DOPENCV_DIR=${OPENCV_DIR}
    # -DCUDNN_LIB=${CUDNN_LIB_DIR} \
    # -DCUDA_LIB=${CUDA_LIB_DIR} \

make -j

4. Run

./tools/build.sh
./tools/run.sh 
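
In the PaddleOCR version I used, run.sh boils down to calling the built binary with the config file and a test image, so running on your own image works the same way (the binary name ocr_system comes from the build target; the image path below is just an example):

# run the demo on a specific image
./build/ocr_system ./tools/config.txt ../../doc/imgs/12.jpg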

The output looks like this:

The detection visualized image saved in ./ocr_vis.png
The predicted text is :
上海斯格威铂尔大酒店    score: 0.991523
打浦路15号      score: 0.996377
绿洲仕格维花园公寓      score: 0.996051
打浦路25 29 35号        score: 0.943492
Cost  2.68492s

I also hit the error below at one point. The cause is that the PaddleOCR code I had downloaded was written for Paddle 1.x, so its CMakeLists.txt still links libpaddle_fluid.so (the Paddle 1.x inference library), while the library I built is libpaddle_inference.so (Paddle 2.0+). The fix is to download the latest PaddleOCR code from the official repository.

make[2]: *** No rule to make target '/home/zhangyt/paddleocr/Paddle/build/paddle_inference_install_dir/paddle/lib/libpaddle_fluid.so', needed by 'ocr_system'.  Stop.
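
A quick way to check whether the PaddleOCR tree you downloaded still targets the old library (a diagnostic sketch, run from deploy/cpp_infer/):

# an up-to-date tree should reference paddle_inference rather than paddle_fluid
grep -n "paddle_fluid\|paddle_inference" CMakeLists.txt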

5. PaddleOCR C++ GPU version

5.1 Rebuild Paddle with GPU support

Building Paddle with GPU support requires CUDA 10.1 or newer.

cd Paddle
cd build
rm -rf *

cmake -DWITH_CONTRIB=OFF \
    -DWITH_MKL=OFF \
    -DWITH_MKLDNN=OFF  \
    -DWITH_TESTING=OFF \
    -DCMAKE_BUILD_TYPE=Release \
    -DWITH_INFERENCE_API_TEST=OFF \
    -DON_INFER=ON \
    -DWITH_PYTHON=OFF \
    -DWITH_GPU=ON \
    -DWITH_TENSORRT=OFF \
    -DWITH_NCCL=OFF ..
ulimit -n 63356 
make -j
make inference_lib_dist -j
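
To confirm that the rebuilt library really has GPU support compiled in, checking its dynamic dependencies is a quick test (a sketch; the path follows the install directory shown earlier, run from Paddle/build):

# a GPU build should pull in the CUDA/cuDNN runtime libraries
ldd paddle_inference_install_dir/paddle/lib/libpaddle_inference.so | grep -i -E "cuda|cudnn"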

5.2 Modify PaddleOCR

Modify tools/build.sh (set the CUDA and cuDNN library paths and turn WITH_GPU on):

OPENCV_DIR=~/opencv347/opencv3/                                         # modified
LIB_DIR=~/paddleocr/Paddle/build/paddle_inference_install_dir/          # modified
CUDA_LIB_DIR=/usr/local/cuda-10.1/lib64                                 # modified
CUDNN_LIB_DIR=/usr/local/cuda-10.1/lib64                                # modified

BUILD_DIR=build
rm -rf ${BUILD_DIR}
mkdir ${BUILD_DIR}
cd ${BUILD_DIR}
cmake .. \
    -DPADDLE_LIB=${LIB_DIR} \
    -DWITH_MKL=OFF \
    -DWITH_GPU=ON \
    -DWITH_STATIC_LIB=OFF \
    -DUSE_TENSORRT=OFF \
    -DOPENCV_DIR=${OPENCV_DIR} \
    -DCUDNN_LIB=${CUDNN_LIB_DIR} \
    -DCUDA_LIB=${CUDA_LIB_DIR}


make -j

Modify build/config.txt:

use_gpu 1
gpu_id  0

Run

./tools/build.sh
./tools/run.sh 

At first the total time seems longer, but that is because loading the model onto the GPU takes longer; the inference itself takes only about 50 ms:

The predicted text is :
我一直在承受我这个年纪  score: 0.998935
不该有的帅气和机智      score: 0.977983
我好累  score: 0.982173
二      score: 0.637757
Cost  0.052505s

If the server has multiple GPUs, you may need:

export CUDA_VISIBLE_DEVICES=0

References

https://blog.csdn.net/u013171226/article/details/115398058

https://www.paddlepaddle.org.cn/documentation/docs/en/develop/guides/05_inference_deployment/inference/build_and_install_lib_en.html
