(1) Download open_model_zoo and get the pretrained models from GitHub:
open_model_zoo
(2) Run the model downloader tool to fetch pretrained models.
To download all models (not recommended; the full set takes a lot of disk space):
python D:\open_model_zoo-master\tools\model_tools\downloader.py --all --output_dir D:\intel_model_zoo_src
To download a specific pretrained model, taking the human pose estimation model as an example:
python D:\open_model_zoo-master\tools\model_tools\downloader.py --name human-pose-estimation-0005 --output_dir D:\intel_model_zoo_src
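After the download finishes, it can be handy to sanity-check that the IR files landed where expected. A minimal sketch, assuming the downloader's usual layout for Intel models (`<output_dir>/intel/<model>/<precision>/<model>.xml|.bin`); the `ir_paths` helper is hypothetical, not part of the tool:

```python
from pathlib import Path

def ir_paths(output_dir, model, precision="FP32"):
    """Build the expected .xml/.bin paths for a downloaded Intel model."""
    base = Path(output_dir) / "intel" / model / precision / model
    return base.with_suffix(".xml"), base.with_suffix(".bin")

xml, bin_ = ir_paths(r"D:\intel_model_zoo_src", "human-pose-estimation-0005")
# Check both IR files exist before pointing the demo at them.
print(xml.exists() and bin_.exists())
```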
Description (from the model's open_model_zoo documentation):
# human-pose-estimation-0005
## Use Case and High-Level Description
This is a multi-person 2D pose estimation network based on the EfficientHRNet approach (that follows the Associative Embedding framework).
For every person in an image, the network detects a human pose: a body skeleton consisting of keypoints and connections between them.
The pose may contain up to 17 keypoints: ears, eyes, nose, shoulders, elbows, wrists, hips, knees, and ankles.
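For reference, those 17 keypoint types can be written out as a list. This assumes the standard COCO keypoint ordering, which is what these body parts usually map to; verify the actual channel order against the demo's decoder:

```python
# Assumed COCO ordering of the model's 17 keypoint channels.
COCO_KEYPOINTS = [
    "nose", "left_eye", "right_eye", "left_ear", "right_ear",
    "left_shoulder", "right_shoulder", "left_elbow", "right_elbow",
    "left_wrist", "right_wrist", "left_hip", "right_hip",
    "left_knee", "right_knee", "left_ankle", "right_ankle",
]
print(len(COCO_KEYPOINTS))  # 17
```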
## Specification
| Metric | Value |
|---------------------------------|-------------------------------------------|
| Average Precision (AP) | 45.6% |
| GFlops | 5.9206 |
| MParams | 8.1506 |
| Source framework | PyTorch\* |
## Inputs
Image, name: `image`, shape: `1, 3, 288, 288` in the `B, C, H, W` format, where:
- `B` - batch size
- `C` - number of channels
- `H` - image height
- `W` - image width
Expected color order is `BGR`.
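A quick sketch of what this input spec implies: starting from a 288x288 BGR image in the H, W, C layout that OpenCV produces, the frame has to be reordered into the `B, C, H, W` blob. The zero image here is just a stand-in for a real frame:

```python
import numpy as np

# Placeholder 288x288 BGR image in H, W, C order (as cv2.imread/cv2.resize
# would produce); in a real pipeline this comes from your input frame.
image_hwc = np.zeros((288, 288, 3), dtype=np.uint8)

# Reorder to C, H, W and prepend the batch dimension -> 1, 3, 288, 288.
blob = image_hwc.transpose(2, 0, 1)[np.newaxis].astype(np.float32)
print(blob.shape)  # (1, 3, 288, 288)
```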
## Outputs
The net outputs are two blobs:
1. `heatmaps` of shape `1, 17, 144, 144` containing location heatmaps for keypoints of all types. Locations that are filtered out by non-maximum suppression algorithm have negated values assigned to them.
2. `embeddings` of shape `1, 17, 144, 144, 1` containing associative embedding values, which are used for grouping individual keypoints into poses.
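As a rough illustration of the first blob, here is a simplified per-channel arg-max over synthetic heatmap data, yielding one (y, x) candidate per keypoint type. The real demo is more involved: it applies non-maximum suppression and uses the `embeddings` blob to group keypoints into per-person poses.

```python
import numpy as np

# Synthetic heatmaps blob with the documented shape 1, 17, 144, 144.
heatmaps = np.random.rand(1, 17, 144, 144).astype(np.float32)

# For each of the 17 keypoint types, take the location of the strongest
# response as a (y, x) candidate in the 144x144 heatmap grid.
flat = heatmaps[0].reshape(17, -1)
idx = flat.argmax(axis=1)
candidates = np.stack(np.unravel_index(idx, (144, 144)), axis=1)
print(candidates.shape)  # (17, 2)
```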
After downloading, the files are laid out as follows:
(3) Use the matching demo.
The relevant demo is located at:
D:\open_model_zoo-master\demos\human_pose_estimation_demo
Add the demo's main source file to the VS project, then add the header directories the demo references to the project's Additional Include Directories:
D:\open_model_zoo-master\demos\common\cpp\monitors\include
D:\open_model_zoo-master\demos\common\cpp\pipelines\include
D:\open_model_zoo-master\demos\common\cpp\utils\include
D:\open_model_zoo-master\demos\common\cpp\models\include
Building the project fails with a missing gflags/gflags.h; gflags is Google's command-line flag processing library.
It seems the 2022 OpenVINO/open_model_zoo download does not populate /thirdparty/gflags.
From the directory
C:\Program Files (x86)\Intel\openvino_2022.1.0.643\samples\cpp\thirdparty\gflags\gflags
copy the gflags files into /thirdparty/gflags.
(4) Build the demo sources to obtain the other missing libs.
Run D:\open_model_zoo-master\demos\build_demos_msvc.bat as administrator, specifying the VS version:
C:\Program Files (x86)\Intel\openvino_2022.1.0.643>D:\open_model_zoo-master\demos\build_demos_msvc.bat VS2019
The build fails with a CMake error:
CMake Error at CMakeLists.txt:140 (find_package):
By not providing "FindOpenCV.cmake" in CMAKE_MODULE_PATH this project has
asked CMake to find a package configuration file provided by "OpenCV", but
CMake did not find one.
The fix is to add, at line 139 of CMakeLists.txt:
set(OpenCV_DIR D:/opencv64-4.5.0/opencv-4.5.0/x64/vc14/lib)
where the path is the directory containing your own OpenCVConfig.cmake.
Rebuilding produces a new error:
CMake Error at CMakeLists.txt:144 (add_subdirectory):
The source directory
D:/open_model_zoo-master/demos/thirdparty/gflags
does not contain a CMakeLists.txt file.
The build output lands in the folder:
C:\Users\25360\Documents\Intel\OpenVINO\omz_demos_build\intel64\Release
Add the generated static library (.lib) files to the VS project; the full list is:
pdh.lib
multi_channel_common.lib
shlwapi.lib
gflags_nothreads_static.lib
utils.lib
models.lib
pipelines.lib
utils_gapi.lib
monitors.lib
openvino.lib
openvino_c.lib
openvino_ir_frontend.lib
openvino_onnx_frontend.lib
openvino_paddle_frontend.lib
openvino_tensorflow_fe.lib
opencv_world450.lib
Note that shlwapi.lib and pdh.lib (Windows system libraries) must be added as well; otherwise the linker reports the unresolved symbols PdhSetCounterScaleFactor and __imp_PathMatchSpecA.
Run:
human_pose_estimation_demo.exe -d=CPU -i=C:\Users\25360\Desktop\1.jpg -m="D:\c++_openvino\src\bin\Release\human-pose-estimation-0005.xml" -at=ae
The demo is now ready to use.