Official installation tutorials:
Inference with existing models — MMPose 1.3.2 documentation
Installation — MMPose 1.3.2 documentation
Environment: Ubuntu 22.04, CUDA 11.6, NVIDIA RTX 3090 Ti. The remaining software versions are pinned in the commands below:
conda create --name openmmlab python=3.8 -y
conda activate openmmlab
conda install pytorch==1.13.1 torchvision==0.14.1 torchaudio==0.13.1 pytorch-cuda=11.6 -c pytorch -c nvidia
pip install -U openmim
mim install mmengine
mim install mmcv==2.1.0
mim install mmdet==3.2.0
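Before building MMPose, it can help to sanity-check that the pinned mmcv release falls inside the range mmdet expects (mmdet 3.2.0 is generally documented as requiring mmcv >= 2.0.0 and < 2.2.0; treat the exact bounds as an assumption to confirm against mmdet's own compatibility table). A minimal pure-Python sketch of such a check:

```python
# Sketch: check a pinned mmcv version against an assumed mmdet
# requirement range. The bounds below are assumptions; verify them
# in mmdet's mmcv-compatibility table for your release.

def parse_version(v: str) -> tuple:
    """Parse a simple 'X.Y.Z' version string into an int tuple."""
    return tuple(int(part) for part in v.split("."))

def mmcv_compatible(mmcv_version: str,
                    lower: str = "2.0.0",   # assumed inclusive lower bound
                    upper: str = "2.2.0"    # assumed exclusive upper bound
                    ) -> bool:
    """Return True if lower <= mmcv_version < upper."""
    v = parse_version(mmcv_version)
    return parse_version(lower) <= v < parse_version(upper)

print(mmcv_compatible("2.1.0"))  # the version pinned above -> True
```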
Build MMPose from source
git clone https://github.com/open-mmlab/mmpose.git
cd mmpose
pip install -r requirements.txt
pip install -v -e .
# "-v" means verbose, or more output
# "-e" means installing a project in editable mode,
# thus any local modifications made to the code will take effect without reinstallation.
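The editable-mode behavior described in the comments above can be illustrated with a small stdlib-only sketch: a module imported from a directory on `sys.path` (which is how an editable install exposes the source tree) picks up local edits after a reload, with no reinstall step. File and module names here are invented for the demonstration:

```python
# Sketch: emulate why "pip install -e ." picks up local edits.
# An editable install points the interpreter at the source tree,
# so re-importing a module reflects changes made to the files in place.
import importlib
import sys
import tempfile
from pathlib import Path

sys.dont_write_bytecode = True                # avoid stale bytecode caches

workdir = Path(tempfile.mkdtemp())
module_file = workdir / "mylocalpkg.py"       # hypothetical module name
module_file.write_text("VERSION = 'dev-1'\n")

sys.path.insert(0, str(workdir))              # like an editable install's path entry
import mylocalpkg
print(mylocalpkg.VERSION)                     # -> dev-1

# Edit the source in place, then reload: no reinstall needed.
module_file.write_text("VERSION = 'dev-2'\n")
importlib.reload(mylocalpkg)
print(mylocalpkg.VERSION)                     # -> dev-2
```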
Terminal: run the RTMPose, ViTPose, or YOLO-Pose models from the mmpose project root directory:
python demo/inferencer_demo.py /video_or_images_path \
--pose2d vitpose-h \
--pred-out-dir ../output_dir/ \
--skeleton-style openpose \
    --device cuda:0  # --device selects GPU index 0
For more parameter options, read the source code: mmpose/demo/inferencer_demo.py.
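For scripted or batch runs, the same CLI call can be assembled with the standard subprocess module. A sketch (input/output paths are placeholders; the flags mirror the command above):

```python
# Sketch: build the demo/inferencer_demo.py command shown above so it
# can be launched from a script. Paths here are placeholders.
import subprocess

def build_inferencer_cmd(inputs: str,
                         pose2d: str = "vitpose-h",
                         out_dir: str = "../output_dir/",
                         device: str = "cuda:0") -> list:
    """Return the argv list for demo/inferencer_demo.py."""
    return [
        "python", "demo/inferencer_demo.py", inputs,
        "--pose2d", pose2d,
        "--pred-out-dir", out_dir,
        "--skeleton-style", "openpose",
        "--device", device,
    ]

cmd = build_inferencer_cmd("/video_or_images_path")
print(" ".join(cmd))
# To actually run it (from the mmpose project root):
# subprocess.run(cmd, check=True)
```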
Available 2D model aliases and their corresponding configuration names:
python demo/inferencer_demo.py --show-alias