1. Install Anaconda. Installation tutorials are widely available online, so the steps are not repeated here.
Official download: https://www.anaconda.com/distribution/#download-section
2. Install the NVIDIA driver, CUDA 10.1, and cuDNN. You can refer to my earlier post:
https://mp.csdn.net/console/editor/html/105434809
3. Install the AlphaPose code:
# 1.1 Create a conda virtual environment.
conda create -n alphapose python=3.6 -y
conda activate alphapose
# 1.2 Install PyTorch
conda install pytorch torchvision cudatoolkit=10.1 -c pytorch
# 1.3 Get AlphaPose
git clone https://github.com/MVIG-SJTU/AlphaPose.git
cd AlphaPose
# 1.4 install
export PATH=/usr/local/cuda/bin/:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64/:$LD_LIBRARY_PATH
python -m pip install cython
sudo apt-get install libyaml-dev
python setup.py build develop
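After the build finishes, a quick sanity check helps confirm the environment before moving on. This is a minimal sketch (run it inside the activated alphapose env); it reports the interpreter version and, if PyTorch is installed, whether it can see the GPU:

```python
# Minimal environment check (a sketch; run inside the activated alphapose env).
import sys

def env_report():
    """Report the Python version and, if available, the PyTorch/CUDA status."""
    info = {"python": sys.version.split()[0], "torch": None, "cuda": False}
    try:
        import torch  # present only after the install steps above
        info["torch"] = torch.__version__
        info["cuda"] = torch.cuda.is_available()
    except ImportError:
        pass  # torch not installed yet
    return info

if __name__ == "__main__":
    print(env_report())
```

If `cuda` reports False on a GPU machine, re-check the driver/CUDA installation from step 2 before continuing.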
Alternatively, install with pip:
# 1. Install PyTorch
pip install torch torchvision
# 2. Get AlphaPose
git clone https://github.com/MVIG-SJTU/AlphaPose.git
cd AlphaPose
# 3. install
export PATH=/usr/local/cuda/bin/:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64/:$LD_LIBRARY_PATH
pip install cython
sudo apt-get install libyaml-dev
python setup.py build develop --user
4. Download the models
4.1 Download the object detection model manually: yolov3-spp.weights (Google Drive | Baidu pan). Place it into detector/yolo/data.
4.2 For pose tracking, download the object tracking model manually: JDE-1088x608-uncertainty (Google Drive | Baidu pan). Place it into detector/tracker/data.
4.3 Download the pose models and place them into pretrained_models. All models and details are available in the AlphaPose Model Zoo.
If you only want to test inference (no tracking), the model in 4.2 is not required.
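Before running the demo, it can save time to verify that the downloaded weights are in place. A small pre-flight sketch (the two file names below come from steps 4.1/4.3 and the demo command; the optional tracking model from 4.2 is omitted):

```python
import os

# Paths follow steps 4.1 and 4.3 above; the optional tracking model (4.2) is omitted.
MODEL_FILES = [
    "detector/yolo/data/yolov3-spp.weights",
    "pretrained_models/fast_res50_256x192.pth",
]

def missing_models(root="."):
    """Return the expected model files that are not present under root."""
    return [f for f in MODEL_FILES if not os.path.isfile(os.path.join(root, f))]

if __name__ == "__main__":
    for f in missing_models():
        print("missing:", f)
```

Run it from the AlphaPose repository root; any "missing" line means the corresponding download step still needs to be done.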
5. Run
Video inference:
python scripts/demo_inference.py --cfg configs/coco/resnet/256x192_res50_lr1e-3_1x.yaml --checkpoint pretrained_models/fast_res50_256x192.pth --video AlphaPose_video.avi --outdir examples/res --detector yolo --save_img --save_video
Image inference:
python scripts/demo_inference.py --cfg configs/coco/resnet/256x192_res50_lr1e-3_1x.yaml --checkpoint pretrained_models/fast_res50_256x192.pth --indir examples/demo/
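After either command finishes, AlphaPose writes its detections to a JSON results file in the output directory (alphapose-results.json by default). The exact field names used below (image_id, score, and keypoints as a flat [x, y, confidence, ...] list) are an assumption based on AlphaPose's default COCO-style output; a small sketch for summarizing such a file:

```python
import json

def summarize(path):
    """Summarize an AlphaPose results file: one entry per detected person.

    Assumes the default output format: a list of detections, each with
    "image_id", "score", and "keypoints" flattened as [x1, y1, c1, x2, y2, c2, ...].
    """
    with open(path) as f:
        detections = json.load(f)
    people = []
    for det in detections:
        kps = det["keypoints"]
        xs, ys, confs = kps[0::3], kps[1::3], kps[2::3]
        people.append({
            "image": det["image_id"],
            "score": det["score"],
            "mean_joint_conf": sum(confs) / len(confs),
            # Tight bounding box around the detected keypoints.
            "bbox": (min(xs), min(ys), max(xs), max(ys)),
        })
    return people

if __name__ == "__main__":
    for person in summarize("examples/res/alphapose-results.json"):
        print(person)
```

This is handy for a quick look at how many people were detected per image and how confident the keypoint predictions are.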