
News!

Dec 2019: v0.3.0 version of AlphaPose is released! Smaller model, higher accuracy!

Apr 2019: MXNet version of AlphaPose is released! It runs at 23 fps on COCO validation set.

Feb 2019: CrowdPose is now integrated into AlphaPose!

Dec 2018: General version of PoseFlow is released! 3X faster and supports pose tracking result visualization!

Sep 2018: v0.2.0 version of AlphaPose is released! It runs at 20 fps on COCO validation set (4.6 people per image on average) and achieves 71 mAP!

AlphaPose

AlphaPose is an accurate multi-person pose estimator, which is the first open-source system that achieves 70+ mAP (75 mAP) on the COCO dataset and 80+ mAP (82.1 mAP) on the MPII dataset.

To match poses that correspond to the same person across frames, we also provide an efficient online pose tracker called Pose Flow. It is the first open-source online pose tracker that achieves both 60+ mAP (66.5 mAP) and 50+ MOTA (58.3 MOTA) on the PoseTrack Challenge dataset.
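Pose tracking boils down to associating the poses estimated in the current frame with the identities from the previous frame. The snippet below is a minimal, illustrative sketch of that association step: a naive greedy matcher based on mean keypoint distance. It is not PoseFlow's actual algorithm, which uses a more robust flow-based association; see PoseFlow/README.md for the real tracker.

```python
# Illustrative only: greedily link poses across two consecutive frames
# by mean keypoint distance. Not PoseFlow's actual algorithm.
import numpy as np

def match_poses(prev_poses, curr_poses, max_dist=50.0):
    """Link each current pose to the closest unused previous pose.

    prev_poses, curr_poses: lists of (K, 2) arrays of keypoint coordinates.
    Returns a list of (curr_idx, prev_idx or None) assignments.
    """
    assignments = []
    used = set()
    for ci, cur in enumerate(curr_poses):
        best_pi, best_d = None, max_dist
        for pi, prev in enumerate(prev_poses):
            if pi in used:
                continue
            # Mean Euclidean distance over all keypoints.
            d = float(np.linalg.norm(cur - prev, axis=1).mean())
            if d < best_d:
                best_pi, best_d = pi, d
        if best_pi is not None:
            used.add(best_pi)
        assignments.append((ci, best_pi))
    return assignments
```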

AlphaPose supports both Linux and Windows!

Results

Pose Estimation

Results on COCO test-dev 2015:

| Method | AP @0.5:0.95 | AP @0.5 | AP @0.75 | AP medium | AP large |
|---|---|---|---|---|---|
| OpenPose (CMU-Pose) | 61.8 | 84.9 | 67.5 | 57.1 | 68.2 |
| Detectron (Mask R-CNN) | 67.0 | 88.0 | 73.1 | 62.2 | 75.6 |
| AlphaPose | 72.3 | 89.2 | 79.1 | 69.0 | 78.6 |
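For readers unfamiliar with the column headers: COCO keypoint AP is computed from Object Keypoint Similarity (OKS). AP @0.5:0.95 averages AP over OKS thresholds from 0.50 to 0.95 in steps of 0.05, while AP medium and AP large restrict evaluation to medium- and large-sized people. This is the standard COCO evaluation protocol, not anything specific to AlphaPose; per-person OKS is defined as

```latex
% Standard COCO Object Keypoint Similarity for one person:
% d_i  distance between predicted and ground-truth keypoint i
% s    object scale, k_i per-keypoint constant, v_i visibility flag
\mathrm{OKS} = \frac{\sum_i \exp\!\left(-d_i^2 / (2 s^2 k_i^2)\right)\,\delta(v_i > 0)}{\sum_i \delta(v_i > 0)}
```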

Results on MPII full test set:

| Method | Head | Shoulder | Elbow | Wrist | Hip | Knee | Ankle | Ave |
|---|---|---|---|---|---|---|---|---|
| OpenPose (CMU-Pose) | 91.2 | 87.6 | 77.7 | 66.8 | 75.4 | 68.9 | 61.7 | 75.6 |
| Newell & Deng | 92.1 | 89.3 | 78.9 | 69.8 | 76.2 | 71.6 | 64.7 | 77.5 |
| AlphaPose | 91.3 | 90.5 | 84.0 | 76.4 | 80.3 | 79.9 | 72.4 | 82.1 |

More results and models are available in docs/MODEL_ZOO.md.

Pose Tracking

Please read PoseFlow/README.md for details.

CrowdPose

Please read docs/CrowdPose.md for details.

Installation

Please check out docs/INSTALL.md

Model Zoo

Please check out docs/MODEL_ZOO.md for the available pretrained models.

Quick Start

Inference: Inference demo

```bash
./scripts/inference.sh ${CONFIG} ${CHECKPOINT} ${VIDEO_NAME} # ${OUTPUT_DIR}, optional
```

Training: Train from scratch

```bash
./scripts/train.sh ${CONFIG} ${EXP_ID}
```

Validation: Validate your model on MSCOCO val2017

```bash
./scripts/validate.sh ${CONFIG} ${CHECKPOINT}
```

Examples:

Demo using the FastPose model (a sketch for reading the resulting JSON follows these examples):

```bash
./scripts/inference.sh configs/coco/resnet/256x192_res50_lr1e-3_1x.yaml pretrained_models/fast_res50_256x192.pth ${VIDEO_NAME}
# or
python scripts/demo_inference.py --cfg configs/coco/resnet/256x192_res50_lr1e-3_1x.yaml --checkpoint pretrained_models/fast_res50_256x192.pth --indir examples/demo/
```

Train FastPose on the MSCOCO dataset.

```bash
./scripts/train.sh ./configs/coco/resnet/256x192_res50_lr1e-3_1x.yaml exp_fastpose
```
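After the inference demo finishes, the detections can be inspected programmatically. The sketch below assumes the default JSON output layout (a file such as alphapose-results.json in the output directory, containing records with "image_id", "score" and a flat "keypoints" list of x, y, confidence triples); the path is hypothetical, so check your own --outdir and output format and adjust accordingly.

```python
# Minimal sketch for inspecting AlphaPose detection output.
# Assumes the default JSON layout described above; the path below is
# hypothetical and should point at your actual result file.
import json

with open("examples/res/alphapose-results.json") as f:
    results = json.load(f)

for det in results[:5]:  # look at the first few detections
    kpts = det["keypoints"]
    # Reshape the flat list into (x, y, confidence) triples.
    triples = [kpts[i:i + 3] for i in range(0, len(kpts), 3)]
    confident = sum(1 for _, _, c in triples if c > 0.5)
    print(f'{det["image_id"]}: pose score {det["score"]:.2f}, '
          f'{confident}/{len(triples)} keypoints above 0.5 confidence')
```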

For more detailed inference options and examples, please refer to GETTING_STARTED.md.

Common issue & FAQ

Check out faq.md for frequently asked questions. If it cannot solve your problem, or if you find any bugs, don't hesitate to comment on GitHub or make a pull request!

Contributors

AlphaPose is based on RMPE (ICCV'17), authored by Hao-Shu Fang, Shuqin Xie, Yu-Wing Tai and Cewu Lu; Cewu Lu is the corresponding author. Currently, it is maintained by Jiefeng Li*, Hao-Shu Fang*, Yuliang Xiu and Chao Xu.

The main contributors are listed in doc/contributors.md.

TODO

Multi-GPU/CPU inference

3D pose

add tracking flag

PyTorch C++ version

Add MPII and AIC data

dense support

small box easy filter

CrowdPose support

Speed up PoseFlow

Add stronger/light detectors and the mobile pose

High level API

We would really appreciate it if you can offer any help and become a contributor to AlphaPose.

Citation

Please cite these papers in your publications if they help your research:

```
@inproceedings{fang2017rmpe,
  title={{RMPE}: Regional Multi-person Pose Estimation},
  author={Fang, Hao-Shu and Xie, Shuqin and Tai, Yu-Wing and Lu, Cewu},
  booktitle={ICCV},
  year={2017}
}

@article{li2018crowdpose,
  title={CrowdPose: Efficient Crowded Scenes Pose Estimation and A New Benchmark},
  author={Li, Jiefeng and Wang, Can and Zhu, Hao and Mao, Yihuan and Fang, Hao-Shu and Lu, Cewu},
  journal={arXiv preprint arXiv:1812.00324},
  year={2018}
}

@inproceedings{xiu2018poseflow,
  author={Xiu, Yuliang and Li, Jiefeng and Wang, Haoyu and Fang, Yinghong and Lu, Cewu},
  title={{Pose Flow}: Efficient Online Pose Tracking},
  booktitle={BMVC},
  year={2018}
}
```

License

AlphaPose is freely available for non-commercial use, and may be redistributed under these conditions. For commercial queries, please drop an e-mail to mvig.alphapose[at]gmail[dot]com and cc lucewu[at]sjtu[dot]edu[dot]cn. We will send the detailed agreement to you.
