Stacked Hourglass Networks for Human Pose Estimation - Demo Code
Stacked Hourglass Networks for Human Pose Estimation
- Project
- Demo Code – pose-hg-demo
- Pre-trained model
- Training code – pose-hg-train
The pose-hg-demo repository contains the main demo files and folders. The walkthrough below is based on Docker, Python, and pose-hg-demo.
1. Pull the Torch7 Docker image
$ sudo nvidia-docker pull registry.cn-hangzhou.aliyuncs.com/docker_learning_aliyun/torch:v1
2. Run the demo on the MPII Human Pose dataset
Download the MPII Human Pose dataset and place the images in the images folder.
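Before running the demo, it can help to confirm that every image named in the annotation list is actually present in the images folder. The sketch below demonstrates the check on a tiny synthetic layout (the file names and paths are hypothetical stand-ins; in the real repository the list is annot/test_images.txt and the images live in images/):

```python
import os
import tempfile

# Build a tiny synthetic layout just to demonstrate the check; in practice
# point annot_file at annot/test_images.txt and images_dir at images/.
root = tempfile.mkdtemp()
images_dir = os.path.join(root, 'images')
os.makedirs(images_dir)
for name in ['000001163.jpg', '000003072.jpg']:  # hypothetical file names
    open(os.path.join(images_dir, name), 'wb').close()

annot_file = os.path.join(root, 'test_images.txt')
with open(annot_file, 'w') as f:
    f.write('000001163.jpg\n000003072.jpg\n000004812.jpg\n')

# The actual check: every listed image should exist on disk.
with open(annot_file) as f:
    names = [line.strip() for line in f if line.strip()]
missing = [n for n in names if not os.path.isfile(os.path.join(images_dir, n))]
print('%d images listed, %d missing' % (len(names), len(missing)))
```

A non-empty `missing` list usually means the dataset archive was extracted to the wrong directory.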
$ sudo nvidia-docker run -it --rm -v /path/to/pose-hg-demo-master:/media registry.cn-hangzhou.aliyuncs.com/docker_learning_aliyun/torch:v1
# now inside the Torch container
root@8f1548fc3b34:~/torch#
cd /media                     # i.e. pose-hg-demo-master on the host
th main.lua predict-test      # runs pose estimation; predictions are saved to 'preds/test.h5'
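Once predict-test finishes, the saved predictions can be sanity-checked with h5py: the preds dataset should have shape N x 16 x 2 (one (x, y) coordinate per MPII joint for each of the N test images). A minimal sketch, writing a small synthetic file here since the real './preds/test.h5' only exists after the step above:

```python
import h5py
import numpy as np

# Write a tiny synthetic predictions file for illustration; in practice,
# open './preds/test.h5' produced by `th main.lua predict-test`.
with h5py.File('demo_preds.h5', 'w') as f:
    f['preds'] = np.random.rand(5, 16, 2) * 100  # 5 images, 16 joints, (x, y)

with h5py.File('demo_preds.h5', 'r') as f:
    preds = f['preds'][:]

print(preds.shape)  # (num_images, 16, 2)
```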
The predicted poses can be visualized with the following Python script:
#!/usr/bin/env python
import h5py
import imageio  # scipy.misc.imread was removed in SciPy 1.2; imageio replaces it
import matplotlib.pyplot as plt

with open('../annot/test_images.txt', 'r') as f:
    test_images = f.readlines()

images_path = './images/'

with h5py.File('./preds/test.h5', 'r') as f:
    preds = f['preds'][:]

assert len(test_images) == len(preds)

for i in range(len(test_images)):
    filename = images_path + test_images[i].strip()
    im = imageio.imread(filename)
    pose = preds[i]
    plt.axis('off')
    plt.imshow(im)
    for j in range(16):  # 16 MPII joints; non-positive coordinates mark missing joints
        if pose[j][0] > 0 and pose[j][1] > 0:
            plt.scatter(pose[j][0], pose[j][1], marker='x', c='r')
    plt.show()
    plt.clf()
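When running headless (for example inside the Docker container), plt.show() has no display to draw on. A variant that saves each overlay to disk instead, sketched here with a synthetic image and pose so it is self-contained (the output path vis/overlay_0000.png is an arbitrary choice):

```python
import matplotlib
matplotlib.use('Agg')  # headless backend: render to files, no display needed
import matplotlib.pyplot as plt
import numpy as np
import os

os.makedirs('vis', exist_ok=True)

# Synthetic stand-ins for one image and one 16-joint pose prediction;
# in the real script these come from imageio.imread and preds[i].
im = np.zeros((256, 256, 3), dtype=np.uint8)
pose = np.random.rand(16, 2) * 256

plt.axis('off')
plt.imshow(im)
for j in range(16):
    if pose[j][0] > 0 and pose[j][1] > 0:
        plt.scatter(pose[j][0], pose[j][1], marker='x', c='r')
plt.savefig('vis/overlay_0000.png', bbox_inches='tight')
plt.close()
```

In the loop over test images, replace plt.show() with a per-image plt.savefig call to write one overlay file per prediction.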