[Paper Reading] NTU RGB+D: A Large Scale Dataset for 3D Human Activity Analysis

NTU RGB+D: A Large Scale Dataset for 3D Human Activity Analysis

(CVPR 2016)

Amir Shahroudy, Jun Liu, Tian-Tsong Ng, Gang Wang

Notes

Paper link: https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Shahroudy_NTU_RGBD_A_CVPR_2016_paper.pdf

GitHub: https://github.com/shahroudy/NTURGB-D

Dataset link:

Contributions

1. Introduce a large-scale dataset for RGB+D human action recognition.

2. Propose a new recurrent neural network structure to model the long-term temporal correlation of the features for each body part.

Limitations in Previous 3D Action Recognition Benchmarks

1. The small number of subjects and the very narrow range of performers' ages make the intra-class variation of the actions very limited.

2. Only a very small number of action classes are available.

3. Highly restricted camera views: for most of the datasets, all samples are captured from a front view with a fixed camera viewpoint.

4. The very limited number of video samples prevents applying the most advanced data-driven learning methods to this problem.

Details of NTU RGB+D

1. Number of RGB+D video samples: 56,880

2. 40 different human subjects

3. 60 action classes in total:

  • 40 daily actions (drinking, eating, reading, etc.),
  • 9 health-related actions (sneezing, staggering, falling down, etc.), and
  • 11 mutual actions (punching, kicking, hugging, etc.).

4. Hardware: Microsoft Kinect v2

5. Data modalities:

  • RGB videos (1920 × 1080),
  • depth sequences (512 × 424),
  • skeleton data (3D locations of 25 major body joints; a parsing sketch follows this list), and
  • infrared frames (512 × 424)
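
A minimal sketch for reading the 3D joint positions out of a .skeleton file. It assumes the plain-text layout used by the MATLAB reader in the shahroudy/NTURGB-D repo (a frame count; then per frame a body count; per body a 10-value info line, a joint count, and one 12-value line per joint whose first three values are the 3D position). Verify against the official reader before relying on it:

```python
import numpy as np

def read_skeleton_xyz(path):
    """Return a list over frames; each frame is a list of (J, 3) joint arrays."""
    with open(path) as f:
        tokens = iter(f.read().split())
    frames = []
    for _ in range(int(next(tokens))):          # frame count
        bodies = []
        for _ in range(int(next(tokens))):      # bodies in this frame
            for _ in range(10):                 # per-body info line (skipped here)
                next(tokens)
            joint_count = int(next(tokens))     # normally 25
            joints = np.empty((joint_count, 3), dtype=np.float32)
            for j in range(joint_count):
                vals = [next(tokens) for _ in range(12)]
                joints[j] = [float(v) for v in vals[:3]]  # x, y, z (meters)
            bodies.append(joints)
        frames.append(bodies)
    return frames
```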

6. 80 distinct camera viewpoints (collected over 17 different camera setups):

  • Three cameras were used at the same time to capture three different horizontal views of the same action.
  • For each setup, the three cameras were located at the same height but at three different horizontal angles: −45°, 0°, +45°.
  • Each subject was asked to perform each action twice, once towards the left camera and once towards the right camera.
  • In this way, two front views, one left side view, one right side view, one left 45-degree view, and one right 45-degree view are captured. The three cameras are assigned consistent numbers: camera 1 always observes the 45-degree views, while cameras 2 and 3 observe the front and side views.

7. The age range of the subjects is from 10 to 35 years.

8. Capture is limited to indoor scenes, but ambiance variation is provided by recording under various background conditions.

9. Cross-subject and cross-view evaluation benchmarks (detailed below).

Benchmark Evaluations

1. Cross-Subject Evaluation

  • The 40 subjects are split into training and testing groups, each consisting of 20 subjects.
  • For this evaluation, the training and testing sets have 40,320 and 16,560 samples, respectively.
  • The IDs of the training subjects are: 1, 2, 4, 5, 8, 9, 13, 14, 15, 16, 17, 18, 19, 25, 27, 28, 31, 34, 35, 38;
  • the remaining subjects are reserved for testing (a split sketch follows this list).
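
A sketch of the cross-subject split. It relies on the sample naming convention SsssCcccPpppRrrrAaaa (e.g. S001C002P003R002A013) documented in the shahroudy/NTURGB-D repo, where sss is the setup, ccc the camera, ppp the performer (subject), rrr the replication, and aaa the action class; the helper names here are illustrative:

```python
import re

# Training-subject IDs from the paper's cross-subject protocol.
TRAIN_SUBJECTS = {1, 2, 4, 5, 8, 9, 13, 14, 15, 16, 17, 18, 19,
                  25, 27, 28, 31, 34, 35, 38}

NAME_RE = re.compile(r"S(\d{3})C(\d{3})P(\d{3})R(\d{3})A(\d{3})")

def parse_name(filename):
    """Return (setup, camera, subject, replication, action) from a sample name."""
    m = NAME_RE.search(filename)
    if m is None:
        raise ValueError(f"not an NTU RGB+D sample name: {filename}")
    return tuple(int(g) for g in m.groups())

def is_cross_subject_train(filename):
    _, _, subject, _, _ = parse_name(filename)
    return subject in TRAIN_SUBJECTS
```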

2. Cross-View Evaluation

  • All samples of camera 1 are picked for testing, and samples of cameras 2 and 3 for training.
  • In other words, the training set consists of the front and two side views of the actions,
  • while the testing set includes the left and right 45-degree views of the action performances.
  • For this evaluation, the training and testing sets have 37,920 and 18,960 samples, respectively (a split sketch follows this list).
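
Reusing parse_name from the cross-subject sketch above, the cross-view split only needs the camera ID (again, the helper name is illustrative):

```python
def is_cross_view_train(filename):
    """Cameras 2 and 3 are for training; camera 1 (45-degree views) for testing."""
    _, camera, _, _, _ = parse_name(filename)
    return camera in (2, 3)
```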

Part-Aware LSTM Network

In our model, we group the body joints into five part groups: torso, two hands, and two legs. 
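
One plausible mapping of the 25 Kinect v2 joints (1-indexed, following the joint list in the dataset's GitHub README) onto these five part groups; the exact assignment below is an assumption, not taken from the paper:

```python
# Assumed part grouping; joint indices follow the repo README's 1-indexed list.
PART_JOINTS = {
    "torso":      [1, 2, 3, 4, 21],        # spine base/mid, neck, head, spine (shoulder)
    "left_hand":  [5, 6, 7, 8, 22, 23],    # shoulder, elbow, wrist, hand, hand tip, thumb
    "right_hand": [9, 10, 11, 12, 24, 25],
    "left_leg":   [13, 14, 15, 16],        # hip, knee, ankle, foot
    "right_leg":  [17, 18, 19, 20],
}
```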

1. Traditional RNN and LSTM

2. Proposed P-LSTM
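
In the P-LSTM, each part group keeps its own input gate, forget gate, modulation input, and cell state, computed from that part's input and the shared hidden state, while a single output gate is shared over the concatenation of the part cells. Below is a minimal PyTorch sketch of such a cell; the class name, layer shapes, and the use of PyTorch itself are assumptions, not the authors' implementation:

```python
import torch
import torch.nn as nn

class PartAwareLSTMCell(nn.Module):
    def __init__(self, part_input_size, part_hidden_size, num_parts=5):
        super().__init__()
        self.num_parts = num_parts
        hidden_size = num_parts * part_hidden_size
        # Per-part input gate i, forget gate f, and modulation input g,
        # each computed from that part's input and the shared hidden state.
        self.part_gates = nn.ModuleList([
            nn.Linear(part_input_size + hidden_size, 3 * part_hidden_size)
            for _ in range(num_parts)])
        # One output gate shared over all parts, fed with the full input.
        self.output_gate = nn.Linear(
            num_parts * part_input_size + hidden_size, hidden_size)

    def forward(self, x_parts, h, cells):
        # x_parts: list of num_parts tensors, each (batch, part_input_size)
        # h: (batch, num_parts * part_hidden_size); cells: list of part cell states
        new_cells = []
        for p in range(self.num_parts):
            z = self.part_gates[p](torch.cat([x_parts[p], h], dim=1))
            i, f, g = z.chunk(3, dim=1)
            c = torch.sigmoid(f) * cells[p] + torch.sigmoid(i) * torch.tanh(g)
            new_cells.append(c)
        o = torch.sigmoid(self.output_gate(torch.cat(x_parts + [h], dim=1)))
        h_new = o * torch.tanh(torch.cat(new_cells, dim=1))
        return h_new, new_cells
```

At each time step, x_parts would hold the concatenated 3D coordinates of the joints in each of the five groups (e.g. gathered via a mapping like PART_JOINTS above), with h and the part cells initialized to zeros.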

Experimental Results
