Paper: Real-Time Radar-Based Gesture Detection and Recognition Built in an Edge-Computing Platform (Sensors)

Publication: IEEE SENSORS JOURNAL, VOL. 20, NO. 18, SEPTEMBER 15, 2020

Objective: A radar-based system that recognizes gestures with low computational complexity.

Existing problem: the 2-D CNN, 3-D CNN and LSTM architectures previously proposed for gesture classification require large amounts of memory and are computationally inefficient.

Method: A feature cube (range, Doppler, azimuth and elevation) as the input of a shallow CNN.

Performance: classifies 12 gestures in real time with a high F1-score.

Keywords: Gesture classification, edge-computing platform


 

Main contributions:

  1. The proposed signal processing framework recognizes more gestures (12) than those reported in other works in the literature, and it runs in real time on an edge-computing platform with limited memory and computational capability.
  2. A multi-feature encoder constructs the gesture profile, encoding range, Doppler, azimuth, elevation and temporal information into a feature cube with reduced dimensions for data-processing efficiency.
  3. A hand activity detection (HAD) algorithm based on the short-term average/long-term average (STA/LTA) concept reliably detects the tail of a gesture.
  4. Since the multi-feature encoder packs all necessary information into a compact representation, a shallow CNN taking the feature cube as input suffices to achieve promising classification performance.
  5. The framework is evaluated twofold: its performance is compared with benchmarks in an off-line scenario, and its recognition ability is also assessed in a real-time setting.

STEP 1: Feature cube

We encode the range, Doppler, azimuth, elevation and magnitude of the K points with the largest magnitudes in the range-Doppler map RD(p, q), collected over IL measurement cycles, into a feature cube V of dimension IL × K × 5.
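As a minimal sketch of this encoding step (assuming a per-cycle range-Doppler magnitude map plus hypothetical per-bin azimuth/elevation estimates; function and argument names are my own, not the paper's):

```python
import numpy as np

def build_feature_cube(rd_maps, azimuth, elevation, K=8):
    """Sketch: encode the K strongest range-Doppler points of each
    measurement cycle into a feature cube of shape (IL, K, 5).

    rd_maps:   (IL, P, Q) range-Doppler map per cycle (complex or real)
    azimuth:   (IL, P, Q) azimuth estimate per RD bin (hypothetical input)
    elevation: (IL, P, Q) elevation estimate per RD bin (hypothetical input)
    """
    IL, P, Q = rd_maps.shape
    mags = np.abs(rd_maps)
    cube = np.zeros((IL, K, 5))
    for i in range(IL):
        flat = mags[i].ravel()
        top = np.argsort(flat)[-K:][::-1]        # indices of K largest magnitudes
        p, q = np.unravel_index(top, (P, Q))     # range / Doppler bin indices
        cube[i, :, 0] = p                        # range
        cube[i, :, 1] = q                        # Doppler
        cube[i, :, 2] = azimuth[i, p, q]         # azimuth
        cube[i, :, 3] = elevation[i, p, q]       # elevation
        cube[i, :, 4] = flat[top]                # magnitude
    return cube
```

Keeping only the K strongest points per cycle is what reduces the dimensionality: the CNN sees an IL × K × 5 cube instead of IL full range-Doppler maps.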

STEP 2: Hand activity detection (similar to voice activity detection, VAD)

The proposed STA/LTA-based gesture detector detects when a gesture finishes, i.e., the tail of a gesture, rather than detecting its start time-stamp.
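The STA/LTA idea (borrowed from seismic event detection) compares a short-term moving average of signal energy against a long-term one. A hedged sketch of tail detection on a per-cycle energy series follows; window lengths and the threshold are illustrative assumptions, not the paper's values:

```python
import numpy as np

def detect_gesture_tail(energy, n_sta=5, n_lta=20, thresh=1.5):
    """Sketch: STA/LTA-based tail detection on a 1-D energy series.

    A gesture is considered active once the short-term average (STA)
    exceeds thresh * long-term average (LTA); the tail is the first
    cycle where the ratio falls back below the threshold.
    Returns that cycle index, or None if no tail is found.
    """
    active = False
    for n in range(n_lta, len(energy)):
        sta = energy[n - n_sta:n].mean()
        lta = energy[n - n_lta:n].mean()
        ratio = sta / (lta + 1e-12)
        if not active and ratio > thresh:
            active = True          # gesture onset: STA rises above LTA
        elif active and ratio < thresh:
            return n               # tail: STA has fallen back toward LTA
    return None
```

Detecting the tail (rather than the onset) lets the system buffer the preceding IL cycles and hand the completed feature cube to the classifier immediately.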

STEP 3: Supervised learning

Refer to Fig. 6.
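The paper's exact architecture is given in Fig. 6; as a hypothetical minimal sketch, the forward pass below shows how a shallow CNN (one convolution, ReLU, global average pooling, one dense layer, softmax over the 12 gesture classes) can consume the IL × K × 5 cube directly. Layer sizes are illustrative assumptions:

```python
import numpy as np

def conv2d(x, w, b):
    """Valid 2-D convolution: x (H, W, Cin), w (kh, kw, Cin, Cout), b (Cout,)."""
    kh, kw, cin, cout = w.shape
    H, W, _ = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1, cout))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = x[i:i + kh, j:j + kw, :]
            out[i, j] = np.tensordot(patch, w, axes=3) + b
    return out

def init_params(rng, cin=5, cout=16, n_classes=12, k=3):
    """Random weights for one conv layer and one dense layer (illustrative sizes)."""
    return {
        "w1": rng.standard_normal((k, k, cin, cout)) * 0.1,
        "b1": np.zeros(cout),
        "w2": rng.standard_normal((cout, n_classes)) * 0.1,
        "b2": np.zeros(n_classes),
    }

def shallow_cnn_forward(cube, params):
    """Forward pass: conv -> ReLU -> global average pool -> dense -> softmax."""
    h = np.maximum(conv2d(cube, params["w1"], params["b1"]), 0.0)
    pooled = h.mean(axis=(0, 1))                    # global average pool -> (Cout,)
    logits = pooled @ params["w2"] + params["b2"]   # -> (n_classes,)
    e = np.exp(logits - logits.max())               # numerically stable softmax
    return e / e.sum()
```

Because the cube is small, even this single-convolution network has a tiny memory and compute footprint compared with the 2-D/3-D CNN and LSTM baselines, which is the point of the compact encoding.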

Experiment:

The radar is connected to an edge-computing platform, an NVIDIA Jetson Nano, which is equipped with a quad-core ARM A57 CPU at 1.43 GHz, a 128-core Maxwell GPU and 4 GB of memory.

Results:

Its performance is first compared thoroughly with benchmarks in the literature through off-line cross-validation, and secondly its real-time capability is investigated with an on-line performance test.

OFFLINE TEST 

1) Classification Accuracy and Training Loss Curve (Table II)

2) Confusion Matrix (Fig. 11)

3) Computational Complexity and Memory (Table III)

ONLINE TEST

1) Precision, Recall and F1-Score (Table IV)

2) Detection Matrix (Table V)

3) Run time (Table VI)