Position and Orientation Agnostic Gesture Recognition Using WiFi
Aditya Virmani and Muhammad Shahzad
Department of Computer Science
North Carolina State University
Raleigh, North Carolina, USA
MobiSys’17, June 19-23, 2017, Niagara Falls, NY, USA
Overview
- The key component of WiAG is our novel theoretically grounded translation function that can generate virtual samples of a given gesture in any desired configuration using a real sample of that gesture collected from the user in a known configuration.
- Because the same gesture performed at different positions and orientations produces different waveforms, WiAG introduces a translation function: from one real gesture sample collected in a known configuration, it generates virtual samples of that gesture in other configurations, so the user only needs to provide training samples in a single configuration.
- WiAG builds a k-nearest neighbor (k-NN) classification model using the virtual samples of the gestures in each configuration.
- At recognition time, WiAG first estimates the user's current configuration and then classifies the gesture with the k-NN model for that configuration.
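The classification step above can be sketched as follows. This is a minimal illustration, not WiAG's actual implementation: the feature vectors, the Euclidean distance metric, and the `virtual_samples` input (assumed to be produced by the translation function for the estimated configuration) are all simplifying assumptions.

```python
from collections import Counter
import math

def euclidean(a, b):
    # Plain Euclidean distance between two feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_classify(sample, virtual_samples, k=3):
    """Classify one gesture sample by majority vote among its k nearest
    virtual samples. `virtual_samples` is a list of (feature_vector,
    gesture_label) pairs, assumed generated by the translation function
    for the user's estimated configuration."""
    neighbors = sorted(virtual_samples, key=lambda s: euclidean(sample, s[0]))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]
```

The choice of k and of the distance metric over CSI-derived features is left open here; the point is only that each configuration gets its own pool of virtual training samples.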
Technical Challenges
- Model building: first quantify the changes that a point-like object in linear motion induces in the signal, then extend the analysis to a human limb in linear motion, and finally to an arm in non-linear motion.
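The starting point of this kind of derivation is the standard dynamic multipath model (the notation below is illustrative, not necessarily WiAG's exact formulation): a reflection path whose length changes as the object moves contributes a rotating phase term to the CSI.

```latex
% A reflection path of time-varying length d(t), with complex attenuation
% a(f, t) and wavelength \lambda, contributes to the CSI:
H_d(f, t) = a(f, t)\, e^{-j 2\pi d(t)/\lambda}
```

As the object moves linearly, d(t) changes linearly and the phase rotates at a rate determined by the motion, which is what the model quantifies first for a point-like object before generalizing to a limb.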
- Automatically estimate the user's configuration: to recover position and orientation, the formulas derived from the above model include both position and orientation as parameters, so they can be estimated from the measurements.
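One simple way to read "estimate the parameters from the model's formulas" is as a fit: pick the configuration whose model prediction best matches the observed CSI change. The sketch below is purely illustrative; `model_error` is a hypothetical stand-in for WiAG's actual fit criterion, and the brute-force grid search is an assumption, not the paper's method.

```python
def estimate_configuration(observed, model_error, positions, orientations):
    """Return the (position, orientation) candidate minimizing the
    model's fit error against the observed CSI change. `model_error`
    is a hypothetical callable: (observed, position, orientation) -> float."""
    best, best_err = None, float("inf")
    for p in positions:
        for o in orientations:
            err = model_error(observed, p, o)
            if err < best_err:
                best, best_err = (p, o), err
    return best
```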
- Make WiAG resilient to static changes in the environment: WiAG characterizes the change in CSI measurements rather than the absolute CSI measurements, so the model only captures information about the multipaths affected by moving objects; only the changes induced by dynamic paths are used as input data.
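The "use changes, not absolutes" idea can be sketched as a baseline subtraction: static multipath contributes the same term to every snapshot and cancels out, leaving only paths perturbed by motion. This is a simplified illustration (real CSI is complex-valued and handled per subcarrier; the baseline-capture step is an assumption, not WiAG's exact procedure).

```python
def csi_change(csi_stream, baseline):
    """Per-subcarrier difference between each CSI snapshot and a
    baseline captured while the scene is static; static paths
    contribute approximately zero after subtraction."""
    return [[c - b for c, b in zip(snapshot, baseline)]
            for snapshot in csi_stream]
```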
Handwritten notes