【RFID_paper】Action Sensing

From: https://liudongdong1.github.io/

level: ACM, Embedded Networked Sensor Systems (SenSys), CCF-B
author: Yinggang Yu, Dong Wang, Run Zhao, Qian Zhang (Shanghai Jiao Tong University)
date: 2019.11
keyword:

  • RFID, wireless sensing, ongoing gesture recognition, adversarial learning

Paper: RFID ongoing Gesture


RFID Based Real-time Recognition of ongoing Gesture with Adversarial Learning
Summary
  1. Experiments use one reader and multiple tags; CNNs extract phase and RSSI features separately, the features are concatenated and fed into an LSTM network to obtain a per-step gesture probability distribution vector, and an SVM classifier finally decides which gesture it is.
Research Objective
  • Application Area:
    • gesture-driven applications: gesture input for video games suffers a paramount and unavoidable latency between the completion of a gesture and its recognition
  • Purpose: fuse multimodal RFID data and extract spatio-temporal information to enable a general, pervasive, environment-independent, user-invariant, real-time gesture-driven interactive system
Problem Statement
  • existing RFID-based gesture recognition methods are sensitive to dynamic environments and degrade significantly under user diversity and environment variety
  • recent works on gesture detection are designed to recognize a gesture only after it is completed, so latency remains.

previous work:

  • Gesture Recognition
    • wearable sensor based: wrist-worn devices containing inertial sensors are utilized to recognize eating gestures [28], identify smoking gestures [27], translate sign language [39, 41], and identify fine-grained interactive gestures [12]; they require users to charge the devices. (Note: iOS phones now ship with UWB; could gesture recognition be achieved with UWB?)
    • Camera based: Leap Motion or Kinect are used to build gesture recognition systems that enable sign language translation at both word and sentence levels with high accuracy [9] and continuous sign language recognition with weakly supervised learning [5]; drawbacks are privacy, light sensitivity, and NLOS.
    • Wireless based: ultrasound, WiFi, RFID, and mmWave support active gesture recognition [19, 20, 36], sign language translation [22], keystroke recognition [1], and limb-level gesture detection [3, 6, 29, 44]; these work well in relatively limited, controlled environments and achieve high accuracy for particular users, but performance may degrade in unstable or uncontrolled environments.
  • Ongoing gesture detection:
    • [33] proposes a smart-watch-based early gesture detection technology
    • [14] uses deep learning techniques to predict twenty-five hand gestures online from videos
    • [15] designs computer-vision-based early event detectors which enable sign language translation and emotion recognition
  • Domain Adversarial learning: learn robust representations which are discriminative on source domain while invariant between domains[10, 32]
    • [42] combine CNN and RNN with adversarial learning to extract sleep-specific and subject-invariant features from RF signals to predict sleep stage
    • [24] utilizes multi-task optimization to reduce variance across speakers and keep representations senone-discriminative
    • [31] proposes a robust end-to-end speech recognition framework using generative adversarial networks in a data-driven way.
    • differing from the above, EUIGR takes the ongoing gesture into consideration, and the training objectives of the user domain, environment domain, and gesture domain undertake different tasks, namely classification and sequence labeling respectively ??
Methods
  • RFID Communication Model:

  • Define parameter notations:

  • system overview: ?? How does the discriminator network guide the prediction? Is it just a bidirectional LSTM plus three loss functions? I don't understand this part.

【Question 1】how to fuse multimodal RFID data?

  • Data Collector Module:
    • Unwrapping Phase:
    • Resampling: the tag reply sequence is not uniform in the time domain, because RFID tags respond to the reader randomly and packets are lost due to tag conflicts and noise (how is the resampling frequency set?); a Hampel identifier is employed to filter out abnormal values in the stream before resampling
    • Constructing RFID Clip: sliding window length is 20, sliding step is 10 (the paper does not explain this choice; how long does a gesture take, and what would happen if the window size were set according to the sampling rate?)
    • one reader with n tags: the sliding window data representation
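The Data Collector steps above (Hampel outlier removal, then fixed-size sliding windows) can be sketched in pure Python. The window length of 20 and step of 10 follow the paper; the function names and the 3-MAD threshold are illustrative assumptions:

```python
# Sketch of the Data Collector pipeline: Hampel outlier filtering followed
# by overlapping sliding-window "RFID clips". Assumed details: a 7-sample
# Hampel window (3 each side) and a 3-scaled-MAD threshold.
import statistics

def hampel(x, half_window=3, n_sigmas=3.0):
    """Replace samples deviating > n_sigmas * (1.4826*MAD) from the
    median of their local window with that median."""
    y = list(x)
    for i in range(len(x)):
        lo, hi = max(0, i - half_window), min(len(x), i + half_window + 1)
        window = x[lo:hi]
        med = statistics.median(window)
        mad = statistics.median(abs(v - med) for v in window)
        sigma = 1.4826 * mad
        if sigma > 0 and abs(x[i] - med) > n_sigmas * sigma:
            y[i] = med
    return y

def make_clips(stream, win=20, step=10):
    """Cut a (filtered, resampled) stream into overlapping RFID clips."""
    return [stream[i:i + win] for i in range(0, len(stream) - win + 1, step)]

phase = [0.1 * i for i in range(60)]
phase[25] = 50.0                      # inject an outlier
clean = hampel(phase)
clips = make_clips(clean)
print(len(clips), len(clips[0]))      # 5 clips of length 20
```

With a stream of 60 samples, a window of 20 and a step of 10 yield 5 overlapping clips, matching the 50% overlap implied by the paper's settings.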

  • Feature Extractor Module: $F_R^{(t)}=FE(C^{(t)};\theta_{fe})$, where $\theta_{fe}$ denotes the set of all parameters in the feature extractor FE. ?? How are the features merged and flattened? Is it a 2-layer CNN with (1×d) 2D convolutional kernels? What are the details of the fully connected layer? Could the network be reproduced from this description?

【Question 2】how to recognize a gesture before it is completed?

  • Gesture Sequence Labeler: models the temporal dependencies of the RFID input sequences using an LSTM-based RNN


$$
\begin{aligned}
i_t &= \sigma(W_i\cdot[h_{t-1},x_t]+b_i)\\
f_t &= \sigma(W_f\cdot[h_{t-1},x_t]+b_f)\\
\tilde{C}_t &= \tanh(W_C\cdot[h_{t-1},x_t]+b_C)\\
o_t &= \sigma(W_o\cdot[h_{t-1},x_t]+b_o)\\
C_t &= f_t * C_{t-1} + i_t * \tilde{C}_t\\
h_t &= o_t * \tanh(C_t)
\end{aligned}
$$

(these are the standard equations of an LSTM cell)
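The cell equations can be checked against a minimal pure-Python LSTM step (hidden size 1 for brevity; sigmoid for the gates and tanh for the candidate state, per the standard formulation; the tiny weights are illustrative, not from the paper):

```python
# A minimal pure-Python LSTM cell making the gating explicit.
# Assumed: hidden size 1, 2-d inputs, hand-picked illustrative weights.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dot(w, v):
    return sum(wi * vi for wi, vi in zip(w, v))

def lstm_cell(x_t, h_prev, c_prev, W, b):
    """One step over the concatenation [h_{t-1}, x_t]."""
    z = h_prev + x_t                              # list concatenation
    i = sigmoid(dot(W["i"], z) + b["i"])          # input gate
    f = sigmoid(dot(W["f"], z) + b["f"])          # forget gate
    c_tilde = math.tanh(dot(W["c"], z) + b["c"])  # candidate state
    o = sigmoid(dot(W["o"], z) + b["o"])          # output gate
    c = f * c_prev + i * c_tilde                  # new cell state
    h = o * math.tanh(c)                          # new hidden state
    return h, c

W = {k: [0.5, -0.25, 0.1] for k in "ifco"}
b = {k: 0.0 for k in "ifco"}
h, c = 0.0, 0.0
for x in ([0.2, 1.0], [0.4, -0.3], [0.9, 0.1]):   # toy 2-d input sequence
    h, c = lstm_cell(x, [h], c, W, b)
print(round(h, 4))
```

The forget gate `f` controls how much of the previous cell state survives, which is what lets the labeler carry information across the whole ongoing gesture.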

  • the probabilities of the $K^{th}$ gesture (I did not initially understand this probability distribution formula; it gives the probability of each gesture at each time step):

$$G_p^{(t,k)}=\frac{\exp(h_t W_p+b_p)_k}{\sum_{k}\exp(h_t W_p+b_p)_k}$$

  • Highly general: $\theta_{sl}$ is the parameter set of the gesture sequence labeler; $G_P^{(t)}$ is a K×1 prediction probability vector over all the gestures

$$G_P^{(t)}=LSTM_g(F_R^{(t)};\theta_{sl})$$

Loss Function (the meaning was unclear to me at first: it is the per-step cross-entropy, averaged over the N samples and the $T_n$ steps of each sample):

$$L_g(\theta_{fe},\theta_{sl})=-\frac{1}{N}\sum_{n=1}^{N}\frac{1}{T_n}\sum_{t=1}^{T_n}\sum_{k=1}^{K}G_T^{(t,k)}\log G_P^{(t,k)}$$
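Reading $L_g$ as per-step cross-entropy between the one-hot ground truth $G_T^{(t,k)}$ and the softmax output $G_P^{(t,k)}$, averaged over each sample's $T_n$ steps and the $N$ samples, a pure-Python sketch (function names are mine, not the paper's):

```python
# Sequence cross-entropy in the shape of L_g: 1/N over samples,
# 1/T_n over the steps of each sample, -log of the true-class softmax.
import math

def softmax(logits):
    m = max(logits)                    # subtract max for numerical stability
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def sequence_ce(batch_logits, batch_labels):
    """batch_logits: N samples, each a list of per-step logit vectors;
    batch_labels: the true gesture index k at each step."""
    total = 0.0
    for logits, labels in zip(batch_logits, batch_labels):
        step_loss = sum(-math.log(softmax(l)[k]) for l, k in zip(logits, labels))
        total += step_loss / len(logits)          # 1/T_n over the steps
    return total / len(batch_logits)              # 1/N over the samples

# one sample, two steps, K = 3 gestures, true gesture k = 2 throughout
logits = [[[0.1, 0.2, 2.0], [0.0, 0.1, 3.0]]]
print(round(sequence_ce(logits, [[2, 2]]), 4))
```

With uniform logits the loss is exactly log K, which is a handy sanity check when reproducing the training.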

  • An SVM classifier is used instead of a simple probability threshold to decide when a gesture is recognized.

【Question 3】how to extract environment- and user-invariant features? The data streams of the same gesture performed by diverse users at different positions may differ both spatially and temporally. The features generated by the feature extractor should be as unrelated to users and environments as possible, so two domain discriminators are applied: a user discriminator and an environment discriminator, which map the representations $F_R$ to user predictions and position predictions. User diversity and environment discrepancy are modeled with BLSTM classifiers.

BLSTM ensures the inference about user or position depends on the full sequence with a better capability of temporal modeling.

  • User discriminator: $U_p=BLSTM_u(F_R;\theta_{ud})$
    • Loss Function: $L_u(\theta_{fe},\theta_{ud})=-\frac{1}{N}\sum_{n=1}^{N}\sum_{j=1}^{J}U_T^{(j)}\log U_p^{(j)}$
  • Environment discriminator: $E_p=BLSTM_e(F_R;\theta_{ed})$
    • Loss Function: $L_e(\theta_{fe},\theta_{ed})=-\frac{1}{N}\sum_{n=1}^{N}\sum_{j=1}^{J}E_T^{(j)}\log E_p^{(j)}$

  • optimizing the parameters $\theta_{fe}, \theta_{sl}$

The features extracted from the same kind of gesture performed by different users or in different positions are required to follow the same distribution; the purpose of the two domain discriminators is opposite to the final objective. ?? Does the BLSTM discover relationships across the whole gesture sequence? I still don't fully understand this.
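Putting the three losses together, the training can be read as the standard domain-adversarial saddle-point objective (a reconstruction in the style of [10, 32], not a formula copied from the paper): the feature extractor and sequence labeler minimize the gesture loss while maximizing the discriminator losses, and each discriminator minimizes its own loss:

```latex
\hat{\theta}_{fe},\hat{\theta}_{sl}
  = \arg\min_{\theta_{fe},\theta_{sl}}
    \Big( L_g(\theta_{fe},\theta_{sl})
      - \lambda\big(L_u(\theta_{fe},\hat{\theta}_{ud})
      + L_e(\theta_{fe},\hat{\theta}_{ed})\big) \Big),
\qquad
\hat{\theta}_{ud} = \arg\min_{\theta_{ud}} L_u(\hat{\theta}_{fe},\theta_{ud}),
\quad
\hat{\theta}_{ed} = \arg\min_{\theta_{ed}} L_e(\hat{\theta}_{fe},\theta_{ed})
```

This is why the discriminators' objective looks "opposite" to the final one: good discriminators provide the gradient signal that pushes $\theta_{fe}$ toward user- and environment-invariant features.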

Evaluation


  • General Performance:

  • Comparison to Other Methods:

  • Data Fusion Influence:

  • UI Model: training method: the User Invariant (UI) model is trained with samples from all but one volunteer and tested with samples from the remaining volunteer, repeating this process for every distinct volunteer. A question: in this leave-one-out setup, are any of the test subject's samples included in training? The paper does not state how training and test samples are split.

Distribution of the features during UI training:

  • EI Model: all samples are collected from gestures performed by several users at 12 positions over three weeks. Samples of two users from the same position are in different user domains but in the same environment, and are regarded as performed by the same participant

  • Real Time:

Notes (to study further)
  • RFID low-cost ,mini-size and battery-free that widely employed
  • t-distributed stochastic neighbor embedding (t-SNE) visualization: a nonlinear dimensionality-reduction algorithm, well suited to reducing high-dimensional data to 2 or 3 dimensions for visualization. http://www.datakit.cn/blog/2017/02/05/t_sne_full.html
  • What are the advantages of a BLSTM network??
  • Hampel: applies Hampel filtering to an input vector x to detect and remove outliers. For each sample of x, the function computes the median of a window consisting of the sample and the six surrounding samples (three on each side), and estimates the standard deviation of each sample about its window median using the median absolute deviation. If a sample differs from the median by more than three standard deviations, it is replaced with the median. If x is a matrix, hampel treats each column as an independent channel.
  • papers 22, 3, 6, 9, 24
  • LLRP [7]
  • RFID is widely used in activity recognition thanks to stable low-level physical characteristics, such as phase and RSS, that intuitively delineate movements
  • gesture recognition builds a friendly and straightforward bridge between human and computer compared with text-based and graphical user interfaces.

level: CCF-B, IEEE International Conference on Sensing, Communication and Networking (SECON)
author: Shigeng Zhang, Chengwei Yang
date: 2016
keyword:


Paper: ReActor


ReActor:Real-time and Accurate Contactless Gesture Recognition with RFID
Summary
  1. uses machine learning to distinguish different gestures instead of DTW: the statistics of the signal profile characterize coarse-grained features, and the wavelet (transform) coefficients of the signal profile characterize fine-grained local features
Problem Statement

previous work:

  • RF-Finger[12] uses 35 tags to classify different hand gestures by convolutional neural networks
  • Contact-based Gesture Recognition
    • uWave[20] uses a single three-axis accelerometer sensor to recognize personalized gestures.
    • FEMD[21] uses the Kinect sensor to classify ten different gestures
    • The Magic Ring [7] recognizes different gestures by attaching a ring to the user’s finger
    • Femo[14] recognizes the user's activity during body exercise and assesses the quality of exercise movements
    • ShopMiner[17] and CBid[22] monitor customers' behaviors by attaching RFID tags to goods in the supermarket and recognizing different behavior patterns by tracking the motions of tags
    • [23] combine Kinect-based activity recognition and RFID-based user identification to improve the quality of augmented reality application
    • [16] propose an approach to detecting the user’s coarse-grained gesture by attaching tags to goods ,which supports online commenting of goods quality
    • IDSense[24] enables smart interaction between the user and objects by developing an activity detection system based on RFID
    • [15, 25] use deep learning to recognize users' body activities
  • Contactless Gesture Recognition
    • WiGest[26] detects basic primitive gesture in a device-free manner
    • E-eyes detects user’s activity at home based on channel state information(CSI)
    • WiFinger[18] detects fine-grained hand gestures based on CSI changes
    • GRfid[11]uses dynamic time warping to match different gestures
Methods
  • Environment:

  • system overview:

  • Signal preprocessing
    • phase unwrapping
    • phase ambiguity processing
    • signal smoothing using Savitzky-Golay (S-G) filter [32]
    • signal normalization (what is the purpose of this step?)
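One plausible reading of the normalization step, answering the question above: z-score normalization per tag maps the profile to zero mean and unit variance, which magnifies gesture-induced changes relative to the static background level. A sketch, assuming plain standardization rather than the paper's exact mapping:

```python
# Z-score normalization of a signal profile: subtract the mean, divide
# by the (population) standard deviation. Assumed interpretation of the
# paper's normalization step; the RSSI values below are made up.
import statistics

def znormalize(signal):
    mu = statistics.fmean(signal)
    sd = statistics.pstdev(signal)
    if sd == 0:
        return [0.0] * len(signal)    # flat signal: nothing to magnify
    return [(v - mu) / sd for v in signal]

rssi = [-62.0, -62.1, -61.9, -55.0, -54.8, -62.0]   # a gesture-induced dip
print([round(v, 2) for v in znormalize(rssi)])
```

After normalization the background hovers near a common baseline regardless of the tag's absolute RSSI, so profiles from different tags and distances become comparable.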

  • Gesture Segmentation:
    • Varri method

  • Attribute Extract

    • Static Attributes:
      • the mode, median, the first quartile, the third quartile, and the arithmetic mean reflect the central tendency of the data
      • the max, min, variance, standard deviation, the third-order central moment that reflect dispersion of the data
      • skewness and kurtosis, which reflect the distribution shape
    • Wavelet Decomposition Coefficient Attributes:
      • Data interpolation: handles the non-uniform sampling problem
      • Wavelet coefficient calculation: use the Daubechies wavelet as the wavelet base to decompose the signal profile of each gesture
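The paper uses the Daubechies wavelet as the wavelet base. As a self-contained illustration of how wavelet decomposition yields fine-grained local attributes, here is a multi-level Haar decomposition (Haar is db1, the simplest Daubechies wavelet; the profile values are made up):

```python
# One-level-at-a-time Haar wavelet decomposition: each level splits the
# signal into low-pass (approximation) and high-pass (detail) coefficients.
# The detail coefficients localize abrupt, fine-grained signal changes.
import math

def haar_level(signal):
    """One decomposition level; signal length must be even."""
    s = math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal), 2)]
    return approx, detail

def haar_decompose(signal, levels):
    """Return the per-level detail coefficients plus the final
    approximation, usable together as an attribute vector."""
    details = []
    for _ in range(levels):
        signal, d = haar_level(signal)
        details.append(d)
    return details, signal

profile = [1.0, 1.0, 1.0, 1.0, 4.0, 4.0, 1.0, 1.0]   # a local bump
details, approx = haar_decompose(profile, 2)
print(details, approx)
```

The level-1 details are all zero because adjacent samples are equal, while the level-2 details pick out exactly where the bump rises and falls: that locality is what makes wavelet coefficients good fine-grained attributes.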

Evaluation
  • Environment:
    • Dataset:
  • the Gestures to Recognise

  • the Impact of Tag Number
  • Recognition Latency
  • Impact of Gesture Speed
  • Impact of Operation Distance
Conclusion
Notes (to study further)
  • Varri method [29] segments different activities: it uses a sliding window that combines an amplitude measure and a frequency measure of the signal
  • Savitzky-Golay (S-G) filter: a method based on local polynomial least-squares fitting in the time domain; it preserves the shape and width of the raw signal while filtering out noise
  • Signal normalization: can magnify the signal changes caused by gestures and meanwhile suppress the impact of background signals by mapping them to values around zero
  • study how the wavelet transform algorithm works and what it contributes here

level: CCF-C, WCNC (IEEE Wireless Communications and Networking Conference)
author: Dong Wang, Shanghai Jiao Tong University
date: 2018


Paper: SGRS


SGRS: A Sequential Gesture Recognition System using COTS RFID
  • system overview:

Paper1《RF-Based Fall Monitoring Using Convolutional Neural Networks》

cited: keyword: Fall Detection, Device-free, Deep learning

Phenomenon&Challenge:
  1. These revelations have led to new passive sensors that infer falls by analyzing Radio Frequency (RF) signals in homes.
  2. They typically train and test their classifiers on the same people in the same environments, and cannot generalize to new people or new environments
  3. they cannot separate motions from different people and can easily miss a fall in the presence of other motions.
RelatedWork:
  1. proposed systems that transmit a low power RF signal and analyze its reflections off people’s bodies to infer falls [4, 15, 35, 41, 45, 56, 58, 59].

  2. State-of-the-art RF-based fall detection systems can be divided into two categories: The first category is based on Doppler radar [15, 22, 45]. These solutions exploit the relationship between the Doppler frequency and motion velocity. They associate falls with a spike in the Doppler frequency due to a fast fall motion. The second category is based on WiFi channel state information (CSI) [41, 56, 58, 59]. While this category differs in its input signal, it typically relies on the same basic principle.

  3. convolutional neural networks (CNNs) [31], which have demonstrated the ability to extract complex patterns from various types of signals, such as images and videos [16, 20, 30, 51, 52, 57, 60, 62, 63].

  4. wearable devices include accelerometers [12, 33], smart phones [1, 12], RFID [10], etc. Among non-wearable technologies, cameras [32, 39] are accurate but invade people's privacy; audio- and vibration-based sensors [5, 34] have relatively low accuracy due to interference from the environment [38]; pressure mats and pulling cords work only when the fall occurs near the installed device [37].

  5. past papers on RF-based fall monitoring [15, 35, 41, 45, 56, 58, 59]

  6. Convolutional Neural Networks (CNN) have been the main workhorse of recent breakthroughs in understanding images [30], videos [28, 55] and audio signals [7, 53]: object detection [30], image segmentation [43], speech synthesis [53], machine translation [26], and AlphaGo [47]

Contribution:
  1. Dealing with complex falls and fast non-fall movements
  2. Generalization to new homes and new people:
  3. Detect falls in the presence of other motion
  4. first convolutional neural network architecture for RF-based fall detection. Our CNN design extracts complex spatio-temporal information about body motion from RF signals. As a result, it can characterize complex falls and fast non-fall motions, separate a fall from other motions in the environment, and generalize to new environments and people.
  5. multi-function design that combines fall detection with the ability to infer stand-up events and fall duration
  6. an extensive empirical evaluation with multiple sources of motion
Innovation&consolution:
  1. an RF-based fall detection system that uses convolutional neural networks governed by a state machine
  2. works with new people and environments unseen in the training set
  3. FMCW can separate RF reflections based on distance from the reflecting body, and the vertical and horizontal arrays separate reflections based on their elevation and azimuthal angles
Chart&Analyse:
  1. combine two CNNs: the first detects a fall event while the second detects a stand-up event. The two networks are coordinated via a state machine that tracks the transition of a person from a normal state to a fall state, and potentially back to a normal state.
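The coordination of the two networks can be sketched as a tiny state machine; the boolean detector outputs below are stand-ins for the two CNNs' predictions, and the state names are my own:

```python
# A minimal state machine coordinating a fall detector and a stand-up
# detector: the fall CNN is consulted while the person is "normal",
# the stand-up CNN while they are "fallen".
STATE_NORMAL, STATE_FALLEN = "normal", "fallen"

def track(events):
    """events: sequence of (fall_detected, standup_detected) booleans,
    one pair per time step. Returns the logged transitions."""
    state, log = STATE_NORMAL, []
    for fall, standup in events:
        if state == STATE_NORMAL and fall:
            state = STATE_FALLEN
            log.append("fall")
        elif state == STATE_FALLEN and standup:
            state = STATE_NORMAL
            log.append("stand-up")
    return log

events = [(False, False), (True, False), (True, False), (False, True)]
print(track(events))   # one fall followed by one stand-up
```

Note how the repeated `True` fall detection at the third step is ignored because the machine is already in the fallen state; this is what lets the system report fall duration (time between the two transitions) rather than a stream of duplicate alarms.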


Paper6《TagFree Activity Identification with RFIDs》

cited: keyword: Network mobility; Sensor networks

Phenomenon&Challenge:
  1. the accuracy of the readings can be noticeably affected by multipath, which unfortunately is inevitable in an indoor environment and is complicated with multiple reference tags.
  2. human activity identification has become a key service in many IoT applications, such as healthcare and smart homes [1].
  3. the peak amplitudes may dramatically change in a short time, which could be filtered out as noises for activity identification.
RelatedWork:
  1. TagFree can further facilitate various smart home applications, e.g., activity-based temperature adjustment in homes or exercise assistant equipment in gyms.

  2. Unfortunately, the activity information inferred from the raw RSSI can be quite unreliable and inaccurate for small movement.

  3. Radio Frequency Identification (RFID) is a promising technology due to its low cost, small form size, and batterylessness, making it widely used in a range of mobile applications, including detection of human-object interaction [25], people/object tracking [31], and more complex activity identification

  4. previous solutions exploited the changes of RSSI (received signal strength indicator) [35][5][19] incurred by human actions. Yet RSSI is insensitive to small body movement, and thus it is difficult to achieve high-precision identification

  5. LSTM networks have been successfully applied to many tasks such as handwriting [9] and speech recognition [10].

  6. RF-compass [24] presented a WiFi-based approach to classify a predefined set of nine gestures; E-eyes [28] proposed a location-oriented activity identification system, which utilized WiFi signals to recognize in-home human activities; Ding et al. [6] further developed FEMO, which uses the frequency shifts of the movements to determine what exercise a user is performing.

Innovation&consolution:
  1. TagFree gathers massive angle information as spectrum frames from multiple tags, and preprocesses them to extract key features. It then analyzes their patterns through a deep learning framework (Convolutional Neural Network (CNN) [15] and Long Short Term Memory (LSTM) network [13])
  2. Our experiments suggest that both the backscattered signal power and angle are highly related to human activities, impacting multiple paths with different levels.
  3. DataProcessing
    1. Phase calibration: different frequencies induce different initial phase-offsets at the reader. A mechanism is accordingly designed to calibrate the phase difference between frequencies
    2. Multipath Decoupling: ??? in practice the AoA estimation may not work well because of the multipath effect. The M higher peaks are of great power [20] and correspond to the estimated directions of arrival of the signal source, with angles θ1, . . . , θM
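The peak-picking part of multipath decoupling can be illustrated with a minimal sketch that keeps the M strongest local maxima of a toy angle spectrum (a stand-in for a MUSIC-style spectrum; the values and function name are illustrative, not the paper's algorithm):

```python
# Keep the M strongest local maxima of an (angle -> power) spectrum as
# candidate directions of arrival. Toy data; real systems would compute
# the spectrum from antenna-array phase differences.
def top_peaks(spectrum, m):
    """spectrum: list of (angle_deg, power). Return the angles of the
    m local maxima with the highest power, strongest first."""
    peaks = []
    for i in range(1, len(spectrum) - 1):
        if spectrum[i - 1][1] < spectrum[i][1] > spectrum[i + 1][1]:
            peaks.append(spectrum[i])
    peaks.sort(key=lambda p: p[1], reverse=True)
    return [angle for angle, _ in peaks[:m]]

powers = [1, 2, 9, 3, 1, 4, 7, 2, 1, 1, 5, 12, 4, 1, 2, 3, 1, 0]
spectrum = list(zip(range(0, 180, 10), powers))
print(top_peaks(spectrum, 2))    # the two strongest arrival directions
```

Keeping only the M strongest peaks is what discards the weak multipath echoes while retaining the dominant propagation paths.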
Chart&Analyse:

Shortcoming:
  1. Multiple emitter location and signal parameter estimation (paper to read)
  2. #A novel connectionist system for unconstrained handwriting recognition
  3. #Speech recognition with deep recurrent neural networks
  4. A platform for free-weight exercise monitoring with RFIDs
  5. robot object manipulation using RFIDs
  6. device-free location-oriented activity identification using fine-grained WiFi signatures
  7. Beyond short snippets: Deep networks for video classification.
  8. RF-IDraw: virtual touch screen in the air using RF signals
  9. Relative Localization of RFID Tags using Spatial-Temporal Phase Profiling
  10. Deep Learning for RFID-Based Activity Recognition
  11. 22, 25, 6

Paper7《Through-Wall Human Pose Estimation Using Radio Signals》

Phenomenon&Challenge:
  1. infeasible when the person is fully occluded, behind a wall or in a different room.
  2. In particular, there is no labeled data for this task. It is also infeasible for humans to annotate radio signals with keypoints.
  3. some body parts may not reflect much RF signals towards our sensor, and hence may be de-emphasized or missing in some heatmaps, even though they are not occluded
RelatedWork:
  1. Computer Vision: Human pose estimation from RGB images generally falls into two main categories: top-down and bottom-up methods. Top-down methods [16, 14, 29, 15] first detect each person in the image, and then apply a single-person pose estimator to each person to extract keypoints. Bottom-up methods [10, 31, 20], on the other hand, first detect all keypoints in the image, then use post-processing to associate the keypoints belonging to the same person
  2. Wireless Systems: Recent years have witnessed much interest in localizing people and tracking their motion using wireless signals. The literature can be classified into two categories. The first category operates at very high frequencies (e.g., millimeter wave or terahertz) [3]. The second category uses lower frequencies, around a few GHz, and hence can track people through walls and occlusions.
Contribution:
  1. the system uses synchronized wireless and visual inputs, extracts pose information from the visual stream, and uses it to guide the training process.
  2. We first perform non-maximum suppression on the keypoint confidence maps to obtain discrete peaks of keypoint candidates. To associate keypoints of different persons, we use the relaxation method proposed by Cao et al. [10] and use Euclidean distance for the weight of two candidates
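The first post-processing step above, non-maximum suppression on the confidence maps, can be sketched as a simple 2D NMS over a toy map (the map values, threshold, and function name are illustrative):

```python
# 2D non-maximum suppression: keep cells whose confidence exceeds a
# threshold and strictly dominates all 8 neighbours. Each surviving
# cell is a discrete keypoint candidate.
def nms_2d(conf, threshold):
    """conf: 2D list of confidences. Returns (row, col, value) peaks."""
    h, w = len(conf), len(conf[0])
    peaks = []
    for y in range(h):
        for x in range(w):
            v = conf[y][x]
            if v < threshold:
                continue
            neigh = [conf[j][i]
                     for j in range(max(0, y - 1), min(h, y + 2))
                     for i in range(max(0, x - 1), min(w, x + 2))
                     if (j, i) != (y, x)]
            if all(v > n for n in neigh):
                peaks.append((y, x, v))
    return peaks

conf = [[0.1, 0.2, 0.1],
        [0.2, 0.9, 0.2],
        [0.1, 0.2, 0.8]]
print(nms_2d(conf, 0.5))
```

Note that the 0.8 cell is suppressed despite exceeding the threshold, because its 0.9 neighbour dominates it: that is exactly how NMS collapses a blurry confidence blob into one candidate.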
Innovation&consolution:
  1. we make the network learn to aggregate information from multiple snapshots of RF heatmaps so that it can capture different limbs and model the dynamics of body movement.
Chart&Analyse:

Shortcoming&confusing:
  1. First, the human body is opaque at the frequencies of interest – i.e., frequencies that traverse walls

  2. the operating distance of a radio is dependent on its transmission power

  3. less activity recognize

  4. 待读论文如下

  5. Realtime multiperson 2D pose estimation using part affifinity fifields

  6. 3D convolutional neural networks for human action recognition

  7. Learning spatiotemporal features with 3D convolutional networks.

  8. Temporal segment networks: Towards good practices for deep action recognition.

  9. Microsoft COCO: Com-mon objects in context.

  10. Realtime multiperson 2D pose estimation using part affifinity fifields

Paper《Sharing the Load: Human-Robot Team Lifting Using Muscle Activity》

RelatedWork:
  1. Human-Robot Interaction using modalities such as vision, speech, force sensors, and gesture tracking datagloves [8], [9], [10], [11], [12].
  2. Using Muscle Signals for Robot Control EMG can yield effective human-robot interfaces, but also demonstrate associated challenges such as noise, variance between users, and complex muscle dynamics.
Contribution:
  1. an algorithm to continuously estimate a lifting setpoint from biceps activity, roughly matching a person’s hand height while also providing a closed-loop control interface for quickly commanding coarse adjustments;
  2. a plug-and-play rolling classifier for detecting up or down gestures from biceps and triceps activity
  3. an end-to-end system integrating these pipelines to collaboratively lift objects with a robot using only muscle activity associated with the task;
Innovation&consolution:
  1. The setpoint algorithm aims to estimate changes in the person’s hand height while also creating a task-based control interface.

Chart&Analyse:

Shortcoming&Question:
  1. I don't understand the setpoint algorithm.
  2. After feature extraction, what do I get? How are the up and down categories detected from muscle activity???

Paper《Emotion Recognition using Wireless Signals》

Phenomenon&Challenge:
  1. Emotion recognition is an emerging field that has attracted much interest from both the industry and the research community [52, 16, 30, 47, 23].

  2. measure inner feelings [14, 48, 21]

  3. Recent research has shown that such RF reflections can be used to measure a person’s breathing and average heart rate without body contact [7, 19, 25, 45, 31].

  4. RF signals reflected off a person’s body are modulated by both breathing and heartbeats.

  5. heartbeats in the RF signal lack the sharp peaks which characterize the ECG signal, making it harder to accurately identify beat boundaries

  6. the difference in inter-beat intervals (IBI) is only a few tens of milliseconds.

  7. the shape of a heartbeat in RF reflections is unknown and varies depending on the person’s body and exact posture with respect to the device

RelatedWork:
  1. Existing approaches for inferring a person’s emotions either rely on audiovisual cues, such as images and audio clips [64, 30, 54], or require the person to wear physiological sensors like an ECG monitor [28, 48, 34, 8].
  2. Emotion Recognition: they extract emotion-related signals (e.g., audio-visual cues or physiological signals); second, they feed these signals into a classifier in order to recognize emotions. Existing approaches for extracting emotion-related signals fall under two categories: audiovisual techniques and physiological techniques.
  3. RF-based Sensing: it transmits an RF signal and analyzes its reflections to track user locations [5], gestures [6, 50, 56, 10, 61, 3], activities [59, 60], and vital signs [7, 19, 20].
  4. past work that does not require users to hold their breath has an average error of 30-50 milliseconds [13, 40, 27]
Contribution:
  1. demonstrates the feasibility of emotion recognition using RF reflections off one’s body.
  2. introduces a new algorithm for extracting individual heartbeats from RF reflections off the human body.
  3. Mitigating the Impact of Breathing && Heartbeat Segmentation
  4. EMOTION CLASSIFICATION: Feature Selection and Classification
Innovation&consolution:

Chart&Analyse:

Code:
Shortcoming&Question:
  1. Skipped the evaluation section; did not read it in detail.

  2. what are IBI features???

  3. papers to read: 1-norm support vector machines. Advances in neural information processing systems

  4. machine emotional intelligence: Analysis of affective physiological state

  5. Comparison of detrended fluctuation analysis and spectral analysis for heart rate variability in sleep and sleep apnea 2003

  6. Sample entropy analysis of neonatal heart rate variability 2002

  7. Emotion recognition based on physiological changes in music listening 2008

  8. Physiological signals based human emotion recognition: a review 2011

  9. An introduction to variable and feature selection 2003

Paper《Interacting with Soli: Exploring Fine-Grained Dynamic Gesture Recognition in the Radio-Frequency Spectrum》

cited: keyword: gesture recognition; wearables; deep learning; radar sensing

Phenomenon&Challenge:
  1. sensing in the electro-magnetic spectrum eschews spatial information for temporal resolution. Capturing a superposition of reflected energy from multiple parts of the hand such as the palm or fingertips, the signal is therefore not directly suitable to reconstruct the spatial structure or the shape of objects in front of the sensor

RelatedWork:
  1. Google Soli resolves motion at a very fine level and allows for segmentation in range and velocity spaces rather than image space.
Contribution:
  1. a novel end-to-end trained stack of convolutional and recurrent neural networks (CNN/RNN) for RF signal based dynamic gesture recognition
  2. an in-depth analysis of sensor signal properties and highlight inherent issues in traditional frame-level approaches
Chart&Analyse:

Paper《Compressive Representation for Device-Free Activity Recognition with Passive RFID Signal Strength》

cited: keyword: —Activity recognition, RFID, compressive sensing

Phenomenon&Challenge:
  1. RSSI is quite complicated in real environments due to signal reflection, diffraction, and scattering, especially for passive RFID tags.
RelatedWork:
  1. many efforts have been made to learn human activities by mining a broad range of signal sources, such as videos and images [6], radio frequency of wearable or wireless sensors [7], [8], Wi-Fi [9], and even object vibration fluctuations [10].
  2. Fall Detection
  3. Sleep Monitoring
  4. Ambulatory Monitoring.Posture recognition and monitoring are critical in the medical care.
Contribution:
  1. The system interprets what a person is doing by deciphering signal fluctuations using radio-frequency identification (RFID) technology and machine learning algorithms

  2. compressive sensing, dictionary-based approach that can learn a set of compact and informative dictionaries of activities using an unsupervised subspace decomposition.

  3. propose a dictionary learning approach to uncover the structural information between RSSI signals of different activities by learning compact and discriminative dictionaries per activity

  4. model each predefined human activity by learning discriminative dictionaries and the corresponding sparse coefficients using features extracted and selected from raw RSSI streams.

  5. develop a compressive sensing dictionary-based learning approach to uncover structural information among RFID signals of different activities.

  6. propose a lightweight but effective feature selection method to assist the extraction of more discriminative signal patterns from noisy RFID streams.

Chart&Analyse:

  1. the variations of signal strength reflect different patterns, which can be exploited to distinguish different activities.

  2. But I did not fully understand the system pipeline: individual segments are processed with a sliding window, yet it is unclear how the window can segment an activity without knowing its starting point.
Shortcoming&Confusion:
  1. Sparse coding is a common technique to model data vectors as sparse linear combinations (i.e., sparse representations) of basis elements, and has been widely used in image processing and computer vision applications [23], [24], [25]. I don't yet understand sparse coding.
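As a concrete picture of sparse coding: a signal is approximated by a sparse combination of dictionary atoms. A tiny matching-pursuit sketch over a hand-made orthonormal dictionary (an illustration of the general idea, not the paper's learned dictionaries):

```python
# Matching pursuit: greedily pick the dictionary atom most correlated
# with the residual, record its coefficient, subtract, repeat. With few
# iterations the result is a sparse representation of the signal.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def matching_pursuit(signal, atoms, n_iters):
    """Atoms are assumed unit-norm. Returns {atom_index: coefficient}."""
    residual = list(signal)
    coeffs = {}
    for _ in range(n_iters):
        scores = [dot(residual, a) for a in atoms]
        k = max(range(len(atoms)), key=lambda i: abs(scores[i]))
        coeffs[k] = coeffs.get(k, 0.0) + scores[k]
        residual = [r - scores[k] * a for r, a in zip(residual, atoms[k])]
    return coeffs

atoms = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0],
         [0.0, 0.0, 1.0]]                      # trivial orthonormal dictionary
signal = [0.0, 3.0, 0.5]
print(matching_pursuit(signal, atoms, 2))      # sparse: two atoms suffice
```

In the paper's setting, each activity has its own learned dictionary; a new RSSI segment is then classified by which dictionary reconstructs it best with a sparse code.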

Paper《RF-Based 3D Skeletons》

cited: keyword: RF Sensing, 3D Human Pose Estimation, Machine Learning, Neural Networks, Localization, Smart Homes

Phenomenon&Challenge:
  1. images have high spatial resolution whereas RF signals have low spatial resolution, even when using multi-antenna systems
  2. only a few body parts are visible to the radio [1]
  3. Existing datasets for inferring 3D poses from images are limited to one environment or one person (e.g., Human3.6M [7])
RelatedWork:
  1. Novel algorithms have led to accurate localization within tens of centimeters [19, 34]. Advanced sensing technologies have enabled people tracking based on the RF signals that bounce off their bodies, even when they do not carry any wireless transmitters [2, 17, 35]. Various papers have developed classifiers that use RF reflections to detect actions like falling, walking, sitting, etc. [21, 23, 32].

  2. **Wireless Systems:** Different papers localize the people in the environment [2, 17], monitor their walking speed [15, 31], track their chest motion to extract breathing and heartbeats [3, 39, 41], or track the arm motion to identify a particular gesture [21, 23].

  3. **Computer Vision:** 2D pose estimation has achieved remarkable success recently [6, 8, 11, 13, 16, 22, 33]. Advances in 3D human pose estimation remain limited due to the difficulty and ambiguity of recovering 3D information from 2D images.

Innovation&consolution:
  1. RF-Pose3D provides a significant leap in RF-based sensing and enables new applications in gaming, healthcare, and smart homes
  2. model the relationship between the observed radio waves and the human body, as well as the constraints on the location and movement of different body parts.
  3. Sensing the 3D Skeleton: common deep learning platforms (e.g., PyTorch, TensorFlow) do not support 4D CNNs, so we leverage the properties of RF signals to decompose 4D convolutions into a combination of 3D convolutions performed on two planes and the time axis. We also decompose CNN training and inference to operate on those two planes.
  4. Scaling to Multiple People: one option is to run past localization algorithms, locate each person in the scene, and zoom in on the signals from that location. The drawbacks of such an approach are: 1) localization errors will lead to errors in skeleton estimation, and 2) multipath effects can create fictitious people. Instead of zooming in on people in the physical space, the network first transforms the RF signal into an abstract domain that condenses the relevant information, then separates the information pertaining to different individuals in that abstract domain.
  5. Testing: given an image of people, OpenPose [6] identifies the pixels that correspond to their keypoints. We develop a coordinated system of 12 cameras, collect 2D skeletons from each camera, and design an optimization problem based on multi-view geometry to find the 3D location of each keypoint of each person.
Chart&Analyse:

Shortcoming&Confusion:
  1. OpenPose [6]
  2. I only understand the rough idea; the concrete design details are unclear to me, and this is still far from a faithful reproduction.

Paper《RF-Dial: an RFID-based 2D Human-Computer Interaction via Tag Array 》

Contribution:
  1. propose a novel scheme of 2D human-computer interaction, by attaching a tag array on the surface of an ordinary object, thus turning it into an intelligent HCI device

  2. to track the rigid transformation, including translation and rotation, we build a geometric model to depict the relationship between the phase variations of the tag array and the rigid transformation of the tagged object. By referring to the fixed topology of at least two tags from the tag array, we are able to accurately estimate the 2D rigid body motion of the object

  3. implemented a prototype system of RF-Dial with COTS RFID and evaluated its performance in a real environment
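The geometric idea in contribution 2 can be sketched independently of the RFID phase model: given the positions of at least two tags with fixed topology before and after motion, the 2D rigid transformation (rotation angle and translation) has a closed-form least-squares solution. This Kabsch-style fit is an illustrative stand-in, not the paper's exact algorithm, which works from phase variations:

```python
import numpy as np

def rigid_transform_2d(p, q):
    """Recover the rotation angle theta and translation t mapping tag
    positions p (n x 2) to q (n x 2), assuming a rigid 2D motion."""
    pc, qc = p.mean(axis=0), q.mean(axis=0)
    P, Q = p - pc, q - qc                       # center both point sets
    H = P.T @ Q                                 # 2x2 cross-covariance
    # closed-form 2D rotation minimizing ||Q - P R^T||
    theta = np.arctan2(H[0, 1] - H[1, 0], H[0, 0] + H[1, 1])
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    t = qc - R @ pc
    return theta, t
```

With only two tags the fit is exact, which matches the paper's claim that two tags with fixed topology suffice for 2D translation plus rotation.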

Innovation&consolution:
Chart&Analyse:

Paper《Multi-Target Intense Human Motion Analysis and Detection Using Channel State Information》

lab:Key Laboratory for Ubiquitous Network and Service Software of Liaoning Province, School of Software,

meeting: In Proceedings of the 2017 IEEE 19th International Conference on e-Health Networking, Applications and Services (Healthcom), 12–15 October 2017.

cited: keyword:

Phenomenon&Challenge:
  1. intense human motion usually has the characteristics of intensity, rapid change, irregularity, large amplitude, and continuity
RelatedWork:
  1. Camera-based human motion detection: crowd counting, gesture recognition [3], target tracking [4], violence detection
  2. Wi-Fi-based passive human detection: many research studies have realized passive human detection by leveraging the variance of the Received Signal Strength Indicator (RSSI) at the receiver [8–11].
  3. Wi-Fi-Based Activity Recognition
Contribution:
  1. finding out the pattern of the relationship between human motion and CSI variation. Then, we extract features from CSI to depict different human motions, and use machine learning methods to detect intense human motion among human activities
  2. analyzed the signal variation difference under LOS and NLOS conditions, and then identify the current wireless link status.
  3. a human motion detection system which can be deployed on Wi-Fi APs
Innovation&consolution:
Chart&Analyse:

Code:
Shortcoming&Confusion:
  1. Position-independent indicator:
  2. Multiple targets settings:
  3. People counting:
  4. Training-free human motion detection

Paper《Enabling Contactless Detection of Moving Humans with Dynamic Speeds Using CSI》

RelatedWork:
  1. RSS-based detection
  2. CSI-based detection
  3. Detection as prerequisite.
Contribution:
  1. passive human detection leveraging full information of CSI.
  2. a novel unified feature using the eigenvalue of the correlation matrix of CSI.
    The feature holds excellent properties for device-free detection due to its stability for both amplitude and phase and irrelevance to specific power parameters that vary over different links and over time and space.
  3. space diversity provided by multi-antennas in modern MIMO communicating systems to enable more accurate and robust detection
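Contribution 2's eigenvalue feature can be sketched as follows: stack a window of CSI vectors, form the sample correlation matrix across subcarriers, and take its largest eigenvalue. Human motion adds a strongly correlated component across subcarriers, inflating that eigenvalue (the normalization details here are assumptions):

```python
import numpy as np

def csi_eigen_feature(csi_window):
    """Detection feature: the largest eigenvalue of the sample correlation
    matrix of a CSI window (rows: time samples, columns: subcarriers)."""
    X = csi_window - csi_window.mean(axis=0)      # remove the static offset
    R = (X.conj().T @ X) / X.shape[0]             # sample correlation matrix
    return float(np.linalg.eigvalsh(R)[-1].real)  # eigvalsh sorts ascending
```

For an empty room the windows contain only noise and the feature stays small; any common fluctuation across subcarriers (a moving person) makes it jump, which is what makes the feature power-independent and link-independent.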
Innovation&consolution:
  1. using linear transformation on phase information of CSI, we apply phase differences across antennas as a new feature.
Chart&Analyse:

Paper《HuAc: Human Activity Recognition Using Crowdsourced WiFi Signals and Skeleton Data》

RelatedWork:
  1. Kinect-based activity recognition: skeleton joints overlapping and position-dependence factors.
  2. WiFi-based activity recognition: the WiFall system [2] detects fall behavior by learning a specific CSI pattern. E-eyes [9] recognizes walking activity and in-place activity by adopting the moving variance of CSI and a fingerprint technique. CARM [10] shows the correlation between CSI values and human activity by constructing CSI-speed and CSI-activity models.
Contribution:
  1. We propose the HuAc system to recognize human activity and also construct a WiFi-based activity recognition dataset named WiAR as a benchmark to evaluate the performance of existing activity recognition systems. We use the kNN, Random Forest, and Decision Tree algorithms to verify the effectiveness of the WiAR dataset.
  2. We detect the start and end of the activity using the moving variance of CSI. Moreover, we leverage the K-means algorithm to cluster effective subcarriers according to each subcarrier's sensitivity and improve the robustness of activity recognition.
  3. We develop a selection method of skeleton joints based on KARD's work, named SSJ; it considers the spatial relationship and the angle of adjacent joints as auxiliary information for human activity recognition to improve the accuracy of tracking.
  4. We implement a fusion framework of CSI and skeleton data to sense the activity and address the limitations of CSI-based and skeleton-based activity recognition, respectively. Experimental results show that HuAc achieves an accuracy greater than 93%.
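Contribution 2's start/end detection via moving variance can be sketched on a 1-D CSI stream: the variance is near zero while the channel is static and rises as soon as an activity perturbs it (the window size and threshold below are illustrative assumptions):

```python
import numpy as np

def activity_bounds(signal, window=20, threshold=0.05):
    """Sketch of moving-variance segmentation: return the (start, end)
    sample indices where the windowed variance exceeds the threshold,
    or None if no activity is detected."""
    # pad the front so var[i] covers the window ending at sample i
    pad = np.concatenate([np.full(window - 1, signal[0]), signal])
    var = np.array([pad[i:i + window].var() for i in range(len(signal))])
    active = np.where(var > threshold)[0]
    if active.size == 0:
        return None
    return int(active[0]), int(active[-1])
```

The detected bounds lag the true ones by up to one window length, which is why the window size trades off latency against robustness to noise spikes.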
Innovation&consolution:
Chart&Analyse:

Code:
Shortcoming&Confusion:
  1. Data fusion: the balance between CSI-based and Kinect-based recognition
  2. Extending to Multiple People Activity Recognition
  3. Extending to Shadow Recognition

Paper《In-Air Gesture Interaction: Real Time Hand Posture Recognition Using Passive RFID Tags》

keyword: **author:**Ning Ye (yening@njupt.edu.cn) and Reza Malekian

Phenomenon&Challenge:
  1. smart home: control appliances at home [1], which reduces the dependence on remote controllers and mobile terminals.
  2. sign language recognition: gestures can help deaf people or other inconvenienced groups improve their standard of living [2]. Another common application is the remote control robot [3].
  3. how to eliminate the influence of phase wrapping?
  4. how to extract the feature template of each gesture
  5. how to recognize the predefined gestures
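On challenge 3 above: the reader reports phase only modulo 2π, so a continuously moving tag produces artificial jumps in the phase series; a standard first step is to unwrap the sequence, e.g.:

```python
import numpy as np

# A tag moving smoothly away from the antenna produces a linearly growing
# phase, but the reader reports it wrapped into [0, 2*pi):
true_phase = np.linspace(0.0, 12.0, 200)
wrapped = np.mod(true_phase, 2 * np.pi)

# np.unwrap detects jumps larger than pi and restores continuity
unwrapped = np.unwrap(wrapped)
```

This only works when consecutive samples differ by less than π, so the read rate must be high enough relative to the gesture speed; readers also add π ambiguities that papers typically handle with extra calibration.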
RelatedWork:
  1. wearable sensors
  2. computer vision-based systems
  3. previous work: poor portability, low robustness, high cost
  4. Warehouse Management Systems (WMS) [15], tracking of target objects [16]–[18], and indoor location [19]–[23].
  5. RFID indoor location: TagOram [16], LANDMARC [19], and RF-IDraw [23].
  6. activity recognition: GRfid, FEMO
Contribution:
  1. design a gesture recognition system which utilizes phase available from COTS devices to support both static and dynamic gesture recognition
  2. discover unique features differentiating each gesture type: static gestures tend to appear within the time period of phase-data stabilization, while dynamic gestures occur during the periods of fluctuation.
  3. carry out different normalization and classification schemes on static and dynamic gestures.
Innovation&consolution:
  1. Non-line-of-sight identification, energy-free sensing, low cost, and easy to carry.
Chart&Analyse:

Analysis of the above three pictures: 1. the phase is more reliable and better regulated than other output parameters from the reader, such as RSSI and Doppler shift.

  1. once the tag position is fixed, the raw phase obeys a Gaussian distribution and is barely influenced by the tag orientation.
  2. The phase has a linear relation to the distance within a wavelength (intra-wave) and a stable periodicity across wavelengths (inter-wave).
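The two observations above follow from the backscatter phase model θ = (4πd/λ) mod 2π: linear in d within half a wavelength, and periodic every λ/2 because the signal travels the round trip 2d. A small sketch (the carrier frequency below is an assumed typical UHF channel):

```python
import numpy as np

C = 3e8            # speed of light, m/s
FREQ = 920.625e6   # one typical UHF RFID channel (assumed)

def backscatter_phase(d):
    """Ideal phase reported by the reader for a tag at distance d (meters).
    The signal travels 2*d round trip, so the phase wraps every lambda/2."""
    wavelength = C / FREQ
    return (4 * np.pi * d / wavelength) % (2 * np.pi)
```

At ~921 MHz the wavelength is about 0.33 m, so the phase repeats roughly every 16 cm of tag motion, which is exactly the wrapping problem raised in the challenges above.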

Paper《SmartWall: Novel RFID-Enabled Ambient Human Activity Recognition Using Machine Learning for Unobtrusive Health Monitoring》

keyword: AAL (Ambient Assisted Living) **author:**George A. Oguntala (g.a.oguntala@bradford.ac.uk)

**level:**Digital Object Identifier 10.1109/ACCESS.2019.2917125

Contribution:
  1. a novel RFID-enabled approach that exploits the pervasive nature of UHF passive RFID tags to recognize sequential and concurrent activities.
  2. We develop machine learning via a multivariate Gaussian algorithm using maximum likelihood estimation to classify and predict the sampled activities.
  3. We conduct comprehensive experiments on various real-life physical activities via ambient sensing for data collection, evaluation, and classification.
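Contribution 2 (multivariate Gaussian classification via maximum likelihood) can be sketched as fitting one Gaussian per activity and predicting by highest log-likelihood. This is a generic sketch of the technique, not the paper's exact pipeline; the ridge term is an assumption for numerical stability:

```python
import numpy as np

class GaussianMLEClassifier:
    """Fit one multivariate Gaussian per class by MLE; classify a sample
    by the class with the highest Gaussian log-likelihood."""

    def fit(self, X, y):
        self.params = {}
        for label in np.unique(y):
            Xc = X[y == label]
            mu = Xc.mean(axis=0)                         # MLE mean
            cov = np.cov(Xc, rowvar=False, bias=True)    # MLE covariance
            cov += 1e-6 * np.eye(X.shape[1])             # small ridge (assumed)
            self.params[label] = (mu, cov)
        return self

    def predict(self, X):
        scores = {}
        for label, (mu, cov) in self.params.items():
            diff = X - mu
            inv = np.linalg.inv(cov)
            _, logdet = np.linalg.slogdet(cov)
            # log-likelihood up to a shared constant
            scores[label] = -0.5 * (np.einsum('ij,jk,ik->i', diff, inv, diff) + logdet)
        labels = list(scores)
        stacked = np.stack([scores[l] for l in labels])
        return np.array([labels[i] for i in np.argmax(stacked, axis=0)])
```

Equal class priors are implicitly assumed; adding log-priors to the scores would turn this into the usual Bayes classifier.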

Paper《RF-ECG: Heart Rate Variability Assessment Based on COTS RFID Tag Array》

keyword: networks, human-centered computing author:

level: Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. (IMWUT)

**lab:**State Key Laboratory for Novel Software Technology

Phenomenon&Challenge:
  1. detect and extract weak heartbeat signals from RFID tags among multiple interferences caused by human respiration and ambient noises.
  2. how to achieve fine-grained heartbeat estimation for HRV assessment according to the reflection effect → accurate beat segmentation: apply a wavelet-based denoising method to filter out the ambient noise outside the frequency band of the heart rate, then use a PCA-based scheme to derive a template depicting the inter-beat signals and iteratively perform the IBI segmentation.
  3. how to understand the sensing mechanism of the RFID tag array and leverage the tag array to perform accurate sensing.
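For challenge 2, a crude stand-in for the denoising step is to keep only the heart-rate frequency band of the reflection signal before beat segmentation. The paper uses wavelet-based denoising; this FFT band-pass sketch just illustrates the idea, and the band edges (0.8–2 Hz, i.e. 48–120 bpm) are an assumption:

```python
import numpy as np

def bandpass_fft(x, fs, lo=0.8, hi=2.0):
    """Zero all FFT bins outside [lo, hi] Hz and reconstruct the signal,
    suppressing respiration (<~0.5 Hz) and high-frequency noise."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    spec[(freqs < lo) | (freqs > hi)] = 0
    return np.fft.irfft(spec, n=len(x))
```

A sharp spectral cut like this rings on real data, which is one reason the paper prefers wavelet denoising; the sketch only shows why separating the respiration and heartbeat bands is possible at all.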
RelatedWork:
  1. Heart rate variability is widely used for general health evaluation. ECG suffers from low accuracy and limited battery life; FMCW utilizes dedicated devices for measurement.
  2. Heart Rate Variability represents the variation of the time interval between adjacent heartbeats. HRV reflects how the cardiovascular regulatory system responds to demands, stress, and illness, and quantitatively measures physiological and mental changes during treatment.
  3. ECG requires direct skin contact, meaning some people need to remove chest hair to achieve better signal quality.
  4. sensor-based heartbeat detection: PPG
  5. RF-based heartbeat detection: radar-based approaches (FMCW, Doppler radar) are accurate at measuring such tiny environmental changes; Wi-Fi-based detection works, but cannot label the subject
Contribution:
  1. leverage the RFID tag array to perform accurate sensing on HRV
  2. conducted an in-depth investigation on the sensing mechanism of the RFID tag array, to capture the relationship between the RF signal from the tag array and the corresponding movement from the heartbeat or respiration.
  3. algorithms to extract the HRV from RF signals mixed with heartbeat signals, respiration signals, and ambient noises.
Chart&Analyse:

Paper《Spin-Antenna 3D Motion Tracking for Tag Array Labeled Objects via Spinning Antenna》

keyword: author:

level:

**lab:**State Key Laboratory for Novel Software Technology,

Challenge:
  1. how to accurately estimate the 3D motion of the tag array, including the translation and the rotation
  2. how to tackle the variation of signal features when spinning the antenna, and use these features to derive the six degrees of freedom of the tag array.
  3. relationship between the signal feature variations and the matching/mismatching direction of the antenna-tag pair.
RelatedWork:
  1. HCI approaches mainly fall into three categories: computer vision (CV)-based, sensor-based, and sensorless approaches.
  2. previous work tracks the motion of the tagged object only in 2D space. Tagyro tracks the orientation of a tagged object via multiple antennas, but is not able to track the absolute translation of the object simultaneously. Tag-compass estimates the orientation of one tag based on multiple spinning antennas, but only on the precondition that the tagged object is deployed in a specified 2D plane.
Chart&Analyse:

the analysis of the above pictures is as follows:

1. during the spinning process, in comparison to the circularly polarized antenna, the phase variation of the linearly polarized antenna is more stable, and the RSSI variation of the linearly polarized antenna is more distinctive.

2. for the linearly polarized antenna, the mismatching direction, corresponding to the minimum RSSI value, is more distinctive for the estimation of the tag orientation than the matching direction, corresponding to the maximum.

3. for the linearly polarized antenna, the phase value keeps stable when the polarization direction of the antenna matches the tag orientation perfectly, and the phase value fluctuates when the polarization direction mismatches the tag orientation due to the multi-path effect.

4. <font color="red">the linearly polarized antenna can capture a more stable phase value and a more distinctive RSSI variance compared to the circularly polarized antenna: use the linearly polarized antenna to estimate the position with the phase value, and estimate the orientation with RSSI. The mismatching direction based on the RSSI variance is more distinctive for estimating the tag orientation compared to the matching direction. The phase around the matching direction is more stable, which can be used to calibrate the phase value by removing the noisy phase around the mismatching direction.</font>

Shortcoming&Confusion:
  1. Papers [4], [8], [9], [11], [15] remain to be read.

level: MobiCom2017
author:
date: ‘2017-09-05’
keyword:

  • RFID, Touch Interface, Mutual Coupling, Impedance Tracking

Paper: RIO

RIO: A Pervasive RFID-based Touch Gesture Interface
Summary

A hand swiping over a tag changes the antenna impedance and thus the observed phase; a DWT time-series matching algorithm then decides whether a swipe event occurred. Questions: does this only detect a finger swiping over a single tag? What happens if the swipe is not along the tag's direction?
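The "DWT time-series matching" mentioned here is a template-matching step; if it refers to dynamic time warping (DTW), a minimal version looks like this (purely illustrative; the paper's exact matching algorithm may differ):

```python
def dtw_distance(a, b):
    """Classic O(n*m) dynamic time warping distance between two 1-D
    sequences, tolerant to local time shifts and speed differences,
    e.g. matching a phase series against a recorded touch template."""
    INF = float('inf')
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible alignments
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

A touch event would then be declared when the DTW distance between the live phase window and a touch template falls below a threshold.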

Research Objective
  • Application Area: use the impedance change produced by antenna coupling to detect which tag a finger swipes over.
  • Purpose: design and develop RIO,battery-free touch sensing user interface primitive for future IOT and smart spaces.
Methods
  • how to detect a touch event on a single tag?

Evaluation
  • Environment: an Impinj R420 reader continuously queries the tags in range (at ~200 reads/second), recording the RF phase of all RFID responses to get a time series of phase readings for each individual tag. The camera is time-synchronized with the reader control software.

Conclusion
  • as a reliable primitive for touch sensing: 1. the impedance of the RFID antenna will vary in response to physical touch; 2. the amount of variation depends on the location of the physical contact with the antenna
  • making RIO resilient in a multi-tag environment
  • a touch and gesture UI primitive for smart spaces; robust touch and gesture sensing
Notes
  • Tags placed very close together exhibit a coupling effect.
  • Analysis workflow: what events can occur in a given scenario → what distinguishable phenomena correspond to them → are those phenomena actually well separable → does the problem call for a regression model, a classification model, or time-series analysis (the DWT algorithm suggests related models) → what similar studies exist and what are their possible shortcomings.

Paper《Multi-Touch in the Air: Device-Free Finger Tracking and Gesture Recognition via COTS RFID》

keyword: author:

**level:**IEEE INFOCOM 2018

**lab:**State Key Laboratory for Novel Software Technology

Phenomenon&Challenge:
  1. how to track the trajectory of the finger writings? model the impact of the moving finger on the tag array to extract the reflection features.
  2. how to recognize the multi-touch gestures? regard the multiple fingers as a whole for recognition, extract the reflection features of the multiple fingers as images, and use a CNN model to classify them.
  3. how to obtain signal quality from the tag array? utilize a signal model to depict the mutual interference between tags.
RelatedWork:
  1. RF-finger focuses on tracking the finger trace and recognizing the multi-touch gestures.
Contribution:
  1. presented RF-finger, a device-free system based on COTS RFID, which leverages a tag array on a letter-size paper to sense the fine-grained finger movements performed in front of the paper.
  2. focus on two kinds of sensing modes: finger tracking recovers the moving trace of finger writings; multi-touch gesture recognition identifies the multi-touch gestures involving multiple fingers.
  3. investigate the impact of tag array deployment on the signal quality. We analyze the mutual interference between tags via a signal model and provide recommendations on tag deployment to reduce the interference
Chart&Analyse:


Paper《ShopMiner: Mining Customer Shopping Behavior in Physical Clothing Stores with COTS RFID Devices》

Phenomenon&Challenge:
  1. Popular items represent the clothes frequently viewed by customers. Since customers pay more views to items that meet their tastes, popular-category data reveal customers' flavor, hence providing valuable information for retailers' trading strategies.
  2. Hot items are the clothes frequently picked up or turned over by customers. Hot items reveal whether customers show deeper interest in items after their first glance.
  3. Correlated items are the clothes that are frequently matched with or tried on together, which can help retailers infer customer shopping habits and adopt bundle-selling strategies to boost profit.
RelatedWork:
  1. camera-based approaches require densely deployed cameras
  2. video-based approaches are susceptible to non-line-of-sight conditions
  3. mining hot zones and popular products.
Contribution:
  1. We design ShopMiner, a framework that harnesses these unique spatial-temporal correlations of time-series phase readings to detect comprehensive shopping behaviors (looking at, picking up, and turning over a desired item).
Chart&Analyse:

Influenced by noise, the phase values fluctuate continuously and form a Gaussian-like distribution.

Paper《Demo: IMU-Kinect: A Motion Sensor-based Gait Monitoring System for Intelligent Healthcare》

keyword: Gait rehabilitation author:

**level:**2019 ACM International Joint Conference on Pervasive and Ubiquitous Computing
and the 2019 International Symposium on Wearable Computers (UbiComp/ISWC ’19 Adjunct),

**lab:**State Key Laboratory for Novel Software Technology

Phenomenon&Challenge:
  1. Gait rehabilitation is a common method of postoperative recovery, which assists patients in learning how to walk again after sustaining an injury or disability.
  2. wearable sensors reported in [8], such as pressure sensors and shoe sensors in [7]: the gait parameters can be easily extracted from measurements of wearable sensors, but we cannot obtain the changing trace of the lower limbs because these sensors only provide patchy measurements.
RelatedWork:
  1. Computer vision-based solutions in [2] and [6] can directly track the movements of lower limbs, but it is difficult to calculate gait parameters efficiently because such calculations require high performance devices and training data
Contribution:
  1. IMU-Kinect tracks the movements of lower limbs and estimates the gait parameters; the basic idea is to estimate the rotation and displacement of thighs and shanks based on Inertial Measurement Units (IMUs)
  2. Gait Parameters Estimation: temporal parameters (swing time, stance time, stride time; these times are still unclear to me), spatial parameters (step length, stride length, stride width)
  3. The gait phases are defined by consecutive occurrences of foot strike (FS), flat foot (FF), heel off (HO), and toe off (TO) [5].
Chart&Analyse:

level: just know the idea
author: Departmentof Electrical Engineering, City University of Hong Kong, Tat Chee Avenue
date:
keyword:

  • Action Recognition

Paper: IMU&Acoustic wrist


Multimodal hand gesture recognition using single IMU and acoustic measurements at wrist
Summary
  1. investigate the use of acoustic signals with accelerometer and gyroscope at the human wrist.
Research Objective
  • Application Area: gesture recognition
  • Purpose: recognize 10 daily activity gestures
Problem Statement

previous work:

  • cameras, sensor gloves, muscle-based gadgets [1], surface electromyography (sEMG), optical sensors, accelerometer and gyroscope,
Methods
  • Problem Formulation:

  • system overview:

recorded 10 acoustic channels and 6 channels of IMU data (10 microphones, a three-axis accelerometer, and a three-axis gyroscope) from the wrists of 10 subjects. These subjects performed 1 trial for each of the 13 daily-life gestures: hand lift, hands up, thumbs up/down, single/double tap, hand/finger swipe, okay sign, victory sign, and fist.
