Sensor fusion and input representation for time series classification using deep nets

In this short blog post, I will cover some ideas for sensor fusion, i.e. combining data from multiple sensors, and different ways to represent sensor input data for deep neural networks.

In the last post, we saw how to apply a convolutional neural network to accelerometer data for human activity recognition. The input data had three components (x, y and z) from an accelerometer. A sliding-window approach was used to extract fixed-size segments with class labels, which were then fed into a deep net for activity recognition. A depthwise convolution operation was applied to the input, which learns different weights for different input channels, in our case the different accelerometer components. The features learned by the convolution and pooling layers were then fed into a feed-forward neural network for classification.

Another way to represent the input for a convolutional neural network is to keep the x, y and z components separate and apply separate convolution and/or pooling operations to each of them, so that their features are learned independently [1]. At a later stage, the outputs of the convolution or pooling layers are flattened and combined, and these new features are fed into densely connected layers for classification.

Likewise, another idea is to apply an FFT or spectrogram analysis to the accelerometer components and feed the new representation into a deep net. A spectrogram represents the energy content of a signal as a function of frequency and time. Such representations of the raw signal can make interesting features easier to learn by reducing the complexity of the task; for more information, please consult [2].

Similarly, if you have a dataset with multiple accelerometer sensors sharing the same sampling rate, 2D segments (like images) can be extracted, where each row or column holds the x, y and z components of one of the sensors. I would highly recommend that interested readers check the papers [3] and [4].

Last but not least, if the data comes from multiple sensors with different sampling rates, the first thing to do is to time-align the dataset. Afterwards, a separate convolutional neural network can be applied independently to each sensor's data to learn features. These learned features are then combined and can be fed into an LSTM to learn the interactions between the different sensors. More information on this approach can be found in [5].

I have discussed some of the ideas and techniques I picked up while reading papers; minimal code sketches of a few of them follow below. If you have more interesting thoughts, suggestions or feedback, please comment below.
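
As a starting point, here is a minimal NumPy sketch of the sliding-window segmentation step described above; the 128-sample window, the 50% overlap and the majority-vote labelling are illustrative assumptions, not values from the original post.

```python
import numpy as np

def sliding_windows(signal, labels, window_size=128, step=64):
    """Cut a (num_samples, 3) accelerometer stream into fixed-size,
    overlapping windows; each window is labelled by majority vote."""
    segments, segment_labels = [], []
    for start in range(0, len(signal) - window_size + 1, step):
        end = start + window_size
        segments.append(signal[start:end])
        # majority vote over the per-sample labels inside the window
        values, counts = np.unique(labels[start:end], return_counts=True)
        segment_labels.append(values[np.argmax(counts)])
    return np.asarray(segments), np.asarray(segment_labels)

# e.g. with acc of shape (num_samples, 3) and one label per sample:
# X, y = sliding_windows(acc, labels)   # X: (num_windows, 128, 3)
```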
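
A minimal sketch of the per-channel idea from [1], assuming PyTorch; the layer sizes, the 128-sample window and the six activity classes are illustrative choices rather than the architecture used in the paper. Each component gets its own convolution and pooling branch, and the flattened branch outputs are concatenated before the dense classifier.

```python
import torch
import torch.nn as nn

class PerChannelCNN(nn.Module):
    """Separate Conv1d/pooling branch per accelerometer component (x, y, z);
    branch outputs are flattened, concatenated and classified by dense layers."""
    def __init__(self, window_size=128, num_classes=6):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv1d(1, 16, kernel_size=5, padding=2),
                nn.ReLU(),
                nn.MaxPool1d(2),
            )
            for _ in range(3)  # one branch per component
        ])
        feat_dim = 3 * 16 * (window_size // 2)
        self.classifier = nn.Sequential(
            nn.Linear(feat_dim, 64),
            nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x):           # x: (batch, window_size, 3)
        x = x.permute(0, 2, 1)      # -> (batch, 3, window_size)
        feats = [branch(x[:, i:i + 1, :]).flatten(1)
                 for i, branch in enumerate(self.branches)]
        return self.classifier(torch.cat(feats, dim=1))

# logits = PerChannelCNN()(torch.randn(8, 128, 3))   # -> (8, 6)
```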
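
For the frequency-domain representation, a spectrogram of a single accelerometer component can be computed with SciPy; the 50 Hz sampling rate, window length and overlap below are assumed values for illustration.

```python
import numpy as np
from scipy import signal

fs = 50.0                               # assumed sampling rate in Hz
acc_x = np.random.randn(int(fs) * 10)   # stand-in for 10 s of one component

# f: frequency bins, t: time bins, Sxx: power per (frequency, time) cell
f, t, Sxx = signal.spectrogram(acc_x, fs=fs, nperseg=64, noverlap=32)

# log-scaling keeps the dynamic range manageable before feeding a deep net
log_spec = np.log(Sxx + 1e-10)          # image-like array, shape (len(f), len(t))
```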
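
Finally, a rough sketch of the multi-sensor setup with different sampling rates, loosely following the idea behind [5] rather than the actual DeepSense architecture: after time alignment, each sensor (here two hypothetical ones, e.g. an accelerometer and a gyroscope, each already segmented into a sequence of windows) is processed by its own small CNN, and the concatenated per-window features are passed to an LSTM that models how the sensors interact over time.

```python
import torch
import torch.nn as nn

class SensorBranch(nn.Module):
    """Small Conv1d feature extractor applied to one (time-aligned) sensor."""
    def __init__(self, in_channels, out_features=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # pool each window to a single vector
            nn.Flatten(),
            nn.Linear(32, out_features),
        )

    def forward(self, x):              # x: (batch, channels, window)
        return self.net(x)

class FusionLSTM(nn.Module):
    """Per-sensor CNN features are concatenated per window and an LSTM
    learns the interaction between sensors across a sequence of windows."""
    def __init__(self, sensor_channels=(3, 3), num_classes=6):
        super().__init__()
        self.branches = nn.ModuleList([SensorBranch(c) for c in sensor_channels])
        self.lstm = nn.LSTM(32 * len(sensor_channels), 64, batch_first=True)
        self.out = nn.Linear(64, num_classes)

    def forward(self, sensors):
        # sensors: one tensor per sensor, each (batch, seq_len, channels, window)
        fused_steps = []
        for t in range(sensors[0].shape[1]):
            feats = [branch(s[:, t]) for branch, s in zip(self.branches, sensors)]
            fused_steps.append(torch.cat(feats, dim=1))
        fused = torch.stack(fused_steps, dim=1)       # (batch, seq_len, features)
        _, (h, _) = self.lstm(fused)
        return self.out(h[-1])

# acc, gyro = torch.randn(4, 10, 3, 128), torch.randn(4, 10, 3, 128)
# logits = FusionLSTM()([acc, gyro])                  # -> (4, 6)
```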

1. Cui, Zhicheng, Wenlin Chen, and Yixin Chen. "Multi-scale convolutional neural networks for time series classification." arXiv preprint arXiv:1603.06995 (2016).
2. Alsheikh, Mohammad Abu, et al. "Deep activity recognition models with triaxial accelerometers." arXiv preprint arXiv:1511.04664 (2015).
3. Hammerla, Nils Y., Shane Halloran, and Thomas Ploetz. "Deep, convolutional, and recurrent models for human activity recognition using wearables." arXiv preprint arXiv:1604.08880 (2016).
4. Yang, Jianbo, et al. "Deep convolutional neural networks on multichannel time series for human activity recognition." IJCAI 2015.
5. Yao, Shuochao, et al. "DeepSense: A unified deep learning framework for time-series mobile sensing data processing." arXiv preprint arXiv:1611.01942 (2016).

Original post: http://aqibsaeed.github.io/2017-01-21-sensor-fusion-and-input-representation/

## deepHAR

Code repository for experiments on deep architectures for HAR in ubicomp. Using this code you will be able to replicate some of the experiments described in our IJCAI 2016 paper:

```
@article{hammerla2016deep,
  title={Deep, convolutional, and recurrent models for human activity recognition using wearables},
  author={Hammerla, Nils Y and Halloran, Shane and Ploetz, Thomas},
  journal={IJCAI 2016},
  year={2016}
}
```

## Disclaimer

This code is still incomplete. At the moment only the bi-directional RNN will work on the Opportunity data-set.

## Installation

```
git clone https://github.com/torch/distro.git ~/torch --recursive
cd ~/torch; bash install-deps; ./install.sh

# after installation, we need some additional packages

# HDF5 luarock
sudo apt-get install libhdf5-serial-dev hdf5-tools
git clone https://github.com/deepmind/torch-hdf5
cd torch-hdf5
luarocks make hdf5-0-0.rockspec LIBHDF5_LIBDIR="/usr/lib/x86_64-linux-gnu/"

# json
luarocks install json

# RNN support
luarocks install torch
luarocks install nn
luarocks install dpnn
luarocks install torchx
luarocks install rnn

# we use python3
pip3 install h5py
pip3 install simplejson
pip3 install numpy
```

## Usage

First download and extract the Opportunity dataset. Then use the provided python script in the `data` directory to prepare the training/validation/test sets.

```
cd data
python3 data_reader.py opportunity /path/to/OpportunityUCIDataset
```

This will generate two hdf5-files that are read by the lua scripts, `opportunity.h5` and `opportunity.h5.classes.json`.

To train the bi-directional RNN that we have found to work best on this set, run the following commands:

```
cd models/RNN
th main_brnn.lua -data ../../data/opportunity.h5 -cpu \
   -layerSize 179 -maxInNorm 2.283772707 \
   -learningRate 0.02516758 -sequenceLength 81 \
   -carryOverProb 0.915735543 -numLayers 1 \
   -logdir EXP_brnn
```

This will train a model only using your CPUs, which will take a while (make sure you have some form of BLAS library installed). On my laptop this will take approx. 5 min per epoch, and it will likely not converge before epoch 60. If your environment is set up for GPU-based computation, try using `-gpu 1` instead of the `-cpu` flag for a significant speedup.

## Other models

The python-based `data_reader.py` is new and substitutes for the original but unmaintainable Matlab scripts used previously. So far it only supports `opportunity` and sample-based evaluation, which will be addressed shortly.