
Compressive Sensing: The Big Picture

A living document trying to paint the Big Picture of the Compressed Sensing or Compressive Sensing framework.

[Wired/Slashdot readers, you may want to read this dedicated blog entry first!]

Table of Contents:




[Figure: compressed sensing illustration]

0. Why this Living Document?
Compressed Sensing or Compressive Sensing is about acquiring and recovering a sparse signal in the most efficient way possible (subsampling) with the help of an incoherent projecting basis. Unlike traditional sampling methods, Compressed Sensing provides a new framework for acquiring sparse signals in a multiplexed manner. The main theoretical findings in this recent field have mostly centered on how many multiplexed measurements are necessary to reconstruct the original signal and on the attendant nonlinear reconstruction techniques needed to demultiplex these signals. Another equally important thrust in the field has been the actual building of sensing hardware that can directly produce such multiplexed signals.
Information on this technique is growing fast and only a few specialists understand how each of these pieces fits within the big picture. The Nuit Blanche blog (the RSS feed is here) provides daily updates on new papers/preprints and ideas coming into the field, while this document is meant to be a little less active, since the big picture should not change every day. While much emphasis is currently, and rightfully, put on performing faster reconstruction of the original signal from the compressed measurements, we are also beginning to see the emergence of other tools that complete this framework.

These tools include

  • the ability to search for bases or dictionaries in which sets of signals can be decomposed in a sparse manner, and
  • the ability to find and quantify specific measurement tools that are incoherent with said dictionaries.
In other words, once a signal is known to be sparse in a specific basis, one of the main challenges is to find a set of measurement tools (producing the compressed measurements) and the attendant nonlinear solver that reconstructs the original full signal. There are theoretical results giving the minimum number of measurements needed to recover the original signal for a specific pair of measurement matrix and nonlinear solver. In all cases, the number of compressed measurements is expected to be low relative to traditional Nyquist sampling constraints. This living document aims at categorizing all these tools for the purpose of enabling their rapid adoption by the community.
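A minimal sketch of this pipeline, assuming NumPy (the sizes, the Gaussian measurement ensemble and the greedy solver are all illustrative choices, not the method advocated by any particular paper): a signal with k nonzeros in the canonical basis is multiplexed by a random matrix into m << n measurements and then recovered by orthogonal matching pursuit.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 8            # ambient dimension, measurements, sparsity

# k-sparse signal in the canonical basis
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)

# Random Gaussian measurement matrix: each row is one multiplexed measurement
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ x                      # m << n compressed measurements

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedily pick the column most correlated
    with the residual, then re-fit the selected columns by least squares."""
    residual, support = y.copy(), []
    for _ in range(k):
        idx = int(np.argmax(np.abs(Phi.T @ residual)))
        if idx not in support:
            support.append(idx)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(Phi, y, k)
print("relative recovery error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```

With these (arbitrary) sizes, exact recovery is typical; shrinking m eventually crosses the phase transition discussed in section 3.1.1.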
The next paragraphs try to bring the most up-to-date list of the different theoretical and applied endeavors implemented in this framework. The fourth paragraph tries to give an exhaustive list of hardware implementations using Compressed Sensing or Compressive Sampling. The fifth paragraph lists a search engine on the subject and the sixth paragraph provides a calendar of activities in the field. If you feel it is inaccurate or missing elements, please bring them to my attention. Thanks! This document tries to summarize most of the information found in:


1. Understanding Compressed Sensing

First and foremost, you may want to check this page entitled Teaching Compressive Sensing; if you are new to the field and want a program that subsamples a signal and reconstructs it exactly, start there.
I have also featured a series of riddles and (magic) tricks that can make Compressive Sensing easier to understand on the CS Riddles page. For more comprehensive examples, you may want to delve directly into Sparco and its attendant set of examples.
Once you are convinced it works and want to delve into it, try watching the excellent lecture videos:


In the same vein, there is a nice tutorial presentation on Compressed Sensing by Richard Baraniuk, Justin Romberg, and Michael Wakin, as well as a more in-depth tutorial entitled Compressive Sensing - Theory and Applications by Petros Boufounos, Justin Romberg and Richard Baraniuk. [There is now a video of Richard Baraniuk giving his latest introduction to Compressed Sensing at Microsoft Research on August 4, 2008. Please be aware that the address does not seem to work with Firefox, Chrome or Opera (I tried); it only works with Internet Explorer. The original link, which can be viewed from any browser, is here.] Other presentations that might provide insights include:

Additionally, there are two courses that might be complementary to these presentations:

Now onto the important stuff

2. Dictionaries for Sparse Recovery (How to Make or Find them)

When a signal is said to be sparse in an engineering sense, it really means that the signal is compressible, i.e. it can be expanded in either a small number of terms or in a series with significantly decaying coefficients. In order to produce compressed measurements, one first needs to know the family of functions in which the signal of interest is sparse. Depending on the case, one might be lucky and know that the signal is sparse in a basis found in harmonic analysis (2.1), or one may have to spend some work devising this sparse basis through an algorithm dedicated to finding sparse dictionaries from a set of signal examples (2.2 and 2.3). Finally, Remi Gribonval and Karin Schnass provide, in Dictionary Identification - Sparse Matrix-Factorisation via L1-Minimisation, estimates of the number of training examples needed to build a dictionary.
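As a quick illustration of "compressible" (a sketch assuming NumPy/SciPy and an arbitrary smooth test signal): a signal that is dense in the time domain can still be expanded in a harmonic-analysis basis, here the DCT, with fast-decaying coefficients, so keeping only a handful of them already reconstructs it well.

```python
import numpy as np
from scipy.fft import dct, idct

n = 512
t = np.linspace(0, 1, n)
# Smooth test signal: dense in time, compressible in the DCT basis
signal = np.exp(-((t - 0.3) / 0.05) ** 2) + np.cos(2 * np.pi * 3 * t)

coeffs = dct(signal, norm='ortho')

# Keep only the 'keep' largest-magnitude coefficients, zero out the rest
keep = 20
thresh = np.sort(np.abs(coeffs))[-keep]
approx = idct(np.where(np.abs(coeffs) >= thresh, coeffs, 0.0), norm='ortho')

rel_err = np.linalg.norm(signal - approx) / np.linalg.norm(signal)
print(f"relative error keeping {keep}/{n} DCT coefficients: {rel_err:.2e}")
```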

2.1 Basis Functions for which signals are either sparse or compressible

2.2 Algorithms that find sparse dictionaries are presented in:



Let us note the Matlab Toolbox Sparsity by Gabriel Peyre, which implements some of these techniques. Knowledge of domain-specific signals makes it possible to build these hopefully small dictionaries.
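A bare-bones sketch, assuming NumPy and synthetic training data, of the alternating strategy underlying most of these dictionary-learning algorithms (in the spirit of MOD/K-SVD, not a faithful implementation of either): sparse-code the training signals against the current dictionary, then update the dictionary by least squares, and repeat. The crude correlation-thresholding coder below merely stands in for a proper sparse coder such as OMP.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_atoms, n_signals, k = 20, 40, 500, 3

# Ground-truth dictionary and k-sparse training signals synthesized from it
D_true = rng.standard_normal((d, n_atoms))
D_true /= np.linalg.norm(D_true, axis=0)
A_true = np.zeros((n_atoms, n_signals))
for j in range(n_signals):
    A_true[rng.choice(n_atoms, size=k, replace=False), j] = rng.standard_normal(k)
X = D_true @ A_true

def sparse_code(D, X, k):
    """Crude sparse coding: pick the k atoms most correlated with each signal,
    then least-squares fit on that support (a stand-in for OMP)."""
    A = np.zeros((D.shape[1], X.shape[1]))
    corr = D.T @ X
    for j in range(X.shape[1]):
        idx = np.argsort(np.abs(corr[:, j]))[-k:]
        sol, *_ = np.linalg.lstsq(D[:, idx], X[:, j], rcond=None)
        A[idx, j] = sol
    return A

# Initialize with randomly chosen training signals, then alternate the two steps
D = X[:, rng.choice(n_signals, size=n_atoms, replace=False)].copy()
D /= np.linalg.norm(D, axis=0)
for _ in range(30):
    A = sparse_code(D, X, k)
    D = X @ np.linalg.pinv(A)              # MOD update: argmin_D ||X - D A||_F
    D /= np.linalg.norm(D, axis=0) + 1e-12

A = sparse_code(D, X, k)
print("relative training fit error:", np.linalg.norm(X - D @ A) / np.linalg.norm(X))
```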

For a review of the state of the art on how to compile dictionaries from training signals, and the attendant theoretical issues, check the following document by Remi Gribonval for his Habilitation à Diriger des Recherches, entitled Sur quelques problèmes mathématiques de modélisation parcimonieuse, translated as Sparse Representations: From Source Separation to Compressed Sensing. There is a video as well as an audio-only version of this presentation in French. The accompanying slides in English are here.

2.3 Data Driven Dictionaries

The next step will almost certainly bring about techniques that find elements within a manifold, as opposed to a full set of functions: some sort of Data Driven Dictionary. In this setting, one can list:

Some of these techniques are being used for dimensionality reduction, which in effect is stating that the datasets are compressible when represented with these dictionaries.

3. Compressed Sensing Measurement / Sparse Signal Encoding


In this segment, the emphasis is on finding a means of acquiring sparse signals using projections onto bases that are incoherent with the bases found in the previous paragraph. However, a criterion for checking whether a specific measurement or encoding matrix will allow the recovery of a sparse solution is needed. For that purpose, an early argument was to require that certain families of matrices follow the Restricted Isometry Property (RIP). It is, however, only a sufficient condition and, given a matrix, it is NP-hard to find out whether that matrix satisfies the property. It is also known that the RIP is too strict (as shown by Jared Tanner in Phase Transition Phenomenon in Sparse Approximation).
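Since certifying the RIP for a given matrix is intractable, the mutual coherence is the quantity one can actually compute and check, at the price of very pessimistic guarantees. A small NumPy sketch (sizes arbitrary) showing the coherence of a random Gaussian matrix, the classical sufficient condition k < (1 + 1/mu)/2 it yields, and the maximal incoherence of the Fourier/Dirac pair:

```python
import numpy as np

rng = np.random.default_rng(2)

def coherence(A):
    """Mutual coherence: largest absolute inner product between two
    distinct columns after normalizing each column to unit norm."""
    A = A / np.linalg.norm(A, axis=0)
    G = np.abs(A.conj().T @ A)
    np.fill_diagonal(G, 0.0)
    return G.max()

m, n = 64, 256
Phi = rng.standard_normal((m, n))            # random Gaussian sensing matrix
mu = coherence(Phi)
print("coherence of a 64x256 Gaussian matrix:", round(float(mu), 3))
# Classical sufficient condition: unique k-sparse recovery when k < (1 + 1/mu)/2
print("sparsity certified by the coherence bound:", int((1 + 1 / mu) / 2))

# Fourier <-> Dirac pair: sqrt(n) * max |<Fourier row, canonical vector>| equals 1,
# i.e. the two bases are maximally incoherent
F = np.fft.fft(np.eye(n)) / np.sqrt(n)
print("Fourier/Dirac incoherence:", round(float(np.sqrt(n) * np.abs(F).max()), 3))
```

The coherence bound certifies only very small sparsity levels; the Donoho-Tanner phase transition of the next subsection gives a far more realistic picture of what a given matrix/solver pair actually recovers.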

3.1 Measurement Matrix Admissibility Conditions

3.1.1 The Donoho-Tanner Phase Transition



The phase transition diagram, which I call the Donoho-Tanner phase transition, is probably the only good means of figuring out whether a given measurement/encoding matrix can provide good compressive capabilities. One should also note that the phase transition depends not only on the encoding matrix but also on the recovery algorithm used. The following websites provide an interactive way of determining this phase transition (a small empirical sketch of how such a diagram is computed appears after the figure below):

[Figure: phase diagram equivalence]
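A sketch of how such a diagram is produced empirically (NumPy only; small sizes and few trials so it runs quickly, and a basic OMP plays the role of the recovery algorithm): sweep the undersampling ratio delta = m/n and the sparsity ratio rho = k/m, run a few random trials per cell, and record the success rate. The sharp boundary between the all-success and all-failure regions is the Donoho-Tanner curve; swapping in a different solver traces a different curve.

```python
import numpy as np

rng = np.random.default_rng(3)
n, n_trials = 100, 10

def omp(Phi, y, k):
    # Greedy solver used to define "success" for this diagram
    residual, support = y.copy(), []
    for _ in range(k):
        idx = int(np.argmax(np.abs(Phi.T @ residual)))
        if idx not in support:
            support.append(idx)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

deltas = np.linspace(0.1, 0.9, 5)      # delta = m / n  (undersampling ratio)
rhos = np.linspace(0.1, 0.9, 5)        # rho   = k / m  (sparsity ratio)
success = np.zeros((len(rhos), len(deltas)))

for i, rho in enumerate(rhos):
    for j, delta in enumerate(deltas):
        m = max(1, int(round(delta * n)))
        k = max(1, int(round(rho * m)))
        for _ in range(n_trials):
            x = np.zeros(n)
            x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
            Phi = rng.standard_normal((m, n)) / np.sqrt(m)
            x_hat = omp(Phi, Phi @ x, k)
            success[i, j] += np.linalg.norm(x_hat - x) <= 1e-6 * np.linalg.norm(x)
success /= n_trials

# Rows: rho (top row = sparsest), columns: delta; 1.0 means always recovered
print(np.round(success, 1))
```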


3.1.2 Other Properties (NSP, RIP, ...)

Here is a list of properties that measurement matrices must have in order to enable sparse recovery. Note that most of them are only sufficient conditions:




Other NP-Hard properties: 
Other papers/presentations of interest include:

3.2 Non-Deterministic and Non-Adaptive Measurement Matrices / Encodings

The first encoding matrices follow the RIP(2) property, while the most recent sparse encoding matrices follow the RIP(1) property; a sketch of how these ensembles are generated follows the list below. In the following, the first three measurement matrices are pasted from Terry Tao's site:

RIP(2)



RIP(1)
RIP_p
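A sketch of how the classical ensembles behind these two categories are generated (NumPy; the sizes and the number d of ones per column are illustrative): dense Gaussian, Bernoulli +/-1 and randomly subsampled Fourier rows are the standard RIP(2) examples, while sparse binary matrices with d ones per column, i.e. adjacency matrices of random left-d-regular bipartite (expander) graphs, are the standard RIP(1) examples.

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 64, 256

# --- RIP(2)-style ensembles: dense sub-Gaussian or random partial Fourier ---
gaussian = rng.standard_normal((m, n)) / np.sqrt(m)
bernoulli = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)
rows = rng.choice(n, size=m, replace=False)            # random subset of DFT rows
partial_fourier = np.fft.fft(np.eye(n))[rows] / np.sqrt(m)

# --- RIP(1)-style ensemble: sparse binary matrix with d ones per column ---
d = 8
sparse_binary = np.zeros((m, n))
for col in range(n):
    sparse_binary[rng.choice(m, size=d, replace=False), col] = 1.0

for name, A in [("Gaussian", gaussian), ("Bernoulli", bernoulli),
                ("partial Fourier", partial_fourier), ("sparse binary", sparse_binary)]:
    print(f"{name:16s} shape={A.shape}, nonzeros in first column="
          f"{np.count_nonzero(A[:, 0])}")
```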

3.3 Non-Deterministic and Adaptive Measurement codes/matrices as mentioned here (as well as new ones).

3.4 Deterministic Subsampling

3.4.1 Deterministic Subsampling with no prior information

3.4.1.1 Fourier <-> Diracs

3.4.1.2 Other pair of bases

3.4.2 Deterministic Subsampling using additional prior information (besides sparsity)



Obviously, with the advent of Data Driven Dictionaries, I would expect domain-specific measurement matrix methods to appear.


4. Sparse Signal Reconstruction / Nonlinear Solvers

[Please note: the most recent solvers are at the end of this list.]

[For Authors: If your solver is not listed here, please contact me.]

The following is a list of nonlinear solvers used to reconstruct the original signal from its compressed measurements / projections onto an incoherent basis (as listed above in paragraph 3). Reconstruction codes span a wide range of techniques, including matching pursuit/greedy algorithms, basis pursuit/linear programming, Bayesian methods, iterative thresholding and proximal methods. These links generally point to the Matlab toolbox implementing each technique. Originally, this list was similar to the list found in the excellent Rice Repository.
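As one concrete representative of the iterative-thresholding family mentioned above, here is a minimal ISTA (iterative soft-thresholding) sketch in NumPy for the l1-regularized least-squares problem min_x 0.5*||y - Phi x||^2 + lambda*||x||_1; the step size, regularization weight and iteration count are illustrative, not tuned.

```python
import numpy as np

rng = np.random.default_rng(5)
n, m, k = 256, 96, 10

# Synthetic compressed-sensing instance with a little measurement noise
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ x_true + 0.01 * rng.standard_normal(m)

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(Phi, y, lam=0.01, n_iter=500):
    """Iterative soft-thresholding for min_x 0.5*||y - Phi x||^2 + lam*||x||_1."""
    L = np.linalg.norm(Phi, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ x - y)         # gradient of the quadratic data term
        x = soft_threshold(x - grad / L, lam / L)
    return x

x_hat = ista(Phi, y)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```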

5. Hardware / Compressive Sensing Sensor implementations

Most of these entries are summarized on the Compressive Sensing Sensor / Hardware page; you should go to that page for updated information. One should notice the difference between hardware that requires no modification, only different operating procedures, and entirely new hardware. Hardware implementations vary according to what the hardware can already do. In the case of MRI, one is directly sampling in Fourier space and the acquisition is therefore directly amenable to subsampling in the Fourier world. Other instruments do not sample in Fourier space, and one is led to think of inventive schemes to do so. The most salient feature of random or noiselet measurements is that one can foresee a hardware implementation without having to know the decomposing sparse basis, since there is a high probability that the two will be incoherent with each other.
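A small sketch of the MRI-style situation described above (NumPy; a 1-D spike train, sparse in the canonical basis, stands in for an image that would be sparse in a wavelet basis): the "instrument" samples directly in Fourier space, so building a compressive acquisition simply means keeping a random subset of the Fourier coefficients, and the naive zero-filled inverse shows why a nonlinear solver from section 4 is needed.

```python
import numpy as np

rng = np.random.default_rng(6)
n, m, k = 256, 64, 8

# Spike train: sparse in the canonical basis, hence maximally incoherent
# with the Fourier domain in which the instrument samples
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)

# Subsampled Fourier acquisition: keep only m of the n Fourier coefficients
sampled = rng.choice(n, size=m, replace=False)
y = np.fft.fft(x)[sampled] / np.sqrt(n)          # the compressed measurements

# Naive zero-filled reconstruction (what one gets without a sparse solver)
full = np.zeros(n, dtype=complex)
full[sampled] = y * np.sqrt(n)
x_zero_filled = np.real(np.fft.ifft(full))

print("zero-filled relative error:",
      np.linalg.norm(x_zero_filled - x) / np.linalg.norm(x))
# Feeding y and the same subsampled-DFT operator to any solver from section 4
# recovers x exactly with high probability once m is on the order of k log n.
```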
Original article: https://sites.google.com/site/igorcarron2/cs#why

Reposted from: https://www.cnblogs.com/sunshineQu/archive/2011/10/21/2219937.html
