A living document that tries to paint the Big Picture of the Compressed Sensing (or Compressive Sensing) framework.
[Wired/Slashdot readers, you may want to read this dedicated blog entry first!]
Table of Contents:
1. Understanding Compressive Sensing
2. Dictionaries for Sparse Recovery (how to make or find them)
3. Compressive Sensing Measurement Matrices
4. Sparse Recovery Solvers / Reconstruction Techniques (codes)
5. Compressive Sensing Sensor Implementations
These tools include:
- the ability to search for bases or dictionaries in which sets of signals can be decomposed in a sparse manner, and
- the ability to find and quantify specific measurement tools that are incoherent with said dictionaries.
Other resources on this site include:
- The Nuit Blanche blog: daily news on the subject of compressive sensing.
- Compressive Sensing Sensors / Hardware
- List of Compressive Sensing Classes offered at different universities.
- Local Compressive Sensing codes: Codes produced or hosted on my site.
- Compressive Sensing Online Talks / Videos: all the videos presenting the subject.
- Compressive Sensing Jobs: a list of jobs requiring some understanding of compressive sensing.
- Compressive Sensing Outreach: Sites and blogs presenting research projects, examples, or discussions on Compressive Sensing.
- Compressive Sensing Community website.
1. Understanding Compressed Sensing
First and foremost, you may want to check this page entitled teaching compressive sensing. If you are new to the field and want to run a program that subsamples a signal and reconstructs it exactly, then:
- go there and try the "How to wow your friends" Matlab code (more info can be found in this entry).
- An illustration of compressive sensing in audio can be found in Laura Balzano's page featured in an NPR All Tech Considered episode entitled The Path from Syphilis to Faster MRIs (the page has the attendant examples and Matlab files).
In the same vein, there is a nice tutorial presentation on Compressed Sensing by Richard Baraniuk, Justin Romberg, and Michael Wakin, as well as a more in-depth tutorial entitled Compressive sensing - Theory and applications by Petros Boufounos, Justin Romberg and Richard Baraniuk. [There is now a video of Richard Baraniuk showing his latest introduction to Compressed Sensing at Microsoft Research on August 4, 2008. Please be aware that the address does not seem to work with Firefox, Chrome or Opera (I tried); it only works with Internet Explorer. The original link that can be seen from any browser is here.] Other presentations that might provide insights include:
- Tutorial on Compressed Sensing (or Compressive Sampling or Linear Sketching) by Piotr Indyk
- Combinatorial group testing and signal recovery (an instance of group testing) by Anna Gilbert
- Terry Tao making the connection between compressed sensing and the 12 coins problem (another instance of group testing).
- For a state-of-the-art presentation on Sparse Representations, one can read, watch or listen to From Source Separation to Compressed Sensing (the video and audio-only versions are in French; the accompanying slides in English are here) by Remi Gribonval.
- Compressive Sensing by Mark Davenport, Richard Baraniuk, and Ronald DeVore
- Sketching, Streaming and Sub-linear Space Algorithms, by Piotr Indyk
2. Dictionaries for Sparse Recovery (How to Make or Find them)
When a signal is said to be sparse in an engineering sense, it really means that the signal is compressible, i.e. it can be expanded either in a small number of terms or in a series with rapidly decaying coefficients. In order to produce compressed measurements, one first needs to know the family of functions in which the signal of interest is sparse. Depending on the case, one might be lucky and know that the signal is sparse in a basis found in harmonic analysis (2.1), or one may have to spend some work devising that sparse basis through an algorithm dedicated to finding sparse dictionaries from a set of signal examples (2.2 and 2.3). Finally, Remi Gribonval and Karin Schnass produce some estimates in Dictionary Identification - Sparse Matrix-Factorisation via L1-Minimisation on the number of training examples needed to build a dictionary.
2.1 Basis Functions for which signals are either sparse or compressible
- Fourier, Polynomials, etc...
- All kinds of wavelets and higher dimensional related functions (a few are listed in Where is the Starlet)
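To make the notion of sparsity in a basis concrete, here is a small Python/NumPy sketch (an illustrative toy of my own, not tied to any of the codes listed on this page). A signal that looks dense in time can be exactly 2-sparse in the DCT basis, so keeping its two largest coefficients reconstructs it perfectly; that is the behavior the word "compressible" refers to:

```python
import numpy as np
from scipy.fft import dct, idct

# A toy signal that is dense in time but 2-sparse in the DCT basis:
# it is built from exactly two DCT-II basis functions (k = 10 and k = 60).
n = 256
t = np.arange(n)
x = np.cos(np.pi * 10 * (2 * t + 1) / (2 * n)) \
  + 0.5 * np.cos(np.pi * 60 * (2 * t + 1) / (2 * n))

alpha = dct(x, norm='ortho')                 # expansion coefficients in the DCT basis
support = np.flatnonzero(np.abs(alpha) > 1e-8)
print(len(support))                          # only 2 of the 256 coefficients are nonzero

# Keep the 2 largest coefficients and invert the transform:
alpha_k = np.where(np.abs(alpha) >= np.sort(np.abs(alpha))[-2], alpha, 0)
x_k = idct(alpha_k, norm='ortho')
print(np.allclose(x, x_k))                   # the 2-term expansion is exact
```

For a real-world signal the coefficients would merely decay quickly rather than vanish, but the principle of keeping a few large coefficients is the same.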
2.2 Algorithms that find sparse dictionaries are presented in:
- Online Learning for Matrix Factorization and Sparse Coding by Julien Mairal, Francis Bach, Jean Ponce, Guillermo Sapiro [The code is released as SPArse Modeling Software or SPAMS]
- Dictionary Learning Algorithms for Sparse Representation (Matlab implementation of FOCUSS/FOCUSS-CNDL is here)
- Multiscale sparse image representation with learned dictionaries [Matlab implementation of the K-SVD algorithm is here, a newer implementation by Ron Rubinstein is here ]
- Efficient sparse coding algorithms [ Matlab code is here ]
- Non-negative Sparse Modeling of Textures (NMF) [Matlab implementation of NMF (Non-negative Matrix Factorization) and NTF (Non-negative Tensor Factorization); a faster implementation of NMF can be found here; here is a more recent Non-Negative Tensor Factorizations package]
- Shift Invariant Sparse Coding of Image and Music Data. The Matlab implementation is here.
- MoTIF: an Efficient Algorithm for Learning Translation Invariant Dictionaries, but also Learning Multi-Modal Dictionaries. Also, more recent: Shift-invariant dictionary learning for sparse representations: extending K-SVD.
- Thresholded Smoothed-L0 (SL0) Dictionary Learning for Sparse Representations by Hadi Zayyani, Massoud Babaie-Zadeh and Remi Gribonval.
Let us note the Matlab Toolbox Sparsity by Gabriel Peyre, which implements some of these techniques. Knowledge of the specific signal domain enables one to build these hopefully small dictionaries.
For a review of the state of the art on how to compile dictionaries from training signals and the attendant theoretical issues, check the following document by Remi Gribonval for his Habilitation a Diriger des Recherches, entitled Sur quelques problèmes mathématiques de modélisation parcimonieuse, translated as Sparse Representations: From Source Separation to Compressed Sensing. This presentation is available as video and as audio only, both in French. The accompanying slides in English are here.
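The common thread of the dictionary learning packages above is an alternating minimization: sparse-code the training signals against the current dictionary, then refit the dictionary, as in MOD or K-SVD. The following Python/NumPy sketch is a hypothetical toy of that loop (1-sparse signals, a MOD-style least-squares update), not a reimplementation of any of the released codes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: each training signal is a scaled copy of one atom
# of a hidden dictionary D_true, so the true codes are exactly 1-sparse.
n, n_atoms, n_signals = 8, 3, 300
D_true = rng.standard_normal((n, n_atoms))
sel = rng.integers(0, n_atoms, n_signals)
Y = D_true[:, sel] * rng.uniform(1, 2, n_signals)

def sparse_code(D, Y):
    """Best 1-sparse approximation of each column of Y in the atoms of D."""
    nrm = np.maximum(np.linalg.norm(D, axis=0), 1e-12)
    corr = (D.T @ Y) / nrm[:, None]              # normalized correlations
    idx = np.argmax(np.abs(corr), axis=0)        # best single atom per signal
    X = np.zeros((D.shape[1], Y.shape[1]))
    cols = np.arange(Y.shape[1])
    X[idx, cols] = corr[idx, cols] / nrm[idx]    # least-squares coefficient
    return X

D = rng.standard_normal((n, n_atoms))            # random initial dictionary
res0 = np.linalg.norm(Y - D @ sparse_code(D, Y))
for _ in range(20):
    X = sparse_code(D, Y)                        # sparse coding step
    D = Y @ np.linalg.pinv(X)                    # dictionary update (MOD-style)
res = np.linalg.norm(Y - D @ sparse_code(D, Y))
print(res <= res0)                               # the fit residual never increases
```

Both steps solve their subproblem exactly, so the residual is non-increasing; real packages replace the 1-sparse coding with OMP or l1 solvers and use more careful atom updates (K-SVD updates atoms one at a time).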
2.3 Data Driven Dictionaries
The next step will almost certainly bring about techniques that find elements within a manifold as opposed to a full set of functions, some sort of Data Driven Dictionary. In this setting, one can list:
- Geometric harmonics (as demo-ed here)
- Diffusion Wavelets ( Matlab code for Diffusion Geometry and Diffusion Wavelets. )
- Treelets. A code in Matlab is available here.
3. Compressive Sensing Measurement Matrices
3.1 Measurement Matrix Admissibility Conditions
3.1.1 The Donoho-Tanner Phase Transition
The phase transition diagram that I call the Donoho-Tanner phase transition is probably the only good means of figuring out if a given measurement / encoding matrix can provide good compressive capabilities. One should also note that the phase transition not only depends on the encoding matrix but also on the recovery algorithm used. The following websites provide an interactive way of determining:
- Bounds for the Restricted Isometry Constants (uses results from Improved bounds on restricted isometry constants for Gaussian matrices, by Bubacarr Bah and Jared Tanner)
- The Donoho-Tanner phase transition in Phase Transitions of the Regular Polytopes and Cone (from Precise Undersampling Theorems, and from Observed universality of phase transitions in high-dimensional geometry, with implications for modern data analysis and signal processing, by David L. Donoho and Jared Tanner)
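For intuition, the kind of numerical experiment behind such a phase transition diagram can be sketched as follows: fix the undersampling ratio delta = m/n, pick a sparsity ratio rho = k/m, and record how often l1 minimization (basis pursuit, posed here as a linear program via SciPy's linprog) recovers a sparse signal exactly. This is an illustrative Python sketch of my own, not the Donoho-Tanner software:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)

def l1_recover(A, y):
    """Basis pursuit (min ||x||_1 s.t. Ax = y) as a linear program, with x = u - v."""
    m, n = A.shape
    c = np.ones(2 * n)
    A_eq = np.hstack([A, -A])
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method='highs')
    return res.x[:n] - res.x[n:]

def success_rate(n, m, k, trials=10):
    """Fraction of k-sparse signals exactly recovered from m Gaussian measurements."""
    ok = 0
    for _ in range(trials):
        A = rng.standard_normal((m, n)) / np.sqrt(m)
        x = np.zeros(n)
        x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
        if np.linalg.norm(l1_recover(A, A @ x) - x) < 1e-6 * max(np.linalg.norm(x), 1):
            ok += 1
    return ok / trials

# At fixed undersampling delta = m/n = 0.5, recovery is essentially certain for
# small rho = k/m and essentially impossible for large rho; the Donoho-Tanner
# curve locates the boundary between the two regimes.
print(success_rate(n=40, m=20, k=2))   # well below the transition: near 1
print(success_rate(n=40, m=20, k=16))  # well above the transition: near 0
```

Sweeping delta and rho over a grid and plotting the success rate reproduces, at small scale, the familiar phase transition picture; note, as stated above, that the observed transition depends on the recovery algorithm used, here plain basis pursuit.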
3.1.2 Other Properties (NSP, RIP,....)
Here is a list of properties that measurement matrices must have in order to enable sparse recovery. Let us note that most of them are sufficient conditions:
- NSP (Null Space Property): as featured by David Donoho in Compressed Sensing and in Compressed sensing and best k-term approximation by Albert Cohen, Wolfgang Dahmen, and Ronald DeVore. Sufficient conditions for the Null Space Property can be computed with the following two approaches (JN and AE):
- NSP: Anatoli Juditsky and Arkadii Nemirovski found a sufficient condition that can be computed in On Verifiable Sufficient Conditions for Sparse Signal Recovery via l_1 Minimization. The code necessary to perform this verification is located at: http://sites.google.com/site/testsensingmatrices/ (more information can be found here)
- NSP: Alexandre d'Aspremont and Laurent El Ghaoui are computing the same constant as JN with SDP in Testing the Nullspace Property using Semidefinite Programming. The NSProp implementation is now available here: http://www.princeton.edu/~aspremon/NSPcode.htm
- SCSP, WCSP, WP: as featured in A Remark on Compressed Sensing by Boris Kashin and Vladimir Temlyakov
- Incoherence: as shown in Sparsity and incoherence in compressive sampling by Emmanuel Candès and Justin Romberg.
- RIP: In Combining Geometry And Combinatorics: A Unified Approach to Sparse Signal Recovery by Radu Berinde, Anna Gilbert, Piotr Indyk, Howard Karloff and Martin Strauss, the authors show the possibility of unifying the different measurement matrices found so far. Hence, the Restricted Isometry Property can be broken into RIP(1) and RIP(2) properties; however, when one talks about RIP, one generally refers to RIP(2). Even though this property is NP-hard to verify, an attempt to provide some bound on the Restricted Isometry Constant can be found in Testing the Nullspace Property using Semidefinite Programming by Alexandre d'Aspremont and Laurent El Ghaoui. Two more recent papers supply bounds for the Gaussian ensemble: Compressed Sensing: How Sharp is the Restricted Isometry Property? by Jeffrey Blanchard, Coralia Cartis, and Jared Tanner, and Improved Bounds on Restricted Isometry Constants for Gaussian Matrices by Bubacarr Bah and Jared Tanner.
- StRIP: As featured by Robert Calderbank, Stephen Howard and Sina Jafarpour in Construction of a Large Class of Deterministic Sensing Matrices that Satisfy a Statistical Isometry Property. This property is computable as mentioned here, but it seems to be restricted to a certain set of encoding matrices.
- SRIP: As featured by Shamgar Gurevich and Ronny Hadani in Incoherent dictionaries and the Statistical Restricted Isometry property.
- GRIP: As featured by Jarvis Haupt and Robert Nowak in this technical report entitled A generalized restricted isometry property (more here)
- RIP_p: Laurent Jacques, David Hammond and M. Jalal Fadili mention it in their paper on Dequantizing CS. The authors used RIP_p to characterize the performance of the BPDQ decoder. RIP_p is simply the "sandwiching" of the Lp norm (with p >= 2) of the random projection Phi x of a sparse signal x between its L2 norm multiplied by the common (1 +/- delta)^(1/2) factors, so that RIP_2 is the common RIP. Similarly, Rick Chartrand and Valentina Staneva, in Restricted isometry properties and nonconvex compressive sensing, also introduced an RIP_p definition, with an existence proof for Gaussian matrices, but for 0 < p <= 1. (see this entry)
- UUP: Terry Tao has a set of properties listed here (this is an early version)
Other NP-Hard properties:
- KGG: Yin Zhang devised a necessary and sufficient property bypassing RIP in On theory of compressive sensing via ell-1-minimization: Simple derivations and extensions. (as mentioned here).
- GNS (Grassmann Null Space): Weiyu Xu and Babak Hassibi found another property in Compressed sensing over the Grassmann manifold: A unified analytical framework (as mentioned here). They define a null-space Grassmannian angle-based analytical framework. Additional information can be found in Compressive Sensing over the Grassmann Manifold: a Unified Geometric Framework by Weiyu Xu and Babak Hassibi, which seems to provide wider properties covering not just sparse signals but also compressible ones.
Additional reading and presentations on these properties:
- Compressed Sensing II, Ronald DeVore
- Ron DeVore's short course ('08)
- Compressed Sensing: Lecture I, Ronald DeVore
- Compressed Sensing: Lecture II, Ronald DeVore
- Compressed Sensing: Lecture III, Ronald DeVore
- Decoding in Compressed Sensing, Albert Cohen, Wolfgang Dahmen, Ronald DeVore, Guergana Petrova, Przemek Wojtaszczek
- Open problem on properties.
- Some stories about Lp-minimization, the Restricted Isometry Property, and excessive pessimism by Remi Gribonval
3.2 Non-Deterministic and Non Adaptive Measurement matrices / Encodings.
The first encoding matrices followed the RIP(2) property, while the most recent sparse encoding matrices follow the RIP(1) property. In the following, the first three measurement matrices are pasted from Terry Tao's site:
RIP(2)
- Random Fourier Ensemble: The signal is a discrete function f on Z/NZ, and the measurements are the Fourier coefficients at a randomly selected set Omega of frequencies of size M ( A is an M x N matrix.)
- Gaussian ensemble: A is an M x N matrix (M x N Gaussian variables).
- Bernoulli ensemble: A is an M x N matrix (M x N Bernoulli variables).
- Fast and Efficient Compressive Sampling With Structurally Random Matrices ( Fast CS using SRM matlab code and presentation is here ). The implementation presented here generalizes most of the other measurement matrices of this paragraph.
- Toeplitz-structured compressed sensing matrices
- Toeplitz Block Matrices
- Noiselets : An implementation can be found here. Some useful comments on Noiselets by Laurent Jacques.
- Random Filters (from here)
- 1-Bit Compressive Sensing [*] Code for 1-bit compressive sensing by Laurent Jacques.
- Fast compressive imaging using scrambled block Hadamard ensemble [*], a simple implementation code is here.
- Random Convolution: as featured in Architectures for Compressive Sampling
- Kronecker Product of Random Matrices: as featured in Enhanced Compressive Sensing and More
- Count-Min matrices as featured in the Sparse Recovery Experiments wiki of Piotr Indyk and Radu Berinde
- LDPC encoding matrices as implemented in Compressive Sensing via Belief Propagation website and code (CSBP) by Dror Baron
- Sparse Measurement Matrix (implementation is mine, but real code available from the authors).
- Almost Euclidean subspaces of ell-1-N via expander codes by Venkatesan Guruswami, James Lee, and Alexander Razborov
- Explicit Constructions for Compressed Sensing of Sparse Signals by Piotr Indyk
- RIP_p is satisfied with very high probability on K-sparse signals by the common random Gaussian matrices (i.e. iid Gaussian matrix entries) of size M x N (M < N) as soon as M > O((K log N/K)^(p/2)) for p >= 2. (see Dequantizing Compressed Sensing with Non-Gaussian Constraints by Laurent Jacques, David Hammond, and M. Jalal Fadili)
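The first three ensembles of this list are easy to instantiate numerically. The following Python sketch (illustrative only; the normalizations are my own choice) builds a Gaussian, a Bernoulli and a random partial Fourier matrix and checks empirically that each roughly preserves the energy of sparse vectors, which is the RIP-style behavior the theory quantifies:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, k = 256, 64, 5

def gaussian_ensemble(m, n):
    # M x N i.i.d. Gaussian entries, scaled so columns have unit norm on average
    return rng.standard_normal((m, n)) / np.sqrt(m)

def bernoulli_ensemble(m, n):
    # M x N i.i.d. +/-1 entries with the same scaling
    return rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)

def random_fourier_ensemble(m, n):
    # a random set Omega of m rows of the unitary DFT matrix
    F = np.fft.fft(np.eye(n)) / np.sqrt(n)
    rows = rng.choice(n, m, replace=False)
    return F[rows] * np.sqrt(n / m)        # rescale so E||Ax||^2 = ||x||^2

for make in (gaussian_ensemble, bernoulli_ensemble, random_fourier_ensemble):
    A = make(m, n)
    ratios = []
    for _ in range(100):
        x = np.zeros(n)
        x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
        ratios.append(np.linalg.norm(A @ x) / np.linalg.norm(x))
    # the energy of k-sparse vectors is roughly preserved (ratios cluster near 1)
    print(make.__name__, round(min(ratios), 2), round(max(ratios), 2))
```

This is only a sanity check on random sparse vectors, not a RIP certificate: the restricted isometry constant is a worst case over all k-sparse supports, which is exactly what makes it NP-hard to verify, as discussed in 3.1.2.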
3.3 Non-Deterministic and Adaptive Measurement codes/matrices as mentioned here (as well as new ones).
- Bayesian Compressive Sensing
- Colored Random Projections for Compressed Sensing
- Compressed sensing image reconstruction via recursive spatially adaptive filtering,
- Adaptive compressed image sensing based on wavelet-trees
- Finding Needles in Noisy Haystacks
- Optimized Projections for Compressed Sensing
3.4 Deterministic Subsampling
3.4.1 Deterministic Subsampling with no prior information
3.4.1.1 Fourier <-> Diracs
- A Deterministic Sub-linear Time Sparse Fourier Algorithm via Non-adaptive Compressed Sensing Methods, by Mark Iwen. A second paper by Mark Iwen and Craig Spencer, entitled Improved Bounds for a Deterministic Sublinear-Time Sparse Fourier Algorithm, shows that the k^2 polylog(n) scaling of the construction found in the first paper is a tight bound.
3.4.1.2 Other pair of bases
- Annihilating filter-based decoding in the compressed sensing framework by Ali Hormati and Martin Vetterli
- A Frame Construction and A Universal Distortion Bound for Sparse Representations by Mehmet Akçakaya and Vahid Tarokh.
- A fast reconstruction algorithm for deterministic compressive sensing using second order Reed-Muller codes by Stephen Howard, Robert Calderbank, and Stephen Searle
- Chirp sensing codes: Deterministic compressed sensing measurements for fast recovery by Lorne Applebaum, Stephen Howard, Stephen Searle and Robert Calderbank.
3.4.2 Deterministic Subsampling using additional prior information (besides sparsity)
Obviously, with the advent of Data Driven Dictionaries, I would expect that we will see domain-specific measurement matrix methods.
4. Sparse Recovery Solvers / Reconstruction Techniques (codes)
[ Please note: the most recent solvers are at the end of this list ]
[ For Authors: If your solver is not listed here, please contact me]
The following is a list of nonlinear solvers used to reconstruct the original signal from its compressed measurements / projections on an incoherent basis (as listed above in paragraph 3). Reconstruction codes span a wide series of techniques that include Matching Pursuit/Greedy, Basis Pursuit/Linear Programming, Bayesian, Iterative Thresholding, and Proximal methods. These links generally go to the Matlab toolbox implementing the technique. Originally, this list was similar to the list found in the excellent Rice Repository.
- l1-Magic by Emmanuel Candes and Justin Romberg
- SparseLab by David Donoho, Iddo Drori, Victoria Stodden, Yaakov Tsaig, Morteza Shahram with major contributions from: Michael Saunders, Joshua Sweetkind-Singer, Michael Elad, Lawrence Carin, Shihao Ji, Ya Xue, David Dunson. (version 2.0)
- GPSR by Mário Figueiredo, Robert D. Nowak, Stephen J. Wright. (version 6.0)
- ell-1 LS: Simple Matlab Solver for ell-1-Regularized Least Squares Problems by Kwangmoo Koh, Seung-Jean Kim, and Stephen Boyd (version beta)
- Sparsify by Thomas Blumensath (version 0.4)
- MPTK: Matching Pursuit Toolkit, with two packages based on MPTK, by Sacha Krstulovic and Remi Gribonval (version 0.5.6)
- Bayesian Compressive Sensing (BCS): also includes TSW-CS (release Jan 19, 2009), implemented via an MCMC approach, and VB-BCS, implementing the basic BCS via a variational Bayesian approach. Implemented by Shihao Ji, Ya Xue, David Dunson, Yuting Qi, Dehong Liu, Lihan He and Lawrence Carin. (Version 0.1)
- SPGL1: A solver for large-scale sparse reconstruction by Michael Friedlander and Ewout van den Berg (Version 1.6)
- FPC by Elaine Hale, Wotao Yin and Yin Zhang (Version 1.0)
- FPC_AS by Zaiwen Wen and Wotao Yin (Version 1.0)
- Chaining Pursuit by Joel Tropp
- Regularized OMP (ROMP) by Deanna Needell and Roman Vershynin
- TwIST by José Bioucas-Dias and Mário Figueiredo.
- SparSA by Stephen Wright, Robert Nowak, Mario Figueiredo. (version 2.0)
- SALSA and C-SALSA by Manya Afonso, Mário Figueiredo, José Bioucas-Dias,
- Bregman Iterative Algorithms for Constrained Compressed Sensing and Related Problems by Wotao Yin, Stanley Osher, Donald Goldfarb and Jerome Darbon. It uses a modification of FPC. You need to request it from the site (more specifically here)
- The Split Bregman Method for L1 Regularized Problems. The associated C++ code is here, a new Matlab/mex version is here. By Tom Goldstein and Stan Osher.
- Applying the Proximal Point Algorithm to a Non Negative Basis Pursuit Denoising model by François Malgouyres and Tieyong Zeng. The software is here . It features Matlab codes, scripts and results (30 MB). The scripts are compatible with the SPARCO toolbox.
- Sparsest solutions of underdetermined linear systems via lq-minimization for 0 < q <= 1. The code is here. By Simon Foucart and Ming-Jun Lai
- Chambolle's algorithm for the resolution of compressed sensing with TV regularization by Gabriel Peyre. This algorithm is included in the larger Toolbox Sparsity - A toolbox for sparse coding and sparse regularization
- Fast Bayesian Matching Pursuit (FBMP) by Justin Ziniel, Lee C. Potter, and Philip Schniter. The code is located here.
- SAMP: Sparsity Adaptive Matching Pursuit for Practical Compressed Sensing : main page and attendant matlab code. By Thong T. Do, Lu Gan, Nam Nguyen, and Trac D. Tran.
- Smoothed L0 (SL0) Algorithm for Sparse Decomposition by G. Hosein Mohimani, Massoud Babaie-Zadeh and Christian Jutten (local version of sl0.m as provided by G. Hosein Mohimani). Now works with complex functions.
- Convex Iteration by Jon Dattorro. It is discussed in his book on Convex Optimization [1], specifically in Chapter 4, pages 306 to 314. The Matlab algorithm is available here.
- Gradient based method for cone programming with application to large-scale compressed sensing by Zhaosong Lu. The attendant Matlab source codes are here.
- Sparse Matching Pursuit (SMP) and Count-Min recovery algorithm as featured in the Sparse Recovery Experiments wiki of Piotr Indyk and Radu Berinde
- Alternating l_1 reconstruction algorithm, presented by Stephane Chretien in Tighter relaxations for the sparsest recovery problem. The page for the code is here.
- Non-negative Underdetermined Iteratively Reweighted Least Squares (NUIRLS) code by Paul O'Grady and Scott Rickard, as described in Compressive Sampling of Non-Negative Signals.
- Compressive Sensing via Belief Propagation website and code (CSBP) by Dror Baron.
- Compressed Signal Reconstruction using the Correntropy Induced Metric by Sohan Seth and Jose C. Principe. Sohan Seth also released the code implemented in the article.
- RecPF (Version 2.0, May 25th, 2009): A MATLAB code for image reconstruction from partial Fourier data that solves models with total-variation and l_1 regularization and an l_2-norm fidelity to fit the available incomplete Fourier data. Developed by Yin Zhang, Junfeng Yang and Wotao Yin.
- FTVd (version 3.0, January 6th, 2009): A MATLAB code for image deblurring and denoising that solves the model with total-variation regularization and l_2-norm fidelity. Developed by Yin Zhang, Junfeng Yang, Yilun Wang and Wotao Yin.
- A Coordinate Gradient Descent Method for l_1-regularized Convex Minimization by Sangwoon Yun and Kim-Chuan Toh: Matlab code of the CGD method for the L1-regularized linear least squares problem, and Matlab code of the CGD method for the L1-regularized logistic regression problem.
- Iteratively Reweighted Lp as implemented in Iteratively Re-weighted Least Squares Minimization for Sparse Recovery by Ingrid Daubechies, Ronald DeVore, Massimo Fornasier, and C. Sinan Güntürk : irwls_main.m,it_min_l2_w_fast.m
- Two Step Reweighted l1 code from Improved Sparse Recovery Thresholds with Two-Step Reweighted $\ell_1$ Minimization by M. Amin Khajehnejad, Weiyu Xu, Salman Avestimehr, and Babak Hassibi.
- Angshul Majumdar sent me a couple of Matlab files for solving lp analysis and synthesis prior algorithms. Both are first-order algorithms, unlike the IRLS. To a large extent they are based on the Majorization-Minimization approach, which I learnt from Selesnick and Figueiredo's paper: LpAnalysis.m, LpSynthesis.m
- Lp_re, Iteratively Reweighted Lp - This is my implementation but a better one might be available from the authors. Implemented from Iteratively Reweighted Algorithms for Compressive Sensing by Rick Chartrand and Wotao Yin
- CSRec_SP.m: Subspace Pursuit original code as provided by one of the authors, Wei Dai. (Subspace Pursuit for Compressive Sensing: Closing the Gap Between Performance and Complexity [*])
- COSAMP: cosamp.m implemented by David Mary. A better version by Bob Sturm can be found here. [as described in CoSaMP: Iterative signal recovery from incomplete and inaccurate samples by Deanna Needell and Joel Tropp] [*]
- S-POCS, Using Non Convex Sparse Signal Space Instead of the Convex l1 Ball Improves POCS Recovery introduced by Laurent Jacques
- Kalman Filter Compressed Sensing page by Namrata Vaswani, Wei Lu, Chenlu Qiu
- ExCoV page by Aleksandar Dogandžić and Kun Qiu.
- Angshul Majumdar has implemented some greedy algorithms for Group Sparsity. Following the works of Eldar et al in
[1] Y. C. Eldar and M. Mishali, "Robust Recovery of Signals From a Union of Subspaces ", arXiv.org 0807.4581, submitted to IEEE Trans. Inform. Theory, July 2008.
[2] Block-Sparsity: Coherence and Efficient Recovery by Yonina C. Eldar, Helmut Bolcskei to appear in ICASSP 2009.
Four algorithms - Block Orthogonal Matching Pursuit (BOMP), and three others, namely:
- GOMP - Group Orthogonal Matching Pursuit (similar to BOMP, except that it decides the group by the average correlation of each group instead of the highest correlation)
- StGOMP - Stagewise GOMP; combining ideas of BOMP with StOMP
- ReGOMP - Regularized GOMP; combining ideas of ROMP and GOMP.
- All these codes are accessible here.
- Angshul Majumdar also implemented the original OLS algorithm proposed in T. Blumensath, M. E. Davies; "On the Difference Between Orthogonal Matching Pursuit and Orthogonal Least Squares", manuscript 2007. That algorithm is implemented in SparseOLS.zip, as are two modified versions: ROLS - Regularized OLS (following ideas similar to ROMP) and StOLS - Stagewise OLS (following ideas similar to StOMP)
- Fast Bayesian Compressive Sensing using Laplace Priors. by S. Derin Babacan, Rafael Molina, and Aggelos Katsaggelos. The source code can be found on the project webpage.
- Non-Iterative Computation of Sparsifiable Solutions to Underdetermined Kronecker Product Linear Systems of Equations by Andrew Yagle. The code is listed in the blog
- YALL1 (version beta-4, March 26, 2009), Your ALgorithms for L1, by Yin Zhang
- NESTA : (version 1.0, June 3, 2009) A Fast and Accurate First-order Method for Sparse Recovery by Jérôme Bobin, Stephen Becker, and Emmanuel Candès.
- The Two Stage l1 Approach to the Compressed Sensing Problem by Stephane Chretien. The code is here (June 10th, 2009)
- Dirk Lorenz just sent me an e-mail:
I just wanted to let you know that I did what I should have done months or years ago: I put m-files for l^p and especially l^1-minimization on my page (see the following three m files)
- iter_thresh.m which implements the usual iterated soft thresholding for different values of p, and also a sophisticated other algorithm which we called iterated hard thresholding (not to be confused with the algorithm proposed by Blumensath and Davies). As presented in Iterated hard shrinkage for minimization problems with sparsity constraints by Kristian Bredies and Dirk A. Lorenz.
- ssn.m which implements the semismooth Newton method for l^1-minimization and which is incredibly fast for the cases in which it converges (unfortunately this is not always the case... convergence is indeed only local) ( As presented in A Semismooth Newton Method for Tikhonov Functionals with Sparsity Constraints by Roland Griesse and Dirk A. Lorenz).
- ppp_l1.m which implements the projection proximal point method for l^1 minimization presented in Numerical Functional Analysis and Optimization).
- Angshul Majumdar also sent me the following:
This is the code for solving problems of the form
min ||x||_m,p s.t. y = Ax
The code is here: lmp_re_ls.m
I have replaced the ||.||_2,p norm with the ||.||_m,p norm for the group sparsity problem. It uses the same Re-weighted Least Squares method but with a different weighting: the jth index of the ith group is weighed by (||x(j)||_m)^(p-m) . |x(i,j)|^(m-2). I derived it using the FOCUSS methodology.
- The BPDQ Toolbox by David Hammond, Laurent Jacques and M. Jalal Fadili. The Basis Pursuit Dequantizers for recovery of sparse signals from quantized random measurements.
- The GraDes algorithm is now available here. It was featured in Gradient Descent with Sparsification: An iterative algorithm for sparse recovery with restricted isometry property by Rahul Garg and Rohit Khandekar.
- proMP.m an implementation of Probabilistic Matching Pursuit for Compressive Sensing by Atul Divekar and Okan Ersoy (to be used with pbp.m )
- SLEP (Sparse Learning with Efficient Projections) package by Jun Liu and Jieping Ye provides a set of programs for sparse learning:
- ℓ1-regularized (constrained) sparse learning
- ℓ1/ℓq-regularized sparse learning (q>1)
- Fused Lasso
- Sparse inverse covariance estimation
- Trace norm regularized learning
- Simon Foucart released the Hard Thresholding Pursuit (HTP) code featured in Hard thresholding pursuit: an algorithm for Compressive Sensing. The code is here.
- Model-based Compressive Sensing Toolbox: Richard G. Baraniuk, Volkan Cevher, Marco F. Duarte, Chinmay Hegde (as featured in Model-Based Compressive Sensing)
- RFSS.m is the code implemented in Elastic-Net Regularization: Error Estimates and Active Set Methods by Bangti Jin, Dirk A. Lorenz, and Stefan Schiffler.
- Threshold-ISD by Yilun Wang and Wotao Yin
- PD-Sparse: Penalty Decomposition Methods for $l_0$-Norm Minimization by Zhaosong Lu and Yong Zhang.
- TFOCS (Templates for First-Order Conic Solvers) by Stephen Becker, Emmanuel J. Candes and Michael Grant.
- Cyclic Matching Pursuit by Bob Sturm
- Compressed Sensing Image Reconstruction via Recursive Spatially Adaptive Filtering by Aram Danielyan, Karen Egiazarian, Alessandro Foi, and Vladimir Katkovnik
- ALPS: accelerated IHT methods for sparse recovery by Volkan Cevher and Anastasios Kyrillidis
- Sparse Bayesian learning exploiting temporal correlation: T-SBL/T-MSBL (MMV algorithms) by Zhilin Zhang and Bhaskar D. Rao.
- tMFOCUSS [temporal M-FOCUSS, which exploits temporal correlation] by Zhilin Zhang and Bhaskar D. Rao.
- ARSBL [AR model based sparse Bayesian learning] by Zhilin Zhang and Bhaskar D. Rao.
- Binary Iterative Hard Thresholding (BIHT) Demo by Laurent Jacques, Jason Laska, Petros Boufounos, and Richard Baraniuk
- Approximate Message Passing (AMP) Code for Compressive Sensing by Ulugbek Kamilov
- The Generalized Approximate Message Passing (GAMP) MATLAB package by Sundeep Rangan, Alyson Fletcher, Vivek Goyal, Ulugbek Kamilov, Jason Parker, Phil Schniter.
- Compressive Imaging using Turbo AMP by Subhojit Som and Phil Schniter,
- Spectral compressive sensing toolbox by Marco Duarte and Richard Baraniuk
- Recursive Projected Compressive Sensing by Chenlu Qiu, Namrata Vaswani
- Distributed Basis Pursuit (BPinSN): From Distributed Basis Pursuit by João F. C. Mota, João M. F. Xavier, Pedro M. Q. Aguiar, and Markus Püschel.
- RecPC by Wotao Yin , Simon Morgan (Los Alamos National Lab), Junfeng Yang, Yin Zhang
- CLASH: The algorithm is presented in "Combinatorial Selection and Least Absolute Shrinkage via the CLASH Algorithm", Technical Report, by Anastasios Kyrillidis and Volkan Cevher (corresponding ICML 2011 Structured Sparsity Workshop slides can be found here).
- TurboGAMP: joint channel-estimation/equalization/decoding of BICM-OFDM signals transmitted over channels with clustered-sparse impulse responses by Phil Schniter.
- ISAL1: Infeasible-Point Subgradient Algorithm for L1-Minimization by Dirk Lorenz, Marc Pfetsch and Andreas Tillmann (SPEAR Project page)
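To fix ideas on the greedy family that opens this list, here is a minimal Orthogonal Matching Pursuit sketch in Python/NumPy (a generic textbook version of my own, not any of the packages above): pick the column most correlated with the residual, then least-squares refit on the support found so far:

```python
import numpy as np

def omp(A, y, k, tol=1e-10):
    """Orthogonal Matching Pursuit: greedily pick the column of A most
    correlated with the residual, then re-fit y on the chosen support."""
    m, n = A.shape
    support = []
    x = np.zeros(n)
    residual = y.astype(float)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # best matching atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(n)
        x[support] = coef                            # refit on the support
        residual = y - A @ x
        if np.linalg.norm(residual) < tol:
            break
    return x

# Easy toy regime: 3-sparse signal with well-separated coefficient magnitudes,
# measured by a Gaussian matrix with many more measurements than nonzeros.
rng = np.random.default_rng(3)
n, m, k = 128, 50, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = [3.0, -2.0, 1.5]
x_hat = omp(A, A @ x_true, k)
print(np.allclose(x_hat, x_true, atol=1e-6))  # should recover x_true exactly here
```

Many of the entries above (ROMP, StOMP, CoSaMP, Subspace Pursuit, the group and block variants) refine exactly this loop: they change how many atoms are picked per iteration, how the support is pruned, or how groups of atoms are scored.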
5. Hardware / Compressive Sensing Sensor implementations
Most of these entries are summarized in the Compressive Sensing Sensor / Hardware page; you should go to that page for updated information. One should distinguish between hardware that requires no modification, only different operating procedures, and totally new hardware. Hardware implementations vary according to what the hardware can already do. In the case of MRI, one samples directly in Fourier space, and the acquisition is therefore directly amenable to subsampling in the Fourier world. Other instruments do not sample in Fourier space, and one is led to think of inventive schemes for doing so. The most salient feature of the random or noiselet measurements is that one can foresee a hardware implementation without having to know the sparsifying basis, since there is a high probability that the two will be incoherent with each other.