I just wrote some Matlab code with basic CS examples, using L1-Magic and CVX. I also wrote a document describing the code. They are both available at
http://control.mines.edu/mediawiki/index.php/CS_sample_code
I think they might be useful for people new to CS, so it would be great if you could mention this on your blog.
The four examples are described below:
Matlab Files Description
- ex1.m: Sensing matrix phi is random, the representation basis Psi is the canonical basis (the signal is sparse in the time domain). The recovery is done using L1-magic.
- ex2.m: Same signal and measurements as in ex1.m, but the recovery is done using CVX (a minimal sketch of this setup appears after the list).
- ex3.m: Same signal as in ex2.m, but the measurements are taken in the Fourier basis.
- ex4.m: Sensing matrix phi is random, the representation basis Psi is the Fourier basis (the signal is sparse in the frequency domain). The recovery is done using CVX.
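For readers who want to see the flavor of these examples without downloading the files, here is a minimal sketch in the spirit of ex2.m. This is not Alejandro's actual code; the signal length, sparsity level, and number of measurements are assumed purely for illustration.

% Minimal sketch in the spirit of ex2.m (not the actual file):
% recover a time-sparse signal from random measurements via CVX.
n = 256;                              % signal length (assumed)
k = 10;                               % number of nonzeros (assumed)
m = 80;                               % number of measurements (assumed)

x0 = zeros(n,1);                      % k-sparse signal in the time domain
idx = randperm(n);
x0(idx(1:k)) = randn(k,1);

Phi = randn(m,n) / sqrt(m);           % random sensing matrix
y = Phi * x0;                         % noiseless measurements

cvx_begin
    variable x(n)
    minimize( norm(x,1) )             % basis pursuit: min ||x||_1
    subject to
        Phi * x == y;                 % consistency with the measurements
cvx_end

fprintf('relative recovery error: %.2e\n', norm(x - x0)/norm(x0));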
Thank you, Alejandro!
It looks like we have a contender to SPGL1 in the shape of C-SALSA, as witnessed in this paper: A fast algorithm for the constrained formulation of compressive image reconstruction and other linear inverse problems by Manya V. Afonso, J. Bioucas-Dias, and Mario Figueiredo. The abstract reads:
Ill-posed linear inverse problems (ILIP), such as restoration and reconstruction, are a core topic of signal/image processing. A standard approach to deal with ILIP uses a constrained optimization problem, where a regularization function is minimized under the constraint that the solution explains the observations sufficiently well. The regularizer and constraint are usually convex; however, several particular features of these problems (huge dimensionality, non-smoothness) preclude the use of off-the-shelf optimization tools and have stimulated much research. In this paper, we propose a new efficient algorithm to handle one class of constrained problems (known as basis pursuit denoising) tailored to image recovery applications. The proposed algorithm, which belongs to the category of augmented Lagrangian methods, can be used to deal with a variety of imaging ILIP, including deconvolution and reconstruction from compressive observations (such as MRI). Experiments testify for the effectiveness of the proposed method.
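For concreteness, the constrained formulation the paper refers to is min ||x||_1 subject to ||Ax - y||_2 <= epsilon. The sketch below sets up that problem and solves it with CVX purely as a stand-in; this is NOT the C-SALSA algorithm itself, and all sizes and the noise bound are assumed for illustration.

% The constrained basis pursuit denoising problem that C-SALSA targets,
%     min ||x||_1  s.t.  ||A x - y||_2 <= epsilon,
% solved here with CVX as a stand-in solver (NOT the C-SALSA algorithm).
n = 128; m = 50;                       % assumed sizes, for illustration
x0 = zeros(n,1);
idx = randperm(n);
x0(idx(1:5)) = randn(5,1);             % 5-sparse ground truth (assumed)
A = randn(m,n) / sqrt(m);
y = A*x0 + 0.01*randn(m,1);            % noisy compressive observations
epsilon = 0.02;                        % assumed bound on the noise level

cvx_begin
    variable x(n)
    minimize( norm(x,1) )              % sparsity-promoting objective
    subject to
        norm(A*x - y, 2) <= epsilon;   % explain the observations well
cvx_end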
I also found the following on the interwebs:
Distributed Sensing of Signals under a Sparse Filtering Model, by Ali Hormati, Olivier Roy, Yue M. Lu and Martin Vetterli. The abstract reads:
We consider the task of recovering correlated vectors at a central decoder based on fixed linear measurements obtained by distributed sensors. Two different scenarios are considered: In the case of universal reconstruction, we look for a sensing and recovery mechanism that works for all possible signals, whereas in the case of almost sure reconstruction, we allow for a small set (with measure zero) of unrecoverable signals. We provide achievability bounds on the number of samples needed for both scenarios. The bounds show that only in the almost sure setup can we effectively exploit the signal correlations to achieve effective gains in sampling efficiency. In addition, we propose an efficient and robust distributed sensing and reconstruction algorithm based on annihilating filters.
Dictionary learning and sparse coding for unsupervised clustering by Pablo Sprechmann and Guillermo Sapiro. The abstract reads:
A clustering framework within the sparse modeling and dictionary learning setting is introduced in this work. Instead of searching for the set of centroids that best fits the data, as in k-means-type approaches that model the data as distributions around discrete points, we optimize for a set of dictionaries, one for each cluster, for which the signals are best reconstructed in a sparse coding manner. Thereby, we are modeling the data as a union of learned low-dimensional subspaces, and data points associated with subspaces spanned by just a few atoms of the same learned dictionary are clustered together. Using learned dictionaries makes this method robust and well suited to handle large datasets. The proposed clustering algorithm uses a novel measurement for the quality of the sparse representation, inspired by the robustness of the l_1 regularization term in sparse coding. We first illustrate this measurement with examples on standard image and speech datasets in the supervised classification setting, showing with a simple approach its discriminative power and obtaining results comparable to the state of the art. We then conclude with experiments for fully unsupervised clustering on extended standard datasets and texture images, obtaining excellent performance.
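To make the clustering idea concrete, here is a hedged Matlab sketch of the assignment step: each signal goes to the dictionary under which its sparse-coding cost is lowest. The function name, the use of CVX as the sparse coder, and the plain lasso objective as the quality score are all illustrative simplifications; the paper's actual quality measurement differs.

% Sketch of the cluster-assignment idea (illustrative, not the paper's
% algorithm): assign each column of X to the dictionary in the cell
% array D under which it admits the cheapest sparse representation.
% lambda is a hypothetical regularization weight; the score is the
% plain lasso objective, a simplification of the paper's measurement.
function labels = assign_clusters(X, D, lambda)
    K = numel(D);
    N = size(X, 2);
    scores = zeros(K, N);
    for k = 1:K
        p = size(D{k}, 2);               % number of atoms in dictionary k
        for i = 1:N
            xi = X(:, i);
            cvx_begin quiet
                variable a(p)
                minimize( 0.5*sum_square(D{k}*a - xi) + lambda*norm(a,1) )
            cvx_end
            scores(k, i) = cvx_optval;   % sparse-coding cost under D{k}
        end
    end
    [~, labels] = min(scores, [], 1);    % best-fitting dictionary wins
end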
I also found some news from the Rice Compressive Sensing repository:
Asymptotic Analysis of MAP Estimation via the Replica Method and Applications to Compressed Sensing by Sundeep Rangan, Alyson K. Fletcher, Vivek K Goyal. The abstract reads:
The replica method is a non-rigorous but widely-accepted technique from statistical physics used in the asymptotic analysis of large, random, nonlinear problems. This paper applies the replica method to non-Gaussian maximum a posteriori (MAP) estimation. It is shown that with random linear measurements and Gaussian noise, the asymptotic behavior of the MAP estimate of an n-dimensional vector decouples as n scalar MAP estimators. The result is a counterpart to Guo and Verdu's replica analysis of minimum mean-squared error estimation.
The replica MAP analysis can be readily applied to many estimators used in compressed sensing, including basis pursuit, lasso, linear estimation with thresholding, and zero norm-regularized estimation. In the case of lasso estimation the scalar estimator reduces to a soft-thresholding operator, and for zero norm-regularized estimation it reduces to a hard threshold. Among other benefits, the replica method provides a computationally-tractable method for exactly computing various performance metrics including mean-squared error and sparsity pattern recovery probability.
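The two scalar operators mentioned above are the standard ones; in Matlab they are one-liners:

% Standard scalar operators the replica analysis reduces to:
% soft thresholding (lasso) and hard thresholding (zero-norm regularization).
soft = @(u, t) sign(u) .* max(abs(u) - t, 0);   % shrink toward zero by t
hard = @(u, t) u .* (abs(u) > t);               % keep only entries above t

u = [-3 -0.5 0 0.8 2];
disp(soft(u, 1));   % -> [-2  0  0  0  1]
disp(hard(u, 1));   % -> [-3  0  0  0  2]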
and a software package: SLEP: A Sparse Learning Package.