Thursday, March 29, 2012

EM-GM-AMP: An Algorithm for Sparse Reconstruction (implementation)

"who has the patience to tweak yet another CS algorithm!" 

is one of the motivations behind the new algorithm devised by Jeremy Vila and Phil Schniter, as featured in Expectation-Maximization Gaussian-Mixture Approximate Message Passing. The abstract reads:
When recovering a sparse signal from noisy compressive linear measurements, the distribution of the signal’s non-zero coefficients can have a profound effect on recovery mean-squared error (MSE). If this distribution was a priori known, one could use efficient approximate message passing (AMP) techniques for nearly minimum MSE (MMSE) recovery. In practice, though, the distribution is unknown, motivating the use of robust algorithms like Lasso—which is nearly minimax optimal—at the cost of significantly larger MSE for non-least-favorable distributions. As an alternative, we propose an empirical-Bayesian technique that simultaneously learns the signal distribution while MMSE-recovering the signal—according to the learned distribution—using AMP. In particular, we model the non-zero distribution as a Gaussian mixture, and learn its parameters through expectation maximization, using AMP to implement the expectation step. Numerical experiments confirm the state-of-the-art performance of our approach on a range of signal classes.




From the web page:

EM-GM-AMP: An Algorithm for Sparse Reconstruction

Expectation-Maximization Gaussian-Mixture Approximate Message Passing (EM-GM-AMP) is an algorithm designed to recover a signal x from (possibly) noisy measurements y = Ax + w. One particularly interesting case is when the measurements are undersampled (i.e., M < N). With sufficient signal sparsity and a well-conditioned mixing matrix, signal recovery is possible.
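As a concrete illustration of that setup (my own MATLAB sketch, not part of the EM-GM-AMP package), the following lines build an undersampled problem of this form with a Bernoulli-Gaussian signal and an i.i.d. Gaussian mixing matrix:

N = 1000;                     % signal length
M = 400;                      % number of measurements (M < N)
K = 50;                       % number of non-zero coefficients
x = zeros(N,1);
idx = randperm(N);
x(idx(1:K)) = randn(K,1);     % Bernoulli-Gaussian signal: K random Gaussian non-zeros
A = randn(M,N)/sqrt(M);       % i.i.d. Gaussian mixing matrix (well-conditioned with high probability)
w = sqrt(1e-3)*randn(M,1);    % additive white Gaussian noise
y = A*x + w;                  % noisy, undersampled measurements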
EM-GM-AMP assumes a sparsity-promoting i.i.d. Gaussian-mixture prior, p(x) = (1 - lambda) delta(x) + lambda sum_l omega_l normal(x; theta_l, phi_l), and an i.i.d. additive Gaussian noise prior, p(w) = normal(w; 0, psi). It then uses the Generalized Approximate Message Passing (GAMP) algorithm to evaluate the means (MMSE estimates) and variances of the posterior p(x|y). After running GM-AMP, we then find the Expectation-Maximization (EM) updates of the parameters lambda, omega, theta, phi and psi, and repeat GM-AMP until convergence. (See publications below for details.)
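To make the assumed signal prior concrete, here is a small MATLAB sketch (mine, with illustrative parameter values, not taken from the package) that evaluates the Gaussian-mixture part of p(x); the point mass (1 - lambda) delta(x) sits at x = 0 and is handled separately:

lambda = 0.1;                              % sparsity rate (fraction of non-zero coefficients)
omega  = [0.5; 0.5];                       % GM weights (sum to 1)
theta  = [-1; 1];                          % GM means
phi    = [0.2; 0.2];                       % GM variances
xgrid  = linspace(-3, 3, 601)';            % points at which to evaluate the continuous part
pdfGM  = zeros(size(xgrid));
for l = 1:numel(omega)                     % sum of Gaussian components
    pdfGM = pdfGM + omega(l) * exp(-(xgrid - theta(l)).^2 / (2*phi(l))) ...
                             / sqrt(2*pi*phi(l));
end
pX = lambda * pdfGM;                       % continuous part of p(x); the spike at zero has weight (1 - lambda)

EM-GM-AMP's job is precisely to learn lambda, omega, theta, phi (and the noise variance psi) from the data rather than fixing them as above.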
The EM-GM-AMP MATLAB function attempts to recover a signal from measurements generated by the above linear model. In addition, EM-GM-AMP has some optional inputs to improve computation time and accuracy. Currently, EM-GM-AMP supports only real signals. Examples of the function's usage can be seen in the Examples section.
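For orientation, a basic usage sketch might look as follows; the function name EMGMAMP and the one-argument-pair call are my assumption about the interface, so consult the package's Examples section for the actual name, arguments, and options. With x, y, and A generated as in the earlier snippet:

xhat = EMGMAMP(y, A);                        % assumed basic call: estimate x from y = A*x + w
nmse = norm(xhat - x)^2 / norm(x)^2;         % normalized MSE of the reconstruction
fprintf('NMSE = %.2f dB\n', 10*log10(nmse));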
EM-GM-AMP is a more "flexible" version of EM-Bernoulli-Gaussian AMP. It allows the GM prior to learn a broader class of signal distributions, such as Bernoulli-Rademacher signals. In this fashion, we aim to leverage the empirical pdf of an arbitrary signal to aid in its recovery.
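For reference, a Bernoulli-Rademacher signal (non-zero entries equal to +1 or -1 with equal probability) can be generated as below; this is again my own sketch, reusing N and K from the earlier snippet:

x_br = zeros(N,1);
idx  = randperm(N);
x_br(idx(1:K)) = sign(randn(K,1));   % K random +/-1 entries; the rest are exactly zero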

The implementation is available here and is listed on the compressive sensing big picture page.


Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.

