Following up on his recent arXiv preprint (An Empirical-Bayes Approach to Recovering Linearly Constrained Non-Negative Sparse Signals), Phil Schniter just sent me the following:

Hi Igor,

My grad student Jeremy Vila created a nice website that summarizes our non-negative (NN) AMP work (which we call EM-NN-AMP) and presents concise Matlab examples of how to use it. Hopefully your readers will find it useful. The link is

In particular, the examples presented there are:

1) Recovering a non-negative signal in noise. This also shows how to toggle between our three proposed algorithms: NNLS-AMP, EM-NNL-AMP, and EM-NNGM-AMP.
2) Recovering linearly constrained non-negative sparse signals in noise.
3) Recovering a NN satellite image from compressive linear (fast Hadamard) measurements.
4) Robustly recovering signals in the presence of outliers.

Cheers,
Phil

--Phil Schniter

Thanks Phil and Jeremy!

From the page:

*Advantages of EM-NN-AMP*

We highlight some of EM-NN-AMP's advantages below:

- EM-NNL-AMP removes the need to hand-tune the regularization parameter for the NN LASSO problem.
- State-of-the-art noiseless phase transitions of simplex-obeying sparse Dirichlet signals using EM-NNGM-AMP.
- State-of-the-art noisy recovery of a NN image for all undersampling ratios using EM-NNGM-AMP.
- Excellent performance on the portfolio optimization problem using the return-adjusted Markowitz mean-variance framework. When coupled with the additive white Laplacian noise model, performance improves further.
- Very good complexity scaling with problem dimensions and can leverage fast operators such as FFTs.
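To make the first advantage above concrete, here is a minimal sketch of the NN LASSO problem that EM-NNL-AMP tunes automatically: minimize 0.5·||y - Ax||² + λ·sum(x) subject to x ≥ 0. The solver below is plain projected ISTA, not the authors' AMP algorithm, and all problem sizes and the λ value are illustrative assumptions; it only shows why hand-picking λ is the pain point that EM-NNL-AMP removes.

```python
# Hypothetical sketch of the NN LASSO problem (not the authors' EM-NNL-AMP code):
#   min_x 0.5*||y - A x||^2 + lam*sum(x)   subject to   x >= 0,
# solved with simple projected ISTA. Note lam must be chosen by hand here.
import numpy as np

def nn_lasso_ista(A, y, lam, n_iter=500):
    """Projected ISTA for the non-negative LASSO (illustrative only)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1/L, L = Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                    # gradient of the quadratic term
        x = np.maximum(x - step * (grad + lam), 0)  # shrink, then clip at zero
    return x

# Synthetic sparse non-negative signal and Gaussian measurements (assumed sizes).
rng = np.random.default_rng(0)
n, k, m = 100, 5, 40
x0 = np.zeros(n)
x0[rng.choice(n, size=k, replace=False)] = rng.uniform(1.0, 2.0, size=k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x0 + 0.01 * rng.standard_normal(m)

x_hat = nn_lasso_ista(A, y, lam=0.02)               # lam hand-tuned: the step EM removes
rel_err = np.linalg.norm(x_hat - x0) / np.linalg.norm(x0)
print(rel_err)
```

The recovered `x_hat` is non-negative by construction, but its quality depends on the hand-picked `lam`; the EM step in EM-NNL-AMP learns this parameter from the data instead.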

Three tiny notes:

- the reason the phase transitions are even better than the traditional Donoho-Tanner phase transition probably lies in the additional structure imposed on the signal (positivity) and the nice structure of AMP. We know that additional signal structure, such as structured sparsity, will push that phase transition even further.
- We have known for quite some time that some solvers enforce positivity by default but never knew why. This implementation might shed some light there, but mathematically there is something deeper at play. And really, the answer to that question is yes: yes, we expect the mathematicians to clear the waters there. To draw a parallel with previous discoveries: while NMF is synonymous with matrix factorization in the research world (even though the matrix factorization jungle is much richer), the standard algorithm for NMF was uncovered in 2000 but remained without a sound theoretical basis until recently. That did not stop scores of applied works from using it. Let us hope that EM-NN-AMP gets a similar push to a wider audience.
- EM-NN-AMP will be added shortly to the CS reconstruction algorithm section of the Big Picture in Compressive Sensing.

**Join the CompressiveSensing subreddit or the Google+ Community and post there!**

Liked this entry? Subscribe to Nuit Blanche's feed; there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle, and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.
