Wednesday, June 25, 2014

Randomized Comments: Compressive Sensing Ten Year Anniversary, A Comment; A Review, Beyond gaussians with AMP, Searching Nuit Blanche with two labels, The Golosino seminar

I asked Emmanuel Candes about the presentations that, for the first time, introduced some people to compressive sensing and its attendant phase transitions. Here is what he responded:

If you go here, you will see three talks with phase transition diagrams. The first two took place on May 3 and May 28, 2004, and the third on July 14, 2004.
Thanks Emmanuel! So it's been ten years, huh...

Phil Schniter just sent me the following:

Hi Igor,
I saw the "Cal-AMP" paper and wanted to mention that the approach discussed there (where GAMP is used with a known linear transform but output channels that depend on additional unknown parameters) has already been proposed and applied successfully in a number of publications over the last 3-4 years. Here are just a few of the related papers:
One paper is, which considers joint channel estimation and symbol decoding in AWGN-corrupted OFDM. If you look at the factor graph in Figure 4 of that paper, you can see that the goal is to jointly estimate the finite-alphabet symbols s_n and the channel coefficients x_n. The x_n affect the outputs y_n through a dense linear transform (as is standard for AMP), but s_n acts as an unknown multiplicative gain on the n'th linear-transform output (as in Cal-AMP). This latter work goes beyond Cal-AMP by considering the case of non-independent s_n. (See also the related papers and Matlab code at
Another paper is, which generalizes the above OFDM work from AWGN to AWGN-plus-impulse corruption, using Gaussian mixture models for the impulses. In other words, it uses an output channel of the form p(y_n|z_n,s_n,i_n), where the vector z is a linear transform of the unknown vector x, s_n is an unknown multiplicative gain on the transform output, and i_n is additional additive interference beyond the AWGN. Moreover, the i_n can come in bursts across the index n, controlled by a Markov chain, and the s_n can be probabilistically coupled through an error-control code.
Another paper is, where the outputs are binary. Section IV.D of that work considers the case where an unknown subset of outputs are corrupted (i.e., their bits are flipped).
Phil Schniter
The Ohio State University
Department of Electrical and Computer Engineering
616 Dreese Lab, 2015 Neil Ave, Columbus, OH 43210

Thanks Phil  !
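To make the setup concrete, here is a minimal simulation of the output-channel model Phil describes: a dense linear transform z = A x whose outputs are hit by unknown per-output gains s_n plus AWGN. The sizes, gain alphabet, and noise level are all made up for illustration and do not come from any of the papers above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes, for illustration only
n, N = 64, 128
A = rng.standard_normal((n, N)) / np.sqrt(n)  # dense linear transform (standard AMP setting)
x = rng.standard_normal(N)                    # unknown coefficients to estimate
s = rng.choice([-1.0, 1.0], size=n)           # unknown finite-alphabet gains (BPSK-like)

z = A @ x                                     # linear-transform outputs
y = s * z + 0.01 * rng.standard_normal(n)     # observed: gain-corrupted outputs plus AWGN

# A GAMP-style output channel then scores each observation via
# p(y_n | z_n) = sum over s of p(s) * Normal(y_n; s * z_n, sigma^2),
# i.e. it marginalizes the unknown gain inside a scalar channel.
```

The key point is that the unknown gain never touches the linear transform itself; it lives entirely inside the scalar output channel, which is what lets GAMP handle it.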

Hi Igor,
Really glad you enjoyed the open review process. I don't know if you've seen it yet, but Publons has just uploaded all of our reviews, providing further discoverability, transparency, and credit for all the hard work you guys have done. This is your review here: Thanks so much again for your help with this!
Thanks Scott !

On the subreddit, I was recently asked whether an implementation could do well on images. Underneath this question is the reality that while AMP-based reconstruction solvers do well in specific cases (Gaussian measurement matrices), they just do not deliver in most other settings, as summarized very elegantly in this presentation slide by Sundeep Rangan (excerpted from Approximate Message Passing: Can it Work?).
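For readers new to the subject, here is a toy sketch of the plain AMP iteration (soft thresholding plus the Onsager correction) in the favorable i.i.d. Gaussian setting. This is a minimal illustration, not any particular author's implementation; the sizes and iteration count are arbitrary, and damping is one of the common fixes proposed when the matrix departs from this setting.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem (hypothetical sizes): y = A x with x k-sparse and
# A i.i.d. Gaussian -- the setting where plain AMP is known to work.
n, N, k = 250, 500, 20
A = rng.standard_normal((n, N)) / np.sqrt(n)
x_true = np.zeros(N)
x_true[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

def soft(u, t):
    """Soft-thresholding denoiser."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

x, z = np.zeros(N), y.copy()
damp = 1.0  # damping (< 1) is one common fix when A is far from i.i.d. Gaussian
for _ in range(50):
    tau = np.linalg.norm(z) / np.sqrt(n)      # empirical noise-level estimate
    x_new = soft(x + A.T @ z, tau)
    # Onsager correction: (nonzero count / n) times the previous residual
    z_new = y - A @ x_new + z * np.count_nonzero(x_new) / n
    x = damp * x_new + (1 - damp) * x
    z = damp * z_new + (1 - damp) * z

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative error: {rel_err:.3e}")
```

Drop the Onsager term and this degenerates into plain iterative thresholding, which is exactly where the nice phase transitions disappear.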

I have no clue beforehand whether an AMP solver will work with a specific measurement matrix or with certain types of data (compressible as opposed to strictly sparse). Here is what I know: when implementations are made available here, aside from the authors, you might be the first person on Earth to try them out on your own case of interest. It could be a certain type of images or some uncertainty quantification examples. There are no ropes, no safety lines; you are on your own when it comes to discovering the real capabilities of the algorithm submitted by its authors. If it doesn't work well on your dataset, I am sure the authors would want to hear about your exploration of that phase space. As a reminder, here is a recent sample of implementations that aim at improving the AMP algorithm's convergence with a wider variety of measurement matrices:

In other news, the Golosino seminar organized by Florent Krzakala and Lenka Zdeborova had a slew of interesting presentations recently:

You can search through this blog with more than one label. For instance, to search for blog entries featuring an implementation with a compressive sensing theme, or an implementation dealing with matrix factorization, you need to query either:


respectively. You could even add a third label (I got this tip from this blog). Here are some of the labels I use often:

Finally, my name is Igor Carron, not Nuit Blanche, not Igor Caron (or, more recently, here), nor Igor Carren. No biggie, but I thought you'd want to know.

Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.
