I asked Emmanuel Candes about the presentations which, for the first time, introduced some people to compressive sensing and its attendant phase transitions. Here is what he responded:
Igor, If you go here: http://statweb.stanford.edu/~candes/OldTalks.html, you will see three talks with phase transition diagrams. The first two have taken place on May 3 and May 28, 2004. The third on July 14, 2004. Thanks, Emmanuel
Thanks Emmanuel! So it's been ten years, huh...
Phil Schniter just sent me the following:
Hi Igor,
I saw the "Cal-AMP" paper and wanted to mention that the approach discussed there (where GAMP is used with a known linear transform but output channels that depend on additional unknown parameters) has already been proposed and applied successfully in a number of publications over the last 3-4 years. Here are just a few of the related papers:
One paper is http://www2.ece.ohio-state.edu/~schniter/pdf/jstsp11_ofdm.pdf, which considers joint channel-estimation and symbol decoding in AWGN-corrupted OFDM. If you look at the factor graph in Figure 4 of that paper, you can see that the goal is to jointly estimate the finite-alphabet symbols s_n and the channel coefficients x_n. The x_n affect the outputs y_n through a dense linear transform (as is standard for AMP), but s_n acts as an unknown multiplicative gain on the n'th linear-transform output (as in Cal-AMP). This latter work goes beyond Cal-AMP by considering the case of non-independent s_n. (See also the related papers and MATLAB code at http://www2.ece.ohio-state.edu/~schniter/turboGAMPdecoding/index.html)
Another paper is http://www2.ece.ohio-state.edu/~schniter/pdf/tsp14_impulse.pdf, which generalizes the above OFDM work from AWGN to AWGN-and-impulse corruption, using Gaussian mixture models for the impulses. In other words, it uses an output channel of the form p(y_n|z_n,s_n,i_n), where the vector z is a linear transform of the unknown vector x, where s_n is an unknown multiplicative gain on the transform output, and where i_n is additional additive interference beyond the AWGN. Moreover, the i_n can come in bursts across the index n, as controlled by a Markov chain, and the s_n can be probabilistically coupled through an error-control code.
Another paper is http://arxiv.org/pdf/1401.0872v1.pdf, where the outputs are binary. Section IV.D of that work considers the case where an unknown subset of outputs are corrupted (i.e., their bits are flipped).
Thanks,
Phil
--
Phil Schniter
Professor
The Ohio State University
Department of Electrical and Computer Engineering
616 Dreese Lab, 2015 Neil Ave, Columbus, OH 43210
http://www.ece.osu.edu/~schniter
Thanks Phil !
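As an aside for readers new to these generalized output channels, the setting Phil describes (a linear transform z = A x whose outputs pass through an unknown multiplicative gain s_n, plus impulsive interference i_n and AWGN) can be sketched generatively as below. This is only my illustrative sketch of the channel model; all dimensions, sparsity levels, and noise variances are made-up values, not those used in the papers.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 256, 128          # signal and measurement dimensions (illustrative)
k = 16                   # sparsity of the unknown vector x

# Sparse unknown x and dense Gaussian measurement matrix A (standard AMP setting)
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
z = A @ x                # linear-transform outputs z_n

# Output channel p(y_n | z_n, s_n, i_n):
s = rng.choice([-1.0, 1.0], size=m)               # unknown multiplicative gains s_n (e.g. symbols)
impulse_mask = rng.random(m) < 0.05               # a small fraction of outputs hit by impulses
i = impulse_mask * 10.0 * rng.standard_normal(m)  # large-variance additive impulses i_n
w = 0.01 * rng.standard_normal(m)                 # small AWGN
y = s * z + i + w                                 # observed measurements
```

The inference problem in those papers is then to recover x (and the s_n, i_n) from y and A alone.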
Scott from GigaScience commented on this blog entry entitled '... There will be a "before" and "after" this paper ...':
Hi Igor,
Really glad you enjoyed the open review process. I don't know if you've seen yet, but Publons has just uploaded all of our reviews, providing further discoverability, transparency, and credit for all the hard work you guys have done. This is your review here: https://publons.com/review/3878/. Thanks so much again for your help with this!
Thanks Scott !
On the SubReddit group, I was recently asked whether an implementation could do well on images. Underneath this question is the reality that while AMP-based reconstruction solvers do well in specific cases (Gaussian measurement matrices), they just do not deliver in most other settings, as summarized very elegantly in this presentation slide by Sundeep Rangan (excerpted from Approximate Message Passing: Can it Work?).
I have no clue beforehand whether an AMP solver will work with a specific measurement matrix or with certain types of data (compressible as opposed to strictly sparse). Here is what I know: when implementations are made available here, aside from the authors, you might be the first person on Earth to try them out on your own case of interest. It could be a certain type of image or some uncertainty quantification examples. There are no ropes, no safety lines; you are on your own when it comes to discovering the real capabilities of the algorithm that has been submitted by the author. If it doesn't work well on your dataset, I am sure the authors would want to hear about your exploration of that phase space. As a reminder, here is a recent sample of implementations that aim at improving the AMP algorithm's convergence for a wider variety of measurement matrices:
- The SwAMP Thing! Sparse Estimation with the Swept Approximated Message-Passing Algorithm - implementation -
- On the Convergence of Approximate Message Passing with Arbitrary Matrices
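For readers who want a concrete picture of the baseline these implementations improve upon, here is a minimal sketch of plain AMP with soft thresholding on a Gaussian measurement matrix (the favorable case mentioned above). The threshold rule, step count, and problem sizes are illustrative choices of mine, not taken from any of the papers listed here.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding denoiser eta(v; t)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def amp(y, A, iters=30, alpha=1.5):
    """Basic AMP iteration (Donoho-Maleki-Montanari style) with a
    residual-based threshold: tau = alpha * ||z|| / sqrt(m)."""
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(iters):
        tau = alpha * np.linalg.norm(z) / np.sqrt(m)
        x_new = soft(A.T @ z + x, tau)
        # Onsager correction term: z * (n/m) * mean of the denoiser's derivative
        onsager = z * np.count_nonzero(x_new) / m
        z = y - A @ x_new + onsager
        x = x_new
    return x

# Toy noiseless example on a Gaussian matrix, where AMP is known to behave
rng = np.random.default_rng(1)
n, m, k = 500, 250, 25
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x0
x_hat = amp(y, A)
```

Swap the Gaussian A above for, say, an ill-conditioned or correlated matrix, and this vanilla iteration can diverge; that is exactly the failure mode the SwAMP and damping/convergence papers above address.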
In other news: the Golosino seminar organized by Florent Krzakala and Lenka Zdeborova had a slew of interesting presentations recently:
- Sparse dictionary learning in the presence of noise & outliers, Rémi Gribonval
- Approximate Message Passing: Can it Work? Sundeep Rangan
- Measurement Matrix Design for Compressed Sensing: Sample Distortion Framework and Modulated Matrix, Mike E. Davies, Chunli Guo
- A brief introduction to the inverse Ising problem and some algorithms to solve it, Federico Ricci-Tersenghi
You can search through this blog with more than one label. For instance, to search for blog entries featuring an implementation with a compressive sensing theme, or an implementation dealing with matrix factorization, you would query either:
and
respectively. You could even add a third label (I got this tip from this blog). Here are some of the labels I use often:
- CS (1926)
- MF (336)
- implementation (277)
- space (144)
- calibration (101)
- CSHardware (83)
- AMP (81)
- CSjobs (73)
- ML (72)
- QuantCS (61)
- nuclear (59)
- technology (58)
- hyperspectral (56)
- phaseretrieval (55)
- nonlinearCS (51)
- CSVideo (43)
- grouptesting (43)
- BlindDeconvolution (40)
- SundayMorningInsight (39)
- 1bit (38)
- Meetups (36)
- SaturdayMorningVideos (36)
- publishing (36)
- RandNLA (35)
- synbio (33)
- NuitBlancheReview (21)
- CSmeeting (20)
- ImagingWithNature (17)
- Kaczmarz (17)
- CAI (15)
- These Technologies Do Not Exist (10)
- PredictingTheFuture (4)
- accidentalcamera (4)
- jionc (4)
- ReproducibleResearch (2)
Finally, my name is Igor Carron, not Nuit Blanche, not Igor Caron (or, more recently, here), nor Igor Carren. No biggie, but I thought you'd want to know.
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.