Extraordinary claims require extraordinary proofs, which is really why this sort of discussion is important. Similarly, sometimes you are so blinded by some sort of received truth, and faced with something so different, that you can entirely misread what is being said. If you read this morning's entry, you might get the feeling that I am a little ambivalent about how interesting the paper Statistical physics-based reconstruction in compressed sensing by Florent Krzakala, Marc Mézard, François Sausset, Yifan Sun and Lenka Zdeborová really is.

Let's put this in perspective. Our current understanding so far is that the universal phase transition observed by Donoho and Tanner seems to be seen with all the solvers featured here, that it fits many ensembles (not just Gaussian; I remember my jaw dropping when Jared Tanner showed it worked for the ensembles of Piotr Indyk, Radu Berinde et al.), and that the only way to break it is to consider structured sparsity, as Phil Schniter showed at the beginning of the week (for readers who want to see that transition empirically, a bare-bones numerical sketch appears at the end of this post). In most people's minds, the L_1 solvers are a good proxy for the L_0 solvers, since even greedy solvers (the closest we can get to L_0 solvers) seem to provide similar results. Then there are results like those of Shrinivas Kudekar and Henry Pfister (Figure 5 of The Effect of Spatial Coupling on Compressive Sensing) that look like some sort of improvement (but not a large one). All in all, a slight improvement over that phase transition could, maybe, be attributed to a slightly different solver or ensemble (measurement matrices).

So this morning I made the point that, given what I understood of the graphs displayed in the article, it might at best be a small improvement over the Donoho-Tanner phase transition, which is known to hold not only for Gaussian matrices but for other ensembles, and for different kinds of solvers, including greedy algorithms and SL0 (which approximates a sort of L_0 approach). "At best" was really an overstatement, but I was intrigued, mostly because of the use of an AMP solver, so I fired off an inquisitive e-mail on the subject to the corresponding author:
Dear Dr. Krzakala,
... I briefly read your recent paper on arXiv regarding your statistical physics-based reconstruction capability, and I am wondering whether your current results are within the known boundary of the phase transition found by Donoho and Tanner, or whether they are an improvement on it. I provided an explanation of what I meant in today's entry (http://nuit-blanche.blogspot.com/2011/09/this-week-in-compressive-sensing.html). If this is an improvement, I'd love to hear about it. If it is not an improvement, one wonders whether some of the deeper geometrical findings featured by the Donoho-Tanner phase transition have a bearing on phase transitions in real physical systems.

Best regards,

Igor.
The authors responded quickly with:
Dear Igor,
Thanks for writing about our work in your blog. Please notice, however, that our axes in the figure you show are not the same as those of Donoho and Tanner. For a signal with N components, we define \rho N as the number of non-zeros in the signal, and \alpha N as the number of measurements. In our notation, Donoho and Tanner's parameters are \rho_DT = \rho/\alpha and \delta_DT = \alpha. We are attaching our figure plotted in Donoho and Tanner's way. Our green line is then exactly DT's red line (since we do not put any restriction on the signal elements); the rest is how much we can improve on it with our method. Asymptotically (N \to \infty) our method can reconstruct exactly up to the red line \alpha = \rho, which is the absolute limit for exact reconstruction (with exhaustive search algorithms). So we indeed improve a lot over the standard L_1 reconstruction!

We will of course be very happy to discuss/explain/clarify details if you are interested.

With best regards,

Florent, Marc, Francois, Yifan, and Lenka
The reason I misread the variables is probably that I was not expecting something that stunning.
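To spell the change of variables out (my reading of their reply, not an official restatement): with N signal components, k = \rho N non-zeros and m = \alpha N measurements,

\rho_DT = k/m = \rho/\alpha,    \delta_DT = m/N = \alpha,

so their absolute limit \alpha = \rho maps onto \rho_DT = 1, i.e. the very top edge of the Donoho-Tanner diagram, not some modest shift of the usual curve.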
Thanks to Florent Krzakala, Marc Mézard, François Sausset, Yifan Sun and Lenka Zdeborová for their rapid feedback.
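For readers who want to see the Donoho-Tanner transition in action, here is a bare-bones numerical sketch (my own illustration, nothing to do with the authors' code; the function name bp_success_rate is mine). It estimates the empirical exact-recovery rate of L_1 reconstruction at one point (\delta_DT = m/N, \rho_DT = k/m) of the phase diagram, with basis pursuit posed as a linear program. Sweeping a grid of (\delta_DT, \rho_DT) values and contouring the 50% success level traces out the empirical transition.

import numpy as np
from scipy.optimize import linprog

def bp_success_rate(N=200, delta=0.5, rho=0.2, trials=20, tol=1e-6, seed=0):
    """Empirical exact-recovery rate of basis pursuit at (delta = m/N, rho = k/m)."""
    rng = np.random.default_rng(seed)
    m = int(delta * N)
    k = max(1, int(rho * m))
    successes = 0
    for _ in range(trials):
        A = rng.standard_normal((m, N)) / np.sqrt(m)   # Gaussian ensemble
        x = np.zeros(N)
        support = rng.choice(N, size=k, replace=False)
        x[support] = rng.standard_normal(k)            # exactly k-sparse signal
        y = A @ x
        # Basis pursuit min ||x||_1 s.t. Ax = y, as an LP with x = u - v, u, v >= 0:
        res = linprog(np.ones(2 * N), A_eq=np.hstack([A, -A]), b_eq=y,
                      bounds=(0, None), method="highs")
        x_hat = res.x[:N] - res.x[N:]
        successes += np.linalg.norm(x_hat - x) < tol * np.linalg.norm(x)
    return successes / trials

# This point sits well below the DT curve, so it should print close to 1.0:
print(bp_success_rate(N=100, delta=0.5, rho=0.2, trials=10))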
Definitely not sure what to make of this. It's a very big result if it holds up.
But wouldn't this mean P=NP?
Bob,
Good point but I don't think so. Somewhere in that paper there is an additional constraint. I need the authors to identify it more plainly.
There seems to be a little bit of confusion here...
When the signals are exactly sparse and there is no noise, the Donoho-Tanner phase transition characterizes the limits of L1 minimization. Since L1 minimization happens to outperform other reconstruction algorithms (such as SP, CoSaMP, and IHT), this phase transition also seems to apply to those algorithms. However, it will not apply to some other algorithms that outperform L1 minimization, like iteratively reweighted L1.
We also do know what the limit of compressive sensing is in this noise-free setting. It is 2s measurements, where s is the number of non-zero coefficients in the signal (note this is independent of the signal dimension). Furthermore, there are polynomial-time algorithms coming from coding theory that achieve this limit (although they are not stable in the presence of noise). This limit can be loosened to s+1 measurements if reconstruction with probability 1 is acceptable (hence the red line in their phase-transition figures).
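To make that concrete, here is a bare-bones sketch of the Prony / annihilating-filter route (my illustration of the principle, assuming Fourier-type measurements; the actual coding-theory constructions are more sophisticated, and this is not the paper's algorithm): an exactly s-sparse signal is recovered from its first 2s DFT coefficients, in polynomial time, in the noiseless setting.

import numpy as np

def prony_recover(y, s, N):
    """Recover an s-sparse x in C^N from y[m] = sum_n x[n] exp(-2j*pi*m*n/N),
    m = 0..2s-1 (the first 2s DFT coefficients)."""
    # 1. Annihilating filter: find h = [1, h1, ..., hs] with (h * y)[m] = 0.
    T = np.array([[y[s + i - j] for j in range(1, s + 1)] for i in range(s)])
    b = -y[s:2 * s]
    h = np.linalg.solve(T, b)
    # 2. The filter polynomial's roots are exp(-2j*pi*t/N) for t in the support.
    roots = np.roots(np.concatenate(([1.0], h)))
    support = np.mod(np.rint(-np.angle(roots) * N / (2 * np.pi)), N).astype(int)
    # 3. Amplitudes from a small Vandermonde system.
    V = np.exp(-2j * np.pi * np.outer(np.arange(2 * s), support) / N)
    a, *_ = np.linalg.lstsq(V, y, rcond=None)
    x_hat = np.zeros(N, dtype=complex)
    x_hat[support] = a
    return x_hat

# Quick noiseless check: 2s measurements, independent of N.
rng = np.random.default_rng(0)
N, s = 128, 5
x = np.zeros(N, dtype=complex)
x[rng.choice(N, s, replace=False)] = rng.standard_normal(s)
y = np.fft.fft(x)[:2 * s]
assert np.allclose(prony_recover(y, s, N), x, atol=1e-6)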
From what I can tell after skimming through this paper, the authors also follow a coding-theory approach: they design their own measurement matrices and reconstruction algorithms for those matrices, and outperform L1 minimization. It seems all their results are on exactly sparse signals with no measurement noise, and they mention that the stability of the algorithm will be studied in further work.
I agree with anonymous, but here is where I am a little bit more optimistic:
Besides the work of Pfister et al., this is the first time I have seen an actual phase transition much different from the DT transition. The paper states that they are indeed aware of that result and tweaked their approach to get better performance by avoiding some "dynamic phase transition".
I look forward to somebody's implementation of IRL1 doing noticeably better than the DT transition; Bob Sturm's computations show over and over that most of these algorithms do not do much better than DT. This is why this algorithm is qualitatively different from most other attempts I have seen so far.
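For concreteness, the reweighting loop is only a few lines on top of the same basis-pursuit LP sketched earlier in the post. This is a bare-bones sketch in the spirit of the Candes-Wakin-Boyd scheme, not a tuned implementation, and the function name irl1 is mine:

import numpy as np
from scipy.optimize import linprog

def irl1(A, y, iters=5, eps=1e-1):
    """Iteratively reweighted L1: repeated weighted basis pursuit,
    shrinking the penalty on coordinates that came back large."""
    m, N = A.shape
    w = np.ones(N)                        # first pass = plain basis pursuit
    for _ in range(iters):
        # min sum(w * |x|) s.t. Ax = y, as an LP with x = u - v, u, v >= 0:
        res = linprog(np.concatenate([w, w]), A_eq=np.hstack([A, -A]),
                      b_eq=y, bounds=(0, None), method="highs")
        x = res.x[:N] - res.x[N:]
        w = 1.0 / (np.abs(x) + eps)       # eps keeps weights finite at zeros
    return x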
Finally, while a result addressing the issue of noise will be most welcome, we have certainly seen folks come up with well-designed measurement matrices and attendant solvers before, so I don't think this particularity should be much of a concern for the time being. On the contrary, their measurement matrices seem to be pretty sparse; they remind me of the sparse measurement matrices of Indyk and Berinde (and their attendant specifically designed reconstruction solvers).
The DT transition really shows how far most known polynomial-time algorithms go. This new algorithm is quadratic/polynomial and pushes the boundary up. The NP region now seems to sit above k = m.
Cheers,
Igor.