Friday, March 12, 2010

CS: Precise Undersampling Theorems (updated)

You do recall the Donoho-Tanner border crossing featured recently, right? In that entry, I mentioned the landmark paper by Jared Tanner and David Donoho entitled Observed universality of phase transitions in high-dimensional geometry, with implications for modern data analysis and signal processing. While very interesting, the paper is long, and I have witnessed somebody running for his life because of the page count. It looks like we now have a better document for engineers and scientists who are not really THAT interested in proofs involving random projections of high-dimensional polytopes. This document, just released by David Donoho and Jared Tanner, is entitled Precise Undersampling Theorems.



The abstract of the paper reads:
Undersampling Theorems state that we may gather far fewer samples than the usual sampling theorem while exactly reconstructing the object of interest – provided the object in question obeys a sparsity condition, the samples measure appropriate linear combinations of signal values, and we reconstruct with a particular nonlinear procedure. While there are many ways to crudely demonstrate such undersampling phenomena, we know of only one approach which precisely quantifies the true sparsity-undersampling tradeoff curve of standard algorithms and standard compressed sensing matrices. That approach, based on combinatorial geometry, predicts the exact location in sparsity-undersampling domain where standard algorithms exhibit phase transitions in performance. We review the phase transition approach here and describe the broad range of cases where it applies. We also mention exceptions and state challenge problems for future research. Sample result: one can efficiently reconstruct a k-sparse signal of length N from n measurements, provided n > 2k · log(N/n), for (k, n, N) large, k << N.
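The sample result is implicit in n, so it helps to see it solved numerically. Here is a small sketch (the function name and the fixed-point iteration are mine, not from the paper) that finds the smallest n satisfying n > 2k · log(N/n) in the regime k << N:

```python
import math

def min_measurements(k, N):
    """Approximately solve n = 2k * log(N/n) by fixed-point iteration.
    Illustrates the paper's sample result n > 2k * log(N/n); valid in
    the regime k << N, where the iteration is a contraction."""
    n = 2.0 * k  # start just above the trivial lower bound
    for _ in range(100):
        n_new = 2 * k * math.log(N / n)
        if abs(n_new - n) < 1e-9:
            break
        n = n_new
    return math.ceil(n)

# e.g. a 10-sparse signal of length 10,000 needs on the order of
# a hundred measurements, not thousands:
n = min_measurements(10, 10_000)
```

Note how weak the dependence on N is: the ambient dimension only enters through a logarithm, which is the whole point of undersampling.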

A figure of interest beyond the first one (which covers more than just Gaussian measurement matrices) is the following, showing how the Donoho-Tanner boundary moves for small N.
I also like the fact that the RIP, or any other sufficient-condition property, is clearly called into question for the business of determining how good a measurement matrix actually is. The paper clearly shows the need for computational experiments to probe this problem: while it is good to have a framework within which the mathematics can provide some bounds, it is also clear that we should be able to do without them most of the time. The current fashion in most engineering papers is to have the first paragraph invoke the RIP or some such condition, then play the lookee-here-some-plane-in-the-sky distraction game by casually mentioning in the second paragraph that the argument does not apply to the measurement matrices at hand, bummer!
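A minimal sketch of such a computational experiment (parameter values and the helper name are mine): draw a Gaussian matrix and a k-sparse vector, then check whether l1 minimization, cast as a linear program, recovers the vector exactly. Repeating this over a grid of (n/N, k/n) values is exactly how the empirical phase transitions are mapped out.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

def l1_recovery_success(N=100, n=50, k=10, tol=1e-4):
    """One Monte Carlo trial: draw a Gaussian n x N matrix A and a
    k-sparse x0, measure b = A x0, and solve min ||x||_1 s.t. Ax = b
    as a linear program. Returns True if x0 is recovered within tol."""
    A = rng.standard_normal((n, N)) / np.sqrt(n)
    x0 = np.zeros(N)
    support = rng.choice(N, k, replace=False)
    x0[support] = rng.standard_normal(k)
    b = A @ x0
    # LP variables z = [x, t]; minimize sum(t) s.t. -t <= x <= t, Ax = b
    c = np.concatenate([np.zeros(N), np.ones(N)])
    A_eq = np.hstack([A, np.zeros((n, N))])
    I = np.eye(N)
    A_ub = np.vstack([np.hstack([I, -I]), np.hstack([-I, -I])])
    b_ub = np.zeros(2 * N)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b,
                  bounds=[(None, None)] * N + [(0, None)] * N,
                  method="highs")
    if not res.success:
        return False
    return np.max(np.abs(res.x[:N] - x0)) < tol
```

Below the phase transition (k/n well under the Donoho-Tanner curve for the given n/N) such trials succeed with overwhelming probability; above it, they essentially always fail.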

Finally, of paramount importance is the open ending of the paper that cries out for some sort of benchmark capabilities.

RESEARCH CHALLENGES
We propose the following challenges:

  • A. Characterize Universality classes of Gaussian Phase Transitions. We have shown that many ‘random’ matrix ensembles yield phase transitions matching those of Gaussian matrices. Characterize the precise universality class of such matrices.
  • B. Discover New Transitions for (LP) and (P1). Many but not all matrix ensembles yield phase transitions matching those of Gaussian matrices. Discover more examples which don’t, and which are also interesting matrix ensembles, either because the phase transition is better or because the matrix is explicit and deterministic.
  • C. Discover an analog of Example 1 (n = 2k + 1) for the signed case. If there is no such analog, characterize the best achievable performance and the matrices which achieve it.
  • D. Discover an analog of Example 1 for the 2D case. If there is no such analog, characterize the best achievable performance and the matrices which achieve it.
  • E. Derive phase transitions of new algorithms. Discover formal theory which precisely locates phase transitions that have been observed for other algorithms.
  • F. Derive phase transitions for Accurate Recovery in Noise. When noise is present we can no longer expect exact reconstruction, but we can expect stable recovery (error at most proportional to noise level). Phase transitions have been observed empirically for such properties. Formal results would give undersampling theorems in noise. Derive such results.
In particular for E, I am very curious about how algorithms like reweighted Lp and other l_0 codes such as SL0 fare at elevating these phase transitions.
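For readers unfamiliar with SL0, here is a compact sketch of the algorithm (after Mohimani, Babaie-Zadeh and Jutten): the l_0 pseudo-norm is replaced by a smooth Gaussian surrogate, which is optimized while a smoothing parameter sigma is annealed toward zero, projecting back onto the constraint set after each step. The parameter defaults below are illustrative, not tuned, and this is my own condensation, not the reference implementation.

```python
import numpy as np

def sl0(A, b, sigma_min=1e-4, sigma_decrease=0.6, mu=2.0, L=5):
    """Smoothed-L0 (SL0) sketch: approximate ||x||_0 by the surrogate
    N - sum(exp(-x_i^2 / (2 sigma^2))) and anneal sigma downward,
    re-projecting onto {x : Ax = b} after every gradient step."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ b                        # minimum-l2 feasible start
    sigma = 2.0 * np.max(np.abs(x))
    while sigma > sigma_min:
        for _ in range(L):
            delta = x * np.exp(-x**2 / (2 * sigma**2))
            x = x - mu * delta            # step on the smoothed-l0 surrogate
            x = x - A_pinv @ (A @ x - b)  # project back onto Ax = b
        sigma *= sigma_decrease
    return x
```

Because each inner step is just a matrix-vector product plus a projection, SL0 is cheap enough that its empirical phase transition can be mapped out by Monte Carlo at much larger N than LP-based solvers allow, which is precisely what challenge E calls for.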

