I really want to apologize to the readers for having called the "k=m" finding a stunning development; David Donoho, Adel Javanmard and Andrea Montanari instead call the result "quite striking." More seriously, this work provides some insight into the noise issue and much more. Enjoy!

Information-Theoretically Optimal Compressed Sensing via Spatial Coupling and Approximate Message Passing by David Donoho, Adel Javanmard and Andrea Montanari. The abstract reads:

We study the compressed sensing reconstruction problem for a broad class of random, band-diagonal sensing matrices. This construction is inspired by the idea of spatial coupling in coding theory. As demonstrated heuristically and numerically by Krzakala et al. [1], message passing algorithms can effectively solve the reconstruction problem for spatially coupled measurements with undersampling rates close to the fraction of non-zero coordinates. We use an approximate message passing (AMP) algorithm and analyze it through the state evolution method. We give a rigorous proof that this approach is successful as soon as the undersampling rate $\delta$ exceeds the (upper) Rényi information dimension of the signal, $\overline{d}(p_X)$. More precisely, for a sequence of signals of diverging dimension $n$ whose empirical distribution converges to $p_X$, reconstruction is with high probability successful from $\overline{d}(p_X)\, n + o(n)$ measurements taken according to a band-diagonal matrix. For sparse signals, i.e. sequences of dimension $n$ with $k(n)$ non-zero entries, this implies reconstruction from $k(n) + o(n)$ measurements. For 'discrete' signals, i.e. signals whose coordinates take values in a fixed finite set, this implies reconstruction from $o(n)$ measurements. The result is robust with respect to noise and does not apply uniquely to random signals, but it requires knowledge of the empirical distribution of the signal, $p_X$.
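A hedged aside from the blog: the AMP iteration mentioned in the abstract can be sketched in a few lines. The snippet below is a minimal, textbook soft-thresholding AMP in the spirit of [DMM09], not the spatially coupled variant the paper analyzes; the threshold rule (`alpha` scale) and all parameter choices are my own simplifications for illustration.

```python
import numpy as np

def soft_threshold(v, t):
    """Component-wise soft thresholding, eta(v; t)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def amp(A, y, iters=50, alpha=2.0):
    """Minimal AMP sketch for sparse recovery (NOT the paper's coupled variant).

    Iterates x <- eta(x + A^T z; alpha * tau), where tau estimates the
    effective noise level (in the spirit of state evolution) and the
    residual z carries the Onsager correction term.
    """
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(iters):
        tau = np.sqrt(np.mean(z ** 2))               # empirical noise-level estimate
        x_new = soft_threshold(x + A.T @ z, alpha * tau)
        onsager = (np.count_nonzero(x_new) / m) * z  # Onsager reaction term
        z = y - A @ x_new + onsager
        x = x_new
    return x

# Toy run: k = 10 non-zeros, m = 128 measurements, n = 256 unknowns (noiseless).
rng = np.random.default_rng(0)
n, m, k = 256, 128, 10
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.normal(0.0, 1.0, k)
A = rng.normal(0.0, 1.0 / np.sqrt(m), (m, n))  # i.i.d. Gaussian, ~unit-norm columns
x_hat = amp(A, A @ x0)
```

With these (ad hoc) settings the recovered `x_hat` closely matches the planted sparse signal `x0`; the spatially coupled matrices of the paper are what push the required undersampling rate down toward the information dimension.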

"...In this paper, we fill the gap between the above works, and present the following contributions:.Construction. We describe a construction for spatially coupled sensing matrices A that is somewhat broader than the one of [KMS+11] and give precise prescriptions for the asymptotics of various parameters. We also use a somewhat different reconstruction algorithm from the one in [KMS+11], by building on the approximate message passing (AMP) approach of [DMM09, DMM10]. AMP algorithms have the advantage of smaller memory complexity with respect to standard message passing, and of smaller computational complexity whenever fast multiplication procedures are available for A.Rigorous proof of convergence. Our main contribution is a rigorous proof that the above approach indeed achieves the information-theoretic limits set out by Wu and Verd u [WV10].Indeed, we prove that, for sequences of spatially coupled sensing matrices fA(n)gn2N, A(n) 2Rm(n)n with asymptotic undersampling rate = limn!1 m(n)=n, AMP reconstruction is with high probability successful in recovering the signal x, provided > d(pX).Robustness to noise. We prove that the present approach is robust 3 to noise in the following sense. For any signal distribution pX and undersampling rate , there exists a constant C such thatthe output xb(y) of the reconstruction algorithm achieves a mean square error per coordinate n1Efkxb(y) xk22g C 2. This result holds under the noisy measurement model (2) for a broad class of noise models for w, including i.i.d. noise coordinates wi with Efw2ig = 2 < 1.Non-random signals. Our proof does not apply uniquely to random signals x with i.i.d. components, but indeed to more general sequences of signals fx(n)gn2N, x(n) 2 Rn indexed by their dimension n. The conditions required are: (1) that the empirical distribution of the coordinates of x(n) converges (weakly) to pX; and (2) that kx(n)k22 converges to the second moment of the asymptotic law pX. 
Interestingly, the present framework changes the notion of 'structure' that is relevant for reconstructing the signal $x$. Indeed, the focus is shifted from the sparsity of $x$ to the information dimension $\overline{d}(p_X)$...."
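To make the band-diagonal construction concrete, here is a hedged sketch of a spatially coupled Gaussian sensing matrix. The block count, bandwidth and variance normalization below are placeholder choices of mine, not the authors' precise prescription, which specifies a careful variance profile along the band.

```python
import numpy as np

def spatially_coupled_matrix(m, n, num_blocks=8, bandwidth=2, seed=0):
    """Toy random band-diagonal ("spatially coupled") sensing matrix.

    Rows and columns are split into num_blocks groups; block (r, c) is
    filled with i.i.d. Gaussian entries only when |r - c| <= bandwidth,
    so the non-zeros live in a band around the diagonal.
    """
    rng = np.random.default_rng(seed)
    A = np.zeros((m, n))
    row_edges = np.linspace(0, m, num_blocks + 1).astype(int)
    col_edges = np.linspace(0, n, num_blocks + 1).astype(int)
    for r in range(num_blocks):
        for c in range(num_blocks):
            if abs(r - c) <= bandwidth:
                rs, re = row_edges[r], row_edges[r + 1]
                cs, ce = col_edges[c], col_edges[c + 1]
                # Variance 1/(rows in block): an ad hoc normalization.
                A[rs:re, cs:ce] = rng.normal(0.0, 1.0 / np.sqrt(re - rs),
                                             size=(re - rs, ce - cs))
    return A

A = spatially_coupled_matrix(120, 240)
```

Informally, the first row blocks of such a matrix "see" only the first coordinates of the signal; once those are reconstructed, that knowledge propagates down the band, which is the seeding effect the comment thread below refers to as the matrices "greedily gaining insight" into the signal.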

Beyond the results [1] of Florent Krzakala, Marc Mézard, François Sausset, Yifan Sun and Lenka Zdeborová, one should also probably read again some of the work of Yihong Wu and Sergio Verdú featured here before:

- Rényi Information Dimension: Fundamental Limits of Almost Lossless Analog Compression
- Fundamental Limits of Almost Lossless Analog Compression (Slides)
- MMSE Dimension and Noisy Compressed Sensing
- MMSE Dimension
- and Yihong's commentary.

Reference: [1] Statistical physics-based reconstruction in compressed sensing

Credit photo: NASA, Opportunity :: Navigation Camera :: Sol 2792

Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

## 2 comments:

Wait, is that even possible? That the number of necessary measurements is roughly the sparsity?

Matt

Yes, that is what the phase diagram shows. In short, the magic matrices of Krzakala et al. greedily gain insight into the signal as it is being reconstructed. If you don't want to use the magic matrices, then you have to rely on an argument that the signal is block sparse. So in effect, in all cases, you are adding prior information about the signal: it is not just sparse anymore, but block sparse. That alone decreases the number of measurements.

Cheers,

Igor.

Post a Comment