
Monday, October 01, 2012

Pushing the Boundaries in Compressive Sensing

I did not pay too much attention to it when it came out, but I should have. Andrea Montanari left the following comment on the recent From Compressive Sensing to Machine Learning entry:

Hi Igor,
I wanted to point out that the paper http://arxiv.org/abs/1111.1041
also describes an algorithm that achieves rho = 1 for block-sparse signals and random sensing matrices (see sections 3.2, 3.3 and 6.5).
A
and indeed, more specifically, an AMP solver using the block James-Stein shrinkage produces the following phase transition (rho = 1 is the line equivalent to \delta = \epsilon):

[Figure: phase-transition diagram for the AMP solver with block James-Stein shrinkage]

In other words, while the results of Krzakala et al. get to rho = 1 for sparse vectors, their approach requires dedicated sensing matrices. Here, rho = 1 is reached with random sensing matrices, but with a requirement of structured sparsity. The results of Zhilin, while also reaching rho = 1, require a block-sparsity argument, but show that, qualitatively, some signals are still well represented (though not exactly recovered) when rho is above 1.

Which leads me to the following point:


  • Given that compressive sensing is based on an argument of sparsity 
  • Given that sparsity is all around us because most data seem to be sparse or compressible in a wavelet basis
  • Given that wavelet decompositions show not just simple compressibility but structured compressibility of most datasets
  • Given that we now have some results showing better compressive sensing with a structured sparsity argument
Isn't it time we stopped talking about RIP? The Donoho-Tanner phase transition? L_1 recovery?

(side question: has anybody implemented this AMP solver with the block James-Stein shrinkage?)
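For what it's worth, here is a minimal sketch of what such a solver could look like; it is my own sketch, not anybody's actual implementation. It assumes equal-sized contiguous blocks of length B, an iid Gaussian sensing matrix, and tracks the AMP noise level as \tau^2 = ||z||^2/n; the denoiser is the positive-part James-Stein shrinker applied blockwise, \eta(v_b) = (1 - (B-2)\tau^2/||v_b||^2)_+ v_b, with its analytic divergence used for the Onsager correction term.

# Sketch of AMP with a blockwise positive-part James-Stein denoiser.
# Assumptions (mine, not from the post): contiguous equal-size blocks,
# iid N(0, 1/n) sensing matrix, noise level tracked as tau^2 = ||z||^2 / n.

import numpy as np

def block_james_stein(v, tau2, B):
    """Positive-part James-Stein shrinkage on each length-B block of v.

    Each block v_b maps to (1 - (B-2)*tau2 / ||v_b||^2)_+ * v_b.
    Also returns the average derivative (divergence / N) needed for the
    Onsager term of AMP.
    """
    blocks = v.reshape(-1, B)
    norms2 = np.sum(blocks**2, axis=1, keepdims=True) + 1e-12
    shrink = np.maximum(0.0, 1.0 - (B - 2) * tau2 / norms2)
    out = (shrink * blocks).ravel()
    # Divergence on an active block: B - (B-2)^2 * tau2 / ||v_b||^2; zero otherwise.
    active = (shrink > 0).ravel()
    div_blocks = np.where(active, B - (B - 2)**2 * tau2 / norms2.ravel(), 0.0)
    return out, div_blocks.sum() / v.size

def amp_block_js(y, A, B, n_iter=50):
    """AMP iteration for y = A x0 with the block James-Stein denoiser (sketch)."""
    n, N = A.shape
    x = np.zeros(N)
    z = y.copy()
    for _ in range(n_iter):
        tau2 = np.sum(z**2) / n            # empirical estimate of the effective noise
        v = x + A.T @ z                    # pseudo-data, approximately x0 + tau * noise
        x_new, avg_deriv = block_james_stein(v, tau2, B)
        # Onsager correction: (N/n) times the average derivative of the denoiser
        z = y - A @ x_new + (N / n) * avg_deriv * z
        x = x_new
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    N, n, B, k_blocks = 2000, 1000, 16, 30   # 30 active blocks out of 125
    A = rng.normal(scale=1/np.sqrt(n), size=(n, N))
    x0 = np.zeros(N)
    for b in rng.choice(N // B, k_blocks, replace=False):
        x0[b*B:(b+1)*B] = rng.normal(size=B)
    y = A @ x0
    x_hat = amp_block_js(y, A, B)
    print("relative error:", np.linalg.norm(x_hat - x0) / np.linalg.norm(x0))

The example above sits below the predicted transition (\delta = 0.5, block sparsity \epsilon = 0.24 with B = 16), so the iteration should drive the error toward zero; I make no claim that it reproduces the exact curve in the figure.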


The abstract of the paper Andrea mentions (http://arxiv.org/abs/1111.1041) reads:
Compressed sensing posits that, within limits, one can undersample a sparse signal and yet reconstruct it accurately. Knowing the precise limits to such undersampling is important both for theory and practice. We present a formula precisely delineating the allowable degree of undersampling of generalized sparse objects. The formula applies to Approximate Message Passing (AMP) algorithms for compressed sensing, which are here generalized to employ denoising operators besides the traditional scalar shrinkers (soft thresholding, positive soft thresholding and capping). This paper gives several examples including scalar shrinkers not derivable from convex optimization -- the firm shrinkage nonlinearity and the minimax nonlinearity -- and also nonscalar denoisers -- block thresholding (both block soft and block James-Stein), monotone regression, and total variation minimization. Let the variables \epsilon = k/N and \delta = n/N denote the generalized sparsity and undersampling fractions for sampling the k-generalized-sparse N-vector x_0 according to y = Ax_0. Here A is an n\times N measurement matrix whose entries are iid standard Gaussian. The formula states that the phase transition curve \delta = \delta(\epsilon) separating successful from unsuccessful reconstruction of x_0 by AMP is given by: \delta = M(\epsilon | Denoiser), where M(\epsilon | Denoiser) denotes the per-coordinate minimax mean squared error (MSE) of the specified, optimally-tuned denoiser in the directly observed problem y = x + z. In short, the phase transition of a noiseless undersampling problem is identical to the minimax MSE in a denoising problem.
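To make the formula concrete in the simplest case, here is a small numerical sketch (my own, not from the paper) that evaluates M(\epsilon | soft thresholding): minimize over the threshold the worst-case MSE of soft thresholding for an \epsilon-sparse scalar observed in unit Gaussian noise, with the least-favorable signal placed at \pm\infty. The resulting \delta(\epsilon) is the familiar Donoho-Tanner \ell_1 curve.

# Numerical evaluation of delta(eps) = M(eps | soft thresholding).
# Worst-case MSE of soft thresholding at level lam on y = x + z, z ~ N(0,1):
#   at x = 0:          2 * [ (1 + lam^2) * Phi(-lam) - lam * phi(lam) ]
#   at |x| -> infinity: 1 + lam^2

import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

def soft_threshold_minimax_mse(eps):
    """M(eps | soft thresholding): minimax MSE of the optimally tuned soft threshold."""
    def worst_case_mse(lam):
        risk_at_zero = 2 * ((1 + lam**2) * norm.cdf(-lam) - lam * norm.pdf(lam))
        risk_at_infinity = 1 + lam**2
        return eps * risk_at_infinity + (1 - eps) * risk_at_zero
    res = minimize_scalar(worst_case_mse, bounds=(0, 10), method="bounded")
    return res.fun

# Predicted phase transition delta(eps) for AMP with soft thresholding:
for eps in (0.05, 0.1, 0.2, 0.4):
    print(f"eps = {eps:.2f}  ->  delta = {soft_threshold_minimax_mse(eps):.3f}")

Per the formula, swapping in the worst-case MSE of a block denoiser (such as block James-Stein) in place of worst_case_mse would give the corresponding structured-sparsity phase transition.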
Of related note: Universality in Polytope Phase Transitions and Message Passing Algorithms by Mohsen Bayati, Marc Lelarge and Andrea Montanari. The abstract reads:
We consider a class of nonlinear mappings F_{A,N} in R^N indexed by symmetric random matrices A \in R^{N \times N} with independent entries. Within spin glass theory, special cases of these mappings correspond to iterating the TAP equations and were studied by Erwin Bolthausen. Within information theory, they are known as 'approximate message passing' algorithms. We study the high-dimensional (large N) behavior of the iterates of F for polynomial functions F, and prove that it is universal, i.e. it depends only on the first two moments of the entries of A, under a subgaussian tail condition. As an application, we prove the universality of a certain phase transition arising in polytope geometry and compressed sensing. This solves, for a broad class of random projections, a conjecture by David Donoho and Jared Tanner.

Thanks Andrea.


