
Friday, May 09, 2014

One-bit compressive sensing with norm estimation

Laurent Jacques mentioned it on his Twitter feed, but I think it is even bigger than what he says (more on that at the end of this entry):

Compressive sensing typically involves the recovery of a sparse signal x from linear measurements ⟨aᵢ, x⟩, but recent research has demonstrated that recovery is also possible when the observed measurements are quantized to sign(⟨aᵢ, x⟩) ∈ {±1}. Since such measurements give no information on the norm of x, recovery methods geared towards such 1-bit compressive sensing measurements typically assume that ∥x∥₂ = 1. We show that if one allows more generally for quantized affine measurements of the form sign(⟨aᵢ, x⟩ + bᵢ), and if such measurements are random, an appropriate choice of the affine shifts bᵢ allows norm recovery to be easily incorporated into existing methods for 1-bit compressive sensing. Alternatively, if one is interested in norm estimation alone, we show that the fraction of measurements quantized to +1 (versus −1) can be used to estimate ∥x∥₂ through a single evaluation of the inverse Gaussian error function, providing a computationally efficient method for norm estimation from 1-bit compressive measurements. Finally, all of our recovery guarantees are universal over sparse vectors, in the sense that with high probability, one set of measurements will successfully estimate all sparse vectors simultaneously.
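
For the norm-estimation result, here is a minimal sketch in Python of how the inversion works. It assumes Gaussian measurement vectors aᵢ and, for simplicity, a single fixed shift τ shared by all measurements; τ and the function below are illustrative choices of mine, not the authors' implementation. Since ⟨aᵢ, x⟩ ~ N(0, ∥x∥₂²), the probability of a +1 measurement is Φ(τ/∥x∥₂), so inverting the empirical fraction of +1s through Φ⁻¹ gives a norm estimate in a single inverse-error-function evaluation.

```python
import numpy as np
from scipy.special import erfinv

def estimate_norm(y, tau):
    """Estimate ||x||_2 from 1-bit measurements y_i = sign(<a_i, x> + tau).

    With a_i ~ N(0, I), <a_i, x> ~ N(0, ||x||_2^2), so
    P(y_i = +1) = Phi(tau / ||x||_2); inverting the empirical
    fraction of +1s recovers the norm.
    """
    p_hat = np.mean(y > 0)                     # fraction quantized to +1
    # Phi^{-1}(p) = sqrt(2) * erfinv(2p - 1): one inverse-erf evaluation
    return tau / (np.sqrt(2.0) * erfinv(2.0 * p_hat - 1.0))

# Sanity check on a random sparse vector (illustrative parameters).
rng = np.random.default_rng(0)
n, m, tau = 1000, 20000, 3.0
x = np.zeros(n)
x[rng.choice(n, 10, replace=False)] = rng.standard_normal(10)
A = rng.standard_normal((m, n))
y = np.sign(A @ x + tau)                       # 1-bit affine measurements
print(np.linalg.norm(x), estimate_norm(y, tau))
```

Note that τ should be on the order of the norm being estimated: if it is far too large, nearly every measurement is +1 and the fraction carries little information.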


Like Laurent says, it is a big deal for the 1-bit compressive sensing and quantization crowd, but I want to take this further. 1-bit compressive sensing is first and foremost a nonlinear encoding of measurements of a special kind; in fact, it very much parallels the first layer of a neural network, except that it uses a sign function instead of a sigmoid and, until now, no affine term. The paper lifts that restriction by adding an affine constant and thereby shows that more of the signal can be recovered than initially thought. In short, it provides a natural connection between the whole theoretical framework of compressive sensing and the vibrant and exciting developments in neural networks (a toy sketch of the parallel follows below).
In other words, some theoretical justification is coming to the neural network community, and by the same token, this might provide a way to design such networks in a more efficient manner. Woohoo !
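
To make that parallel explicit, here is a toy sketch (my illustration, not the paper's): the 1-bit affine measurement map sign(Ax + b) has exactly the shape of a single neural-network layer, with the sign function standing in for the sigmoid activation and the shifts bᵢ playing the role of the biases.

```python
import numpy as np

def one_bit_cs_layer(A, b, x):
    # 1-bit affine compressive measurements: sign(<a_i, x> + b_i)
    return np.sign(A @ x + b)

def first_nn_layer(W, b, x):
    # First layer of a classical neural network: sigmoid(W x + b)
    return 1.0 / (1.0 + np.exp(-(W @ x + b)))
```

Same affine map, different nonlinearity: recovery guarantees for the former may therefore say something about what the latter preserves about its input.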

The implementation is here.


