[2012 Update: From Compressive Sensing to Machine Learning]
Compressive Sensing (CS) started back in 2004 when Candès, Donoho, Tao, and Romberg discovered that one could find, in polynomial time, the sparsest solution to certain types of underdetermined systems of linear equations. Such systems map onto all sorts of sensing devices, starting with MRI and many others. The details have been told numerous times in the CS literature, so let me just extract two sentences:
"The work by Donoho and Candès et al. [1], [2] demonstrated that CS reconstruction is a polynomial time problem – albeit under the constraint that more than 2K measurements are used." (from (1))
"It has been shown that we cannot hope to perform CS reconstruction of a k-sparse signal using fewer than m = k + 1 measurements [3, 7]. However, the only approach for CS reconstruction using m = k + 1 is via L_0 minimization, which is known to be NP-complete and therefore impractical [8]." (from (2))
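To make the first quote concrete, here is a minimal sketch of the polynomial-time route it refers to: L_1 minimization (basis pursuit) written as a linear program. The Gaussian measurement matrix, the dimensions, and the split x = u - v are illustrative assumptions on my part, not the setup of references (1) or (2).

```python
# Minimal sketch: basis pursuit, i.e. min ||x||_1 subject to Ax = y,
# solved as a linear program. All dimensions here are assumptions for the demo.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

n, k = 50, 3          # signal length and sparsity (illustrative)
m = 25                # measurements: well above 2k, where L_1 recovery succeeds

# k-sparse ground truth and a random Gaussian measurement matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true

# LP reformulation: x = u - v with u, v >= 0, so ||x||_1 = sum(u) + sum(v)
c = np.ones(2 * n)                     # objective: sum(u) + sum(v)
A_eq = np.hstack([A, -A])              # equality constraint: A(u - v) = y
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n))
x_hat = res.x[:n] - res.x[n:]

print("recovery error:", np.linalg.norm(x_hat - x_true))
```

By contrast, the m = k + 1 regime of the second quote would require checking on the order of (n choose k) candidate supports, which is exactly why L_0 minimization is impractical.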
The paper by Florent Krzakala, Marc Mézard, François Sausset, Yifan Sun, and Lenka Zdeborová (3) featured on Friday shows that, with probability close to 1, a polynomial-time (quadratic) algorithm can perform compressive sensing reconstruction with m = k + 1.
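For readers who want a feel for how a quadratic-time reconstruction of this kind can work, below is a minimal sketch of a plain approximate message passing (AMP) iteration with a soft-thresholding denoiser; each iteration costs only two matrix-vector products. To be clear, this is not the algorithm of (3): that paper combines a Bayesian (Gauss-Bernoulli) prior with seeded, spatially coupled measurement matrices to get near m = k + 1, while everything below (the denoiser, the threshold rule, the dimensions) is a simplified stand-in.

```python
# Minimal sketch: vanilla AMP with soft thresholding. The threshold constant
# alpha, the iteration count, and the problem sizes are all assumptions.
import numpy as np

def soft_threshold(v, t):
    """Componentwise soft thresholding, the denoiser behind L_1-style AMP."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def amp(A, y, iters=30, alpha=1.5):
    """Plain AMP iteration; alpha is an assumed threshold-tuning constant."""
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(iters):
        tau = alpha * np.linalg.norm(z) / np.sqrt(m)  # noise-level estimate
        x_new = soft_threshold(x + A.T @ z, tau)      # denoise the pseudo-data
        # Onsager correction: (n/m) times the fraction of active coordinates
        onsager = (n / m) * np.mean(np.abs(x_new) > 0) * z
        z = y - A @ x_new + onsager
        x = x_new
    return x

# Tiny illustrative run (dimensions are assumptions, not those of the paper)
rng = np.random.default_rng(1)
n, m, k = 200, 80, 10
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true
print("AMP error:", np.linalg.norm(amp(A, y) - x_true))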
In A stunning development in breaking the Donoho-Tanner phase transition?, I provided several graphs that give an idea of how this algorithm improves on the previous limit attained by polynomial-time algorithms (L_1 minimization), the very algorithms that started the whole compressive sensing field.
In Mirror, mirror, who's the sparsest of them all?, I drew the different known boundaries between P and NP over time.
In Build it and people will come, some views are expressed on whether this algorithm will prove useful in practice:
- its robustness to noise still has to be tested (A: other studies suggest that algorithms of this type are in fact robust in this new region);
- it imposes a measurement process that might not be a good fit for hardware (A: that is really for the hardware/sensor designer to decide, and it should not be a concern of algorithm development).
References:
(1) Measurements vs. Bits: Compressed Sensing meets Information Theory, Shriram Sarvotham, Dror Baron, and Richard G. Baraniuk.
(2) Sublinear Compressive Sensing Reconstruction via Belief Propagation Decoding, Hoa V. Pham, Wei Dai, Olgica Milenkovic.
(3) Statistical physics-based reconstruction in compressed sensing, Florent Krzakala, Marc Mézard, François Sausset, Yifan Sun, and Lenka Zdeborová, http://arxiv.org/abs/1109.4424
2 comments:
So P = NP?
I think some computer scientists will find this interesting.
NOOOOOOOOOOOO. Can you please read? It means a new polynomial-time algorithm has shown that a piece of what was once thought to be NP territory is in fact in P. A large territory still remains in NP. Get it?