
Wednesday, July 22, 2009

CS: Message passing algo for CS, Finite Rate of Innovation, General Deviants, Muthu's presentations, Herschel news?


Today, we are starting with a new Belief Propagation nonlinear solver. It is presented in Message passing algorithms for compressed sensing by David Donoho, Arian Maleki, and Andrea Montanari. The abstract reads:

Compressed sensing aims to undersample certain high dimensional signals, yet accurately reconstruct them by exploiting signal characteristics. Accurate reconstruction is possible when the object to be recovered is sufficiently sparse in a known basis. Currently, the best known sparsity-undersampling tradeoff is achieved when reconstructing by convex optimization, which is expensive in important large-scale applications. Fast iterative thresholding algorithms have been intensively studied as alternatives to convex optimization for large-scale problems. Unfortunately, known fast algorithms offer substantially worse sparsity-undersampling tradeoffs than convex optimization. We introduce a simple costless modification to iterative thresholding making the sparsity-undersampling tradeoff of the new algorithms equivalent to that of the corresponding convex optimization procedures. The new iterative-thresholding algorithms are inspired by belief propagation in graphical models. Our empirical measurements of the sparsity-undersampling tradeoff for the new algorithms agree with theoretical calculations. We show that a state evolution formalism correctly derives the true sparsity-undersampling tradeoff. There is a surprising agreement between earlier calculations based on random convex polytopes and this new, apparently very different theoretical formalism.
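The "simple costless modification" the abstract alludes to is an extra correction term in the residual update of plain iterative soft thresholding. As a rough illustration only (the threshold rule and the parameter `alpha` below are my own simplistic choices, not the authors' tuning), an iteration of this kind might look like:

```python
import numpy as np

def soft_threshold(v, theta):
    # Componentwise soft thresholding eta(v; theta).
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def amp(A, y, n_iter=30, alpha=2.0):
    """Sketch of message-passing-style iterative thresholding:
    the residual update carries a correction term proportional to
    the previous residual times the average derivative of the
    thresholder. Threshold schedule here is a heuristic."""
    m, n = A.shape
    delta = m / n            # undersampling ratio
    x = np.zeros(n)
    z = y.copy()             # residual
    for _ in range(n_iter):
        pseudo = x + A.T @ z
        theta = alpha * np.linalg.norm(z) / np.sqrt(m)  # heuristic threshold
        x_new = soft_threshold(pseudo, theta)
        # Correction term: (1/delta) * z * <eta'>, where <eta'> is the
        # fraction of components surviving the threshold.
        correction = (z / delta) * np.mean(np.abs(x_new) > 0)
        z = y - A @ x_new + correction
        x = x_new
    return x
```

Dropping the `correction` line recovers ordinary iterative soft thresholding; the point of the paper is that keeping it dramatically improves the achievable sparsity-undersampling tradeoff.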

We describe a property satisfied by a class of nonlinear systems of equations that are of the form $F(\Omega)X = Y$. Here $F(\Omega)$ is a matrix that depends on an unknown $K$-dimensional vector $\Omega$, $X$ is an unknown $K$-dimensional vector, and $Y$ is a vector of $N$ ($\ge K$) given measurements. Such equations are encountered in superresolution or sparse signal recovery problems known as "Finite Rate of Innovation" signal reconstruction.

We show how this property allows us to solve explicitly for the unknowns $\Omega$ and $X$ by a direct, non-iterative algorithm that involves the resolution of two linear systems of equations and the extraction of the roots of a polynomial, and we give examples of problems where this type of solution has been found useful.
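The "two linear systems plus one polynomial rooting" recipe is, in its classical form, the annihilating-filter (Prony) method. Here is a minimal numerical sketch of that classical version (the function name and least-squares choices are mine), assuming noiseless samples $y_n = \sum_k x_k u_k^n$:

```python
import numpy as np

def prony(y, K):
    # Classical annihilating-filter / Prony recipe: two linear
    # systems plus one polynomial rooting step. Assumes noiseless
    # samples y[n] = sum_k x_k * u_k**n for n = 0..N-1.
    N = len(y)
    # Linear system 1: annihilating filter h with h_0 = 1 such that
    # y[n] + sum_{i=1}^{K} h_i * y[n-i] = 0 for n = K..N-1.
    T = np.array([[y[n - i] for i in range(1, K + 1)] for n in range(K, N)])
    h = np.linalg.lstsq(T, -y[K:N], rcond=None)[0]
    # Rooting: the u_k are the roots of z^K + h_1 z^(K-1) + ... + h_K.
    u = np.roots(np.concatenate(([1.0], h)))
    # Linear system 2: Vandermonde least squares for the amplitudes x_k.
    V = np.vander(u, N, increasing=True).T   # V[n, k] = u_k**n
    x = np.linalg.lstsq(V, y.astype(V.dtype), rcond=None)[0]
    return u, x
```

In a superresolution setting the $u_k$ would encode the unknown locations or frequencies $\Omega$ and the $x_k$ the amplitudes $X$.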

In a previous entry, I introduced General Deviants: An Analysis of Perturbations in Compressed Sensing by Matthew A. Herman and Thomas Strohmer. This paper has gone through a major revision.

Muthu Muthukrishnan just released his presentation slides.

Jean-Luc Starck will make a presentation entitled Compressed Sensing in Astronomy at Saclay on October 20, 2009 at 11:00 am (Bât 141, salle André Berthelot). The summary of the talk is:
We briefly review the concept of Compressed Sensing (CS), the new sampling theory, which is certainly one of the most important discoveries in data processing of the last ten years. Indeed, CS provides an alternative to the well-known Shannon sampling theory. Then we will show how some problems in astronomy, such as interferometric image deconvolution or gamma-ray image reconstruction, can be handled differently using CS. Finally, we will show how CS could lead to an elegant and effective way to solve the problem ESA is faced with for the transmission to Earth of the data collected by PACS, one of the instruments onboard the Herschel spacecraft, which was launched in May 2009. We show that CS enables us to recover data with a spatial resolution enhanced by up to 30 percent with similar sensitivity compared to the averaging technique proposed by ESA.
Shall we see the first results of the PACS Herschel compressive sensing test? We'll see.


Image Credit: NASA/JPL/Space Science Institute, Saturn as seen by Cassini on July 20, 2009.
