
Thursday, December 20, 2007

Compressed Sensing: Compressive sampling vs conventional imaging, the Birth of the Everlasting Dataset

I just found this presentation entitled Compressive Sensing: A New Framework for Imaging by Richard Baraniuk, Justin Romberg and Robert Nowak, which seems to be a tutorial presentation for ICIP 2006. Much of this tutorial appears in previous presentations, except for the part running from slide 93 to slide 120. I found that part very refreshing, as I am beginning to see the connection between results from the spectral theory of random matrices and Compressed Sensing. At some point that subject will merge with this one, but I am not sure how for the time being.

Another aspect of the image reconstruction, which I had not seen before, is also explicitly presented. Once you have acquired random measurements/projections, Compressed Sensing techniques enable you to reconstruct an image with different bases. If, in 20 years, we have better bases, the same measurements can be reused and will produce a better image than the one we can currently reconstruct. Case in point: if we had done compressed sensing in the mid-1990s, we would have reconstructed images with edges using wavelet bases. Then curvelets came along; using the same measurements, we would have been able to get a clearer image right when Emmanuel Candes discovered the basis!

The underlying reason is that curvelets have better approximation properties for edges, and hence provide a sparser representation of images (most natural images are dominated by edges).
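To make this "everlasting dataset" idea concrete, here is a small Python sketch of my own (it is not from the presentation): the same random measurements y = Phi x are kept, and only the reconstruction basis is swapped. The sizes, the DCT-sparse test signal and the use of scikit-learn's Lasso as an l1 solver are all assumptions made for illustration; a better basis found later (wavelets, curvelets, ...) would slot in the same way.

# Minimal sketch: reuse the SAME measurements, reconstruct under different bases.
# All sizes and the Lasso weight are illustrative assumptions.
import numpy as np
from scipy.fft import idct
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, m, k = 256, 80, 8                    # signal length, measurements, sparsity (assumed)

# Test signal that is k-sparse in the DCT domain (a crude stand-in for piecewise-smooth content)
theta_true = np.zeros(n)
theta_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
Psi_dct = idct(np.eye(n), axis=0, norm='ortho')   # synthesis basis: x = Psi @ theta
x = Psi_dct @ theta_true

# Random Gaussian measurements, taken once and stored "forever"
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ x

def reconstruct(Psi):
    """l1 reconstruction of x from the same y, under a chosen basis Psi."""
    lasso = Lasso(alpha=1e-3, fit_intercept=False, max_iter=50000)
    lasso.fit(Phi @ Psi, y)
    return Psi @ lasso.coef_

for name, Psi in [("pixel basis", np.eye(n)), ("DCT basis", Psi_dct)]:
    x_hat = reconstruct(Psi)
    err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
    print(f"{name:12s} relative error: {err:.3f}")

The point of the sketch is simply that y and Phi never change; only the line that builds Psi does, and the better-suited basis gives the smaller reconstruction error.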

The presentation (excerpted from reference [1]) then goes on to show a comparison between recovery using Compressed Sensing with different bases and normal pixel sampling.


The wedgelets come from this implementation. The conclusions are a nice summary:


"Compressive sampling techniques can yield accurate reconstructions even when the signal dimension greatly exceeds the number of samples, and even when the samples themselves are contaminated with significant levels of noise. CS can be advantageous in noisy imaging problems if the underlying image is highly compressible or if the SNR is sufficiently large"
I can see similar activity coming in the field of data mining, where the underlying data dimension is larger than 2 and where there is currently no reasonable basis on which to decompose datasets. Images are at most about 10 MB and get reduced to about 1 MB as JPEGs (which gives an idea of the compression achieved with the DCT). Compressed Sensing with current bases yields about 4 MB of random measurements, a reduction of about 60%. With this in mind, one wonders how many random measurements would be needed to eventually provide a good reconstruction of different types of datasets. Does taking 47 random measurements of Shakespeare's Hamlet provide enough information to reconstruct it? Maybe not today. If not, when?
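As a back-of-the-envelope check on those numbers, the usual rule of thumb m ~ C k log(n/k) for the number of random measurements lands in the same ballpark. The constant C, the pixel count n, the assumed sparsity k and the one-byte-per-measurement storage are all guesses on my part, not figures from the presentation or from [1].

# Back-of-the-envelope measurement budget using m ~ C * k * log(n / k).
# Every number below is an assumption for illustration.
import math

n = 10_000_000        # pixels in a ~10 MB raw image, assuming 1 byte per pixel
k = 250_000           # transform coefficients kept by a good coder (assumed)
C = 4                 # oversampling constant (assumed)

m = int(C * k * math.log(n / k))
print(f"random measurements m = {m:,}")                              # about 3.7 million
print(f"fraction of original samples = {m / n:.0%}")                 # roughly 40%
print(f"storage = {m / 1e6:.1f} MB at one byte per measurement")     # assumption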

Reference: [1] Jarvis Haupt and Robert Nowak, Compressive Sampling vs. Conventional Imaging.
