Recent advances in signal processing have focused on the use of sparse representations in various applications. A new field of interest based on sparsity has recently emerged: compressed sensing. This theory is a new sampling framework that provides an alternative to the well-known Shannon sampling theory. In this paper we investigate how compressed sensing (CS) can provide new insights into astronomical data compression and, more generally, how it paves the way for new conceptions in astronomical remote sensing. We first give a brief overview of the compressed sensing theory, which provides a very simple coding process with low computational cost, thus favoring its use for real-time applications often found on board space missions. We introduce a practical and effective recovery algorithm for decoding compressed data. In astronomy, physical prior information is often crucial for devising effective signal processing methods. We particularly point out that a CS-based compression scheme is flexible enough to account for such information. In this context, compressed sensing is a new framework in which data acquisition and data processing are merged. We also show that CS provides a fantastic new way to handle multiple observations of the same field of view, allowing us to recover information at very low signal-to-noise ratio, which is impossible with standard compression methods. This CS data fusion concept could lead to an elegant and effective way to solve the problem ESA is faced with for the transmission to Earth of the data collected by PACS, one of the instruments on board the Herschel spacecraft, which will be launched in 2008.
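What makes CS attractive for on-board use is that the coding step is nothing more than a handful of inner products with a fixed measurement operator. Here is a minimal sketch of mine of that step, with a dense Gaussian matrix standing in for the structured (noiselet-based) measurements favored in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 64 * 64          # pixels in one detector frame
m = n // 6           # keep ~1/6 of the samples, i.e. a compression ratio of 6

# Illustrative measurement matrix; on board one would use a structured operator
# (noiselets, random convolutions) so nothing this large ever needs to be stored.
Phi = rng.standard_normal((m, n)) / np.sqrt(m)

frame = rng.poisson(5.0, size=n).astype(float)   # stand-in for a detector frame

# The whole "compression" step: m inner products, no entropy coder, no search.
y = Phi @ frame
print(y.shape)       # (682,)
```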
I think this is one of the first studies that will have a direct bearing on the actual implementation of full-sized hardware (and space hardware at that). As the authors point out:
The Herschel satellite, which will be launched in 2008, is faced with a similar problem. Indeed, the photometer data need to be compressed by a factor of 16 to be transferred. The lossless compression scheme implemented so far (based on entropy coding) yields a compression rate of 2.5. ESA is in need of a compression ratio of 6. As the CPU load has to be extremely small, conventional compression methods cannot be used.
But more interestingly, they connect their approach to coded aperture imaging, as mentioned before here, here and here:
Interestingly, such a measurement paradigm is far from being science fiction. Indeed, in the field of gamma-ray imaging, so-called coded masks (see [39] and references therein) have been used since the sixties and are currently operating in the ESA/Integral space mission. In gamma-ray (high energy) imaging, coded masks are used as aperture masks scattering the incoming gamma photons. More formally, the couple (coded aperture mask and detector field) is equivalent to selecting some projections in the Fourier space. In coded aperture imaging, the way the mask is designed is likely to simulate incoherent projections. Furthermore, gamma-ray data are often made of point sources that are almost sparse in the pixel domain. Fourier measurements then provide near optimal incoherent projections. The first application of compressed sensing then dates back to the sixties! In the compressed sensing community, the coded mask concept has inspired the design of the celebrated "compressed sensing camera" [40], which provides effective image compression with a single pixel.
In coded aperture imaging, the decoding step is often performed by iterative techniques based on maximum entropy [41]. Applying a sparsity-based recovery technique as advocated by the compressed sensing theory would probably provide enhancements.
"More formally, the couple (coded aperture mask and detector field) is equivalent to selecting some projections in the Fourier space.."
And then they use the noiselet transform. It should be in the public domain, or I am going to have to implement it as a Monday Morning Algorithm one of these days. Any takers? The interesting part of the paper is the development of a new iterative thresholding reconstruction algorithm called ProxIT. The algorithm seems to be able to take on large images.
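The ProxIT details are in the paper; as a reminder of what this family of decoders looks like, here is a bare-bones iterative soft-thresholding (ISTA) sketch on a generic Gaussian sensing matrix. This is not the authors' algorithm, just the textbook iteration it builds on:

```python
import numpy as np

def ista(A, y, lam=0.05, iters=500):
    """Plain iterative soft thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return x

# Tiny demo: recover a 10-sparse vector from 80 random projections.
rng = np.random.default_rng(2)
n, m, k = 256, 80, 10
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_hat = ista(A, A @ x_true, lam=0.01, iters=2000)
print("max coefficient error:", np.abs(x_hat - x_true).max())
```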
Probing the Pareto frontier for basis pursuit solutions by Ewout van den Berg and Michael P. Friedlander. The abstract reads:
The basis pursuit problem seeks a minimum one-norm solution of an underdetermined least-squares problem. Basis pursuit denoise (BPDN) fits the least-squares problem only approximately, and a single parameter determines a curve that traces the optimal trade-off between the least-squares fit and the one-norm of the solution. We prove that this curve is convex and continuously differentiable over all points of interest, and show that it gives an explicit relationship to two other optimization problems closely related to BPDN. We describe a root-finding algorithm for finding arbitrary points on this curve; the algorithm is suitable for problems that are large scale and for those that are in the complex domain. At each iteration, a spectral gradient-projection method approximately minimizes a least-squares problem with an explicit one-norm constraint. Only matrix-vector operations are required. The primal-dual solution of this problem gives function and derivative information needed for the root-finding method. Numerical experiments on a comprehensive set of test problems demonstrate that the method scales well to large problems.

A presentation on the subject was mentioned before.
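If you want to play with the idea without the authors' code, the skeleton is simple: an inner solver for the one-norm-constrained least-squares subproblem, and Newton root-finding on the value function using the derivative formula from the paper. A rough sketch of mine, with plain projected gradient standing in for their spectral gradient-projection method:

```python
import numpy as np

def project_l1_ball(v, tau):
    """Euclidean projection of v onto the l1 ball of radius tau (sort-based method)."""
    if tau <= 0:
        return np.zeros_like(v)
    if np.abs(v).sum() <= tau:
        return v
    u = np.sort(np.abs(v))[::-1]
    cssv = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(u) + 1) > (cssv - tau))[0][-1]
    theta = (cssv[rho] - tau) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def lasso_tau(A, b, tau, iters=300):
    """Projected gradient for the subproblem: min ||Ax - b||_2 s.t. ||x||_1 <= tau."""
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2
    for _ in range(iters):
        x = project_l1_ball(x - A.T @ (A @ x - b) / L, tau)
    return x

def bpdn_via_pareto(A, b, sigma, outer=15):
    """Newton root-finding on phi(tau) = ||b - A x_tau||_2 - sigma,
    using the paper's derivative phi'(tau) = -||A^T r||_inf / ||r||_2."""
    tau, x = 0.0, np.zeros(A.shape[1])
    for _ in range(outer):
        x = lasso_tau(A, b, tau)
        r = b - A @ x
        phi = np.linalg.norm(r)
        if phi <= sigma * 1.01:
            break
        dphi = -np.linalg.norm(A.T @ r, np.inf) / max(phi, 1e-12)
        tau -= (phi - sigma) / dphi
    return x

# Tiny demo on a noisy sparse recovery problem.
rng = np.random.default_rng(3)
n, m, k = 200, 80, 8
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
b = A @ x_true + 0.01 * rng.standard_normal(m)
x_hat = bpdn_via_pareto(A, b, sigma=0.01 * np.sqrt(m))
print("recovery error:", np.linalg.norm(x_hat - x_true))
```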
It is well known that the support of a sparse signal can be recovered from a small number of random projections. However, in the presence of noise all known sufficient conditions require that the per-sample signal-to-noise ratio (SNR) grows without bound with the dimension of the signal. If the noise is due to quantization of the samples, this means that an unbounded rate per sample is needed. In this paper, it is shown that an unbounded SNR is also a necessary condition for perfect recovery, but any fraction (less than one) of the support can be recovered with bounded SNR. This means that a finite rate per sample is sufficient for partial support recovery. Necessary and sufficient conditions are given for both stochastic and non-stochastic signal models. This problem arises in settings such as compressive sensing, model selection, and signal denoising.
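A quick way to get a feel for the partial recovery regime is a small simulation: keep the per-sample SNR fixed, recover the support greedily, and look at what fraction of the true support comes back. A toy sketch with a naive OMP (my choice of parameters, not the paper's setup):

```python
import numpy as np

def omp(A, y, k):
    """Naive orthogonal matching pursuit returning an estimated support of size k."""
    r, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ r)))
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef
    return set(support)

rng = np.random.default_rng(4)
n, m, k, snr = 512, 128, 16, 10.0          # per-sample SNR held fixed, not growing with n

x = np.zeros(n)
true_support = rng.choice(n, k, replace=False)
x[true_support] = 1.0
A = rng.standard_normal((m, n)) / np.sqrt(m)
signal = A @ x
noise = rng.standard_normal(m) * np.linalg.norm(signal) / np.sqrt(m * snr)
est = omp(A, signal + noise, k)

frac = len(est & set(true_support.tolist())) / k
print(f"fraction of support recovered: {frac:.2f}")
```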
Julien Mairal, Guillermo Sapiro, and Michael Elad, Multiscale sparse image representation with learned dictionaries. The abstract reads:
This paper introduces a new framework for learning multiscale sparse representations of natural images with overcomplete dictionaries. Our work extends the K-SVD algorithm [1], which learns sparse single-scale dictionaries for natural images. Recent work has shown that the K-SVD can lead to state-of-the-art image restoration results [2, 3]. We show that these are further improved with a multiscale approach, based on a Quadtree decomposition. Our framework provides an alternative to multiscale pre-defined dictionaries such as wavelets, curvelets, and contourlets, with dictionaries optimized for the data and application instead of pre-modelled ones.
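For those who want to poke at the single-scale baseline, scikit-learn ships a patch-based dictionary learner; here is a rough sketch of mine on synthetic data (the quadtree/multiscale part, which is the paper's contribution, is not reproduced here):

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

rng = np.random.default_rng(5)

# Stand-in image; in practice you would train on real astronomical or natural images.
image = rng.random((128, 128))

# Extract 8x8 patches, flatten them and remove their mean (the usual preprocessing).
patches = extract_patches_2d(image, (8, 8), max_patches=2000, random_state=0)
X = patches.reshape(len(patches), -1)
X = X - X.mean(axis=1, keepdims=True)

# Single-scale learned dictionary; the paper repeats this kind of training
# per scale of a quadtree decomposition, which is not done in this sketch.
dico = MiniBatchDictionaryLearning(n_components=64, alpha=1.0, random_state=0)
codes = dico.fit(X).transform(X)

reconstruction = codes @ dico.components_
print("mean sparse-coding error per patch:",
      np.linalg.norm(X - reconstruction, axis=1).mean())
```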
Photo credit: ESA, mirror of the Herschel spacecraft.