Thursday, October 21, 2010

CS: Compressed sensing for wide-field radio interferometric imaging

Today we have a more in-depth look at interferometry with the sphere as the support space, and its connection to compressive sensing, in Compressed sensing for wide-field radio interferometric imaging by Jason D. McEwen and Yves Wiaux. The abstract reads:

For the next generation of radio interferometric telescopes it is of paramount importance to incorporate wide field-of-view (WFOV) considerations in interferometric imaging, otherwise the fidelity of reconstructed images will suffer greatly. We extend compressed sensing techniques for interferometric imaging to a WFOV and recover images in the spherical coordinate space in which they naturally live, eliminating any distorting projection. The effectiveness of the spread spectrum phenomenon, highlighted recently by one of the authors, is enhanced when going to a WFOV, while sparsity is promoted by recovering images directly on the sphere. Both of these properties act to improve the quality of reconstructed interferometric images. We quantify the performance of compressed sensing reconstruction techniques through simulations, highlighting the superior reconstruction quality achieved by recovering interferometric images directly on the sphere rather than the plane.
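The reconstruction problem the abstract describes — recovering a sparse image from an incomplete set of Fourier-domain (visibility-like) measurements — can be sketched in a toy one-dimensional form. This is a minimal illustration only: the random phase modulation below is a loose stand-in for the spread spectrum phenomenon the paper discusses, the recovery is plain iterative soft thresholding rather than the authors' method, and all dimensions and parameters are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, k = 256, 80, 8            # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

# Toy "spread spectrum": a random phase modulation applied before the
# incomplete Fourier-domain sampling spreads the signal's spectrum.
phase = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, n))

rows = rng.choice(n, m, replace=False)
F = np.fft.fft(np.eye(n)) / np.sqrt(n)   # unitary DFT matrix
A = F[rows] * phase                      # masked Fourier rows x modulation

y = A @ x                                # simulated "visibilities"

def soft(z, t):
    # Complex soft-thresholding: the proximal operator of the l1 norm.
    mag = np.abs(z)
    return np.where(mag > t, (1.0 - t / np.maximum(mag, 1e-12)) * z, 0.0)

# ISTA: proximal gradient descent on 0.5*||A x - y||^2 + lam*||x||_1.
# The rows of A are orthonormal here, so a unit step size is valid.
lam = 1e-3
xhat = np.zeros(n, dtype=complex)
for _ in range(1000):
    xhat = soft(xhat + A.conj().T @ (y - A @ xhat), lam)

rel_err = np.linalg.norm(xhat - x) / np.linalg.norm(x)
```

With these proportions (k = 8 nonzeros, m = 80 random Fourier samples of an n = 256 signal) the l1 recovery is essentially exact, which is the generic compressed sensing behaviour the paper builds on before moving to the spherical setting.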

I note from their conclusion:

...Extensions to realistic and continuous visibility coverage and their impact on compressed sensing based interferometric imaging are now of considerable importance. In general, compressed sensing addresses imaging by optimising both reconstruction and acquisition, while we have essentially focused on reconstruction only. The possibility of optimising the configuration of interferometers to enhance the spread spectrum phenomenon for compressed sensing reconstruction is an exciting avenue of research at the level of acquisition. In addition, direction dependent beam effects may also provide an alternative source of the spread spectrum phenomenon...


Anonymous said...

Igor, arXiv:1010.4138 is one that might have been missed by your filtering keywords, but it looks like an interesting result.

abstract: Sparse coding algorithms are about finding a linear basis in which signals can be represented by a small number of active (non-zero) coefficients. Such coding has many applications in science and engineering and is believed to play an important role in neural information processing. However, due to the computational complexity of the task, only approximate solutions provide the required efficiency (in terms of time). As new results show, under particular conditions there exist efficient solutions by minimizing the magnitude of the coefficients (`$l_1$-norm') instead of minimizing the size of the active subset of features (`$l_0$-norm'). Straightforward neural implementation of these solutions is not likely, as they require \emph{a priori} knowledge of the number of active features. Furthermore, these methods utilize iterative re-evaluation of the reconstruction error, which in turn implies that final sparse forms (featuring `population sparseness') can only be reached through the formation of a series of non-sparse representations, which is in contrast with the overall sparse functioning of the neural systems (`lifetime sparseness'). In this article we present a novel algorithm which integrates our previous `$l_0$-norm' model on spike based probabilistic optimization for sparse coding with ideas coming from novel `$l_1$-norm' solutions.
The resulting algorithm allows neurally plausible implementation and does not require an exactly defined sparseness level; thus it is suitable for representing natural stimuli with a varying number of features. We also demonstrate that the combined method significantly extends the domain where optimal solutions can be found by `$l_1$-norm' based algorithms.
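The `$l_1$-norm' route that the abstract contrasts with `$l_0$-norm' subset selection can be sketched in a few lines of numpy. This is the generic LASSO/ISTA scheme, not the authors' spike-based probabilistic algorithm, and the dictionary size, sparsity level, and thresholds are all illustrative. Note that, as the abstract observes, the l1 approach needs no a priori knowledge of how many features are active — the penalty weight alone determines the sparseness of the result.

```python
import numpy as np

rng = np.random.default_rng(1)

d, n, k = 64, 128, 5               # signal dim, dictionary atoms, active features
D = rng.standard_normal((d, n))
D /= np.linalg.norm(D, axis=0)     # unit-norm dictionary atoms

a = np.zeros(n)
support = rng.choice(n, k, replace=False)
a[support] = rng.uniform(1.0, 2.0, k) * rng.choice([-1.0, 1.0], k)
s = D @ a                          # observed signal: sparse combination of atoms

# ISTA: proximal gradient descent on 0.5*||D a - s||^2 + lam*||a||_1.
lam = 0.05
L = np.linalg.norm(D, 2) ** 2      # Lipschitz constant of the data-term gradient
ahat = np.zeros(n)
for _ in range(2000):
    z = ahat - (D.T @ (D @ ahat - s)) / L
    ahat = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold

recovered = set(np.flatnonzero(np.abs(ahat) > 0.1))
rel_err = np.linalg.norm(D @ ahat - s) / np.linalg.norm(s)
```

Each iteration re-evaluates the reconstruction error (the gradient step) before shrinking the coefficients, which is exactly the iterative, initially non-sparse trajectory the abstract points out as neurally implausible, even though the final code is sparse.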

Igor said...

Thanks but no, my filter caught it.

Nuit Blanche is not just about being an instantaneous reflection of arxiv :-)