Friday, October 18, 2013

Direct deconvolution of radio synthesis images using L1 minimisation - implementation -

Stephen Hardy just sent me the following:

Hi Igor,

I recently published a paper in Astronomy and Astrophysics describing an algorithm (called SL1M) that applies L1 minimisation to allow direct deconvolution of radio synthesis images. The new part here is the on-demand calculation of the projection matrix from the image space onto the observed interferometric visibilities, used within an iterative thresholding algorithm. This allows arbitrary image pixel placement, non-coplanar baselines and direction-dependent gains to be modelled. The calculations can be done exactly or with several different approximations. Combining this technique with estimation of the regularisation parameter has the potential to produce a parameter-free radio deconvolution algorithm.

Implementation (tuned to EC2 GPU instances): https://github.com/StephenJHardy/SL1M

Hope this is of interest to you - it certainly is to me!

cheers,
Stephen Hardy
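
To make the "on-demand calculation of the projection matrix" a bit more concrete: the matrix mapping pixel fluxes to visibilities never needs to be stored, since each of its entries can be evaluated on the fly from the pixel positions and the (u, v, w) coordinates of each visibility. Below is a minimal NumPy sketch of such a matrix-free forward operator and its adjoint for Dirac-delta pixels, including the w-term for non-coplanar baselines. The function names and conventions are illustrative only and are not taken from the SL1M code, which targets GPUs.

import numpy as np

def forward_op(flux, l, m, u, v, w):
    """Predict complex visibilities from point-source (Dirac-delta) pixels
    at direction cosines (l, m), including the w-term for non-coplanar
    baselines. Each row of the implicit pixel-to-visibility matrix is
    evaluated on demand; nothing is gridded or stored."""
    n = np.sqrt(1.0 - l**2 - m**2)                  # third direction cosine
    phase = -2j * np.pi * (np.outer(u, l)
                           + np.outer(v, m)
                           + np.outer(w, n - 1.0))  # (n_vis, n_pix) phases
    return np.exp(phase) @ flux                     # predicted visibilities

def adjoint_op(vis, l, m, u, v, w):
    """Adjoint of forward_op: back-project (residual) visibilities onto the
    arbitrarily placed pixel positions, i.e. a 'dirty image' sampled at
    those positions."""
    n = np.sqrt(1.0 - l**2 - m**2)
    phase = 2j * np.pi * (np.outer(l, u)
                          + np.outer(m, v)
                          + np.outer(n - 1.0, w))   # conjugate phases
    return np.real(np.exp(phase) @ vis)

Because the pixel positions (l, m) are just arrays, nothing here assumes a regular grid, which is what allows the arbitrary pixel placement mentioned above.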



Thanks Stephen ! As an aside, I wonder if a combination of Earth rotation and antenna manipulation could provide the type of sampling required by the recent infinite-dimensional CS business. Anyway, here is the paper: Direct deconvolution of radio synthesis images using L1 minimisation by Stephen J. Hardy

We introduce an algorithm for the deconvolution of radio synthesis images that accounts for the non-coplanar-baseline effect, allows multiscale reconstruction onto arbitrarily positioned pixel grids, and allows the antenna elements to have direction-dependent gains.
Methods. Using numerical L1-minimisation techniques established in the application of compressive sensing to radio astronomy, we directly solve the deconvolution equation using graphics processing unit (GPU) hardware. This approach relies on an analytic expression for the contribution of a pixel in the image to the observed visibilities, and the well-known expression for Dirac delta function pixels is used along with two new approximations for Gaussian pixels, which allow for multi-scale deconvolution. The algorithm is similar to the CLEAN algorithm in that it fits the reconstructed pixels in the image to the observed visibilities while minimising the total flux; however, unlike CLEAN, it operates on the ungridded visibilities, enforces positivity, and has guaranteed global convergence. The pixels in the image can be arbitrarily distributed and arbitrary gains between each pixel and each antenna element can also be specified.
Results. Direct deconvolution of the observed visibilities is shown to be feasible for several deconvolution problems, including a 1 megapixel wide-field image with over 400 000 visibilities. Correctness of the algorithm is shown using synthetic data, and the algorithm shows good image reconstruction performance for wide field images and requires no regridding of visibilities. Though this algorithm requires significantly more computation than methods based on the CLEAN algorithm, we demonstrate that it is trivially parallelisable across multiple GPUs and potentially can be scaled to GPU clusters. We also demonstrate that a significant speed up is possible through the use of multi-scale analysis using Gaussian pixels.
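
As a rough illustration of the fitting loop described in the abstract - minimise the misfit to the ungridded visibilities under an L1 penalty while enforcing positivity - here is a plain iterative soft-thresholding (ISTA-style) sketch driven by matrix-free forward/adjoint operators such as those above. This is a toy for intuition only: the step size, regularisation parameter lam and iteration count are illustrative, and SL1M's actual iteration, Gaussian-pixel approximations and GPU parallelisation are described in the paper and the repository.

import numpy as np

def l1_deconvolve(vis, A, AH, n_pix, lam, step, n_iter=200):
    """Minimise 0.5*||vis - A(x)||^2 + lam*||x||_1 subject to x >= 0 by
    plain iterative soft-thresholding. A maps pixel fluxes to predicted
    visibilities, AH is its adjoint; step should not exceed 1/||A||^2."""
    x = np.zeros(n_pix)
    for _ in range(n_iter):
        residual = A(x) - vis                # misfit to the ungridded data
        x = x - step * AH(residual)          # gradient step in image space
        x = np.maximum(x - step * lam, 0.0)  # soft-threshold + positivity
    return x

# Hypothetical usage with the matrix-free operators sketched earlier:
#   A  = lambda x: forward_op(x, l, m, u, v, w)
#   AH = lambda r: adjoint_op(r, l, m, u, v, w)
#   x_hat = l1_deconvolve(vis_obs, A, AH, l.size, lam=1e-3, step=1e-4)

A production solver would typically use an accelerated variant of this iteration and, as Stephen notes above, pair it with automatic estimation of the regularisation parameter to move toward a parameter-free deconvolution.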

