Wednesday, March 14, 2012

Old and New Algorithms for Blind Deconvolution

I just came across this series of videos from the Machine Learning meets Computational Photography workshop at NIPS11. Of interest is the video by Yair Weiss on Old and New Algorithms for Blind Deconvolution, where I learned about using the kurtosis in blind deconvolution. The presentation is here. A related paper is: Efficient Marginal Likelihood Optimization in Blind Deconvolution by Anat Levin, Yair Weiss, Fredo Durand and Bill Freeman. The abstract reads:

In blind deconvolution one aims to estimate from an input blurred image y a sharp image x and an unknown blur kernel k. Recent research shows that a key to success is to consider the overall shape of the posterior distribution p(x, k|y) and not only its mode. This leads to a distinction between MAPx,k strategies which estimate the mode pair x, k and often lead to undesired results, and MAPk strategies which select the best k while marginalizing over all possible x images. The MAPk principle is significantly more robust than the MAPx,k one, yet, it involves a challenging marginalization over latent images. As a result, MAPk techniques are considered complicated, and have not been widely exploited. This paper derives a simple approximated MAPk algorithm which involves only a modest modification of common MAPx,k algorithms. We show that MAPk can, in fact, be optimized easily, with no additional computational complexity.
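The kurtosis mentioned in Yair's talk enters through the gradient statistics of natural images: sharp images have heavy-tailed (high-kurtosis) gradient distributions, and blur pushes them toward Gaussian. Here is a minimal 1-D toy illustration of that fact (my own sketch, not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def excess_kurtosis(v):
    # E[(v - mean)^4] / var^2 - 3; heavy-tailed distributions score high
    v = v - v.mean()
    return (v ** 4).mean() / ((v ** 2).mean() ** 2) - 3

# toy 1-D "sharp" signal: piecewise constant, so its gradient is sparse
sharp = np.repeat(rng.normal(size=20), 10)
# "blurred" copy: convolving with a box kernel spreads the jumps out
blurred = np.convolve(sharp, np.ones(9) / 9, mode="same")

# blurring makes the gradient distribution more Gaussian-like,
# which lowers its kurtosis
print(excess_kurtosis(np.diff(sharp)) > excess_kurtosis(np.diff(blurred)))
```

This is the intuition behind using gradient kurtosis as a sharpness cue when scoring candidate kernels.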

The attendant code used in this paper is available. In his presentation, Yair shows that his new algorithm is connected to another one based on normalized sparsity:

which really points to this paper by Dilip Krishnan: Blind Deconvolution Using a Normalized Sparsity Measure by Dilip Krishnan, Terence Tay and Rob Fergus. The abstract reads:

Blind image deconvolution is an ill-posed problem that requires regularization to solve. However, many common forms of image prior used in this setting have a major drawback in that the minimum of the resulting cost function does not correspond to the true sharp solution. Accordingly, a range of workaround methods are needed to yield good results (e.g. Bayesian methods, adaptive cost functions, alpha-matte extraction and edge localization). In this paper we introduce a new type of image regularization which gives lowest cost for the true sharp image. This allows a very simple cost formulation to be used for the blind deconvolution model, obviating the need for additional methods. Due to its simplicity the algorithm is fast and very robust. We demonstrate our method on real images with both spatially invariant and spatially varying blur.

The attendant code for this paper is here.
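The normalized sparsity measure itself is just the l1/l2 ratio of the image gradients, which is lowest for the sharp image. A quick 1-D sketch of why (my own illustration, under the assumption of a piecewise-constant signal; see the paper and its code for the real formulation):

```python
import numpy as np

def normalized_sparsity(x):
    # l1/l2 ratio of the gradient -- low when the gradient is sparse
    g = np.diff(x)
    return np.abs(g).sum() / np.sqrt((g ** 2).sum())

# piecewise-constant "sharp" signal: only 3 nonzero gradient entries
sharp = np.repeat([0.0, 1.0, 0.0, 2.0], 25)
# blurring spreads each jump over many small gradient entries, which
# leaves the l1 norm roughly unchanged but shrinks the l2 norm
blurred = np.convolve(sharp, np.ones(11) / 11, mode="same")

# the sharp signal scores lower than its blurred copy
print(normalized_sparsity(sharp) < normalized_sparsity(blurred))
```

Unlike a plain l1 penalty, which a blurry image can minimize by scaling gradients down, the ratio is scale-invariant, which is what makes the true sharp image the minimizer.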

One of the interesting things in all this is the remark I made last year: the ratio of the l1 over the l2 norms used in some regularization approaches [1] also happens to be a parameter used in the KGG necessary and sufficient condition for sparse recovery in compressive sensing [2]. I wonder if, at some point, making a connection between these subjects will yield a deeper understanding.
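The reason the l1/l2 ratio shows up in sparse recovery conditions is elementary: by Cauchy-Schwarz, any k-sparse vector x satisfies ||x||_1 / ||x||_2 &lt;= sqrt(k), so the ratio acts as a proxy for the sparsity level. A quick numerical check of that bound (my own sketch, not tied to either paper's code):

```python
import numpy as np

rng = np.random.default_rng(1)

def l1_over_l2(x):
    return np.abs(x).sum() / np.sqrt((x ** 2).sum())

# for any k-sparse vector, Cauchy-Schwarz gives l1/l2 <= sqrt(k)
for k in [5, 20, 80]:
    x = np.zeros(200)
    support = rng.choice(200, size=k, replace=False)
    x[support] = rng.normal(size=k)
    assert l1_over_l2(x) <= np.sqrt(k)
    print(k, round(l1_over_l2(x), 2))
```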


Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.
