When I wrote about A Quick Panorama of Sensing from Direct Imaging to Machine Learning, I was merely drawing a parallel between two approaches. A year later, we are seeing another aspect of The Great Convergence: two ways of attacking the same task, blind deconvolution. The compressive sensing approach determines the sample complexity (how many images must be taken for the task) and eventually the transfer function, arguing through concentration-of-measure results; the empirically driven convolutional neural network approach uses stacked encoders, lots of data, and a single sample picture to figure out the transfer function. What is interesting is that the two approaches seem to be converging on their priors, which I think is the most important part of the convergence process. Without further ado:

A connection between matrix completion and blind deconvolution, outstanding!

Lifting for Blind Deconvolution in Random Mask Imaging: Identifiability and Convex Relaxation by Sohail Bahmani, Justin Romberg

In this paper we analyze the blind deconvolution of an image and an unknown blur in a coded imaging system. The measurements consist of a subsampled convolution of an unknown blurring kernel with multiple random binary modulations (coded masks) of the image. To perform the deconvolution, we consider a standard lifting of the image and the blurring kernel that transforms the measurements into a set of linear equations in the matrix formed by their outer product. Any rank-one solution to this system of equations provides a valid image and blur pair.
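To make the lifting step concrete, here is a minimal 1-D numpy sketch (my own toy construction, not the paper's code). It checks that the coded-mask convolution measurements, which are bilinear in the blur h and the image x, become linear in the lifted matrix X = h xᵀ:

```python
import numpy as np

rng = np.random.default_rng(0)
L = 8
h = rng.standard_normal(L)           # unknown blur kernel
x = rng.standard_normal(L)           # unknown image (1-D for illustration)
d = rng.choice([-1.0, 1.0], size=L)  # one random binary coded mask

# Direct measurement: circular convolution of the blur with the masked image.
y_direct = np.array([sum(h[k] * d[(n - k) % L] * x[(n - k) % L]
                         for k in range(L)) for n in range(L)])

# Lifted view: the same measurements are LINEAR in the rank-one matrix
# X = h x^T, i.e. y[n] = <A[n], X> for fixed tensors A[n] built from the mask.
A = np.zeros((L, L, L))
for n in range(L):
    for k in range(L):
        j = (n - k) % L
        A[n, k, j] = d[j]
X = np.outer(h, x)
y_lifted = np.einsum('nkj,kj->n', A, X)

assert np.allclose(y_direct, y_lifted)
```

The bilinear problem in (h, x) thus becomes a linear inverse problem over matrices, where the unknown is constrained to be rank one.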

We first express the necessary and sufficient conditions for the uniqueness of a rank-one solution under some additional assumptions (uniform subsampling and no limit on the number of coded masks). These conditions are a special case of a previously established result on identifiability in the matrix completion problem. We also characterize a low-dimensional subspace model for the blur kernel that is sufficient to guarantee identifiability, including the interesting instance of "bandpass" blur kernels.

Next, we show that for the bandpass model for the blur kernel, the image and the blur kernel can be found using nuclear norm minimization. Our main results show that recovery is achieved (with high probability) when the number of masks is on the order of μ log² L log(Le/μ) log log(N + 1), where μ is the *coherence* of the blur, L is the dimension of the image, and N is the number of measured samples per mask.
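For readers who want to see what nuclear norm minimization looks like on the lifted problem, here is a toy numpy sketch. It is emphatically not the authors' algorithm: I use a plain proximal-gradient loop with singular-value thresholding as a simple stand-in for a full nuclear-norm solver, and all dimensions, mask counts, and parameters are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
L, M = 8, 12                    # signal length, number of coded masks (toy values)
h = rng.standard_normal(L)
x = rng.standard_normal(L)
masks = rng.choice([-1.0, 1.0], size=(M, L))

# Stack the lifted linear operator over all masks: y = A vec(h x^T).
rows, y = [], []
for d in masks:
    for n in range(L):
        a = np.zeros((L, L))
        for k in range(L):
            a[k, (n - k) % L] = d[(n - k) % L]
        rows.append(a.ravel())
        y.append(a.ravel() @ np.outer(h, x).ravel())
A, y = np.array(rows), np.array(y)

# Proximal gradient with singular-value thresholding: a minimal surrogate for
#   min ||X||_*  subject to  A vec(X) = y.
X = np.zeros((L, L))
eta = 1.0 / np.linalg.norm(A, 2) ** 2   # step size from the spectral norm of A
thresh = 1e-3                           # shrinkage level (kept small)
for _ in range(2000):
    G = X - eta * (A.T @ (A @ X.ravel() - y)).reshape(L, L)
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    X = U @ np.diag(np.maximum(s - thresh, 0.0)) @ Vt

# The leading singular pair of X estimates (h, x), up to the inherent
# scale/sign ambiguity of blind deconvolution.
U, s, Vt = np.linalg.svd(X)
h_hat, x_hat = U[:, 0] * np.sqrt(s[0]), Vt[0] * np.sqrt(s[0])
```

The scale ambiguity in the last line is fundamental: (αh, x/α) produces the same measurements for any α ≠ 0, so only the outer product is identifiable.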

Learning to Deblur by Christian J. Schuler, Michael Hirsch, Stefan Harmeling, Bernhard Schölkopf

We describe a learning-based approach to blind image deconvolution. It uses a deep layered architecture, parts of which are borrowed from recent work on neural network learning, and parts of which incorporate computations that are specific to image deconvolution. The system is trained end-to-end on a set of artificially generated training examples, enabling competitive performance in blind deconvolution, both with respect to quality and runtime.
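The authors' architecture is a deep, deconvolution-specific network, which I won't reproduce here. As a hedged, minimal stand-in for the train-on-synthetic-blurs idea, here is a 1-D sketch that generates artificially blurred training pairs and fits a single linear layer by least squares (the sizes, the 1-D setting, and the linear model are all my own simplifications):

```python
import numpy as np

rng = np.random.default_rng(2)
L, n_train = 32, 200  # signal length and number of synthetic training pairs

def random_blur():
    """A random nonnegative, normalized 5-tap blur kernel (toy model)."""
    k = np.abs(rng.standard_normal(5))
    return k / k.sum()

# Artificially generated training set: sharp signals and their blurred versions.
X_blur, X_sharp = [], []
for _ in range(n_train):
    s = rng.standard_normal(L)
    X_sharp.append(s)
    X_blur.append(np.convolve(s, random_blur(), mode='same'))
X_blur, X_sharp = np.array(X_blur), np.array(X_sharp)

# "Train" a single linear layer W mapping blurred -> sharp by least squares,
# a crude linear analogue of end-to-end training on synthetic examples.
W, *_ = np.linalg.lstsq(X_blur, X_sharp, rcond=None)

# Apply the learned layer to a freshly blurred signal.
s_new = rng.standard_normal(L)
restored = np.convolve(s_new, random_blur(), mode='same') @ W
```

A single linear layer can only learn an average inverse filter across the blur distribution; the paper's point is precisely that a deep, deconvolution-aware architecture does much better, in both quality and runtime.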

**Join the CompressiveSensing subreddit or the Google+ Community and post there!**

Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.
