Blind deconvolution is a big deal for anyone involved in hardware and sensor calibration. What is calibration, you ask? For one thing, it is the process of determining the (hopefully linear) transfer function between experimental data and the readings from some sensor. Calibration often feels like a black art: a textbook approach generally focuses on a least-squares treatment when, in fact, in the lab you face all sorts of, shall we say charitably, non-Gaussian errors. Ideally, in the linear case, we want to solve for A in:
Y = A X 
with X given as input and Y given as the sensor's response. It turns out that often one does not know X exactly, only that it has a certain structure or prior. This knowledge can be embedded as:
Y = (AD) C 
where D represents a dictionary of prior waveforms spanned by X, which renders C a somewhat column-wise-sparse matrix. Blind deconvolution is about finding A and X given Y. Most often in the literature, people search for an A with a certain structure (Toeplitz, ...) because the sensor already fits a certain model. In the case where X is known only through its structure, the calibration problem becomes a blind deconvolution one. Blind deconvolution also rears its ugly head when knowing the responses Y to known inputs X still does not allow you to pin down A any better.
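To make the textbook least-squares side of the story concrete, here is a minimal numpy sketch of calibrating A from known input/response pairs in Y = AX (all variable names, dimensions, and noise levels are my own illustration, not from any of the papers below):

```python
import numpy as np

rng = np.random.default_rng(0)

# Unknown sensor transfer matrix A: the object calibration tries to recover.
A_true = rng.standard_normal((8, 5))

# Known calibration inputs X (columns are experiments) and noisy readings Y.
X = rng.standard_normal((5, 20))
Y = A_true @ X + 0.01 * rng.standard_normal((8, 20))

# Textbook least-squares calibration: solve X^T A^T = Y^T for A^T,
# i.e. A_hat = Y X^T (X X^T)^{-1}, computed stably via lstsq.
A_hat, *_ = np.linalg.lstsq(X.T, Y.T, rcond=None)
A_hat = A_hat.T

# Relative recovery error; small because the noise here is tame and Gaussian.
err = np.linalg.norm(A_hat - A_true) / np.linalg.norm(A_true)
```

Of course, this clean recovery is exactly what breaks down with non-Gaussian errors or when X is known only through a prior, which is where the blind formulations below come in.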
It so happens that Phil Schniter just responded to an earlier entry that eventually bears on the subject:
I would like to mention that, for about a year now, we've been working on more general (e.g., rank larger than 1) versions of this idea, which we refer to as "Bilinear GAMP" or BiG-AMP, because the problem is that of recovering a pair of matrices (D,X) from a matrix observation of the form Y=L(D*X), where L(.) is some linear transformation, and thus the observations are bilinear in the unknowns. Notice that, by choosing appropriate priors on D and X, one can obtain algorithms for matrix completion, robust PCA, dictionary learning, or any combination of those problems (e.g., robust dictionary learning from incomplete observations). Moreover, all hyperparameters can be learned automatically using an EM procedure similar to that of EM-GM-AMP, and structured sparsity/amplitudes can be incorporated using a procedure similar to our "turbo AMP" work.
While preliminary results on BiG-AMP appeared at SPARS 2011 (http://www2.ece.ohio-state.edu/~schniter/pdf/spars11_calibration.pdf) and ITA 2012 (http://www2.ece.ohio-state.edu/~schniter/pdf/ita12_slides.pdf), we have made a lot of progress recently and hope to release the full paper and matlab code within the next few weeks. We think you will be impressed with the results!
The two papers/posters are: Bilinear Generalized Approximate Message Passing (BiG-AMP) for Matrix Recovery Problems by Phil Schniter and Volkan Cevher
and Approximate Message Passing for Bilinear Models by Phil Schniter and Volkan Cevher
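To see why the bilinear model Y = L(D*X) mentioned in the comment is genuinely harder than the linear case, here is a small numpy sketch of the forward model. It uses entry-wise subsampling as one possible choice of L(.) (the matrix-completion case the comment mentions); the dimensions and the mask are my own toy choices:

```python
import numpy as np

rng = np.random.default_rng(1)

m, r, n = 10, 3, 12
D = rng.standard_normal((m, r))    # unknown factor 1 (e.g., a dictionary)
X = rng.standard_normal((r, n))    # unknown factor 2 (e.g., sparse codes)
Z = D @ X                          # rank-r product: the bilinear term

# One choice of linear operator L(.): keep ~60% of the entries (completion).
mask = rng.random((m, n)) < 0.6
Y = np.where(mask, Z, 0.0)

# The observations are bilinear in (D, X): scaling D by c and X by 1/c
# leaves Y unchanged, the inherent ambiguity any recovery method must handle.
c = 2.0
Y_scaled = np.where(mask, (c * D) @ (X / c), 0.0)
amb = np.linalg.norm(Y - Y_scaled)
```

Resolving that scaling (and more general) ambiguity is precisely where the priors on D and X earn their keep.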
I am anxious to see some developments there, as I want to see some of the speed the AMP algorithm has shown in compressive sensing flood the matrix factorization market. Meanwhile, the blind deconvolution literature also seems to show some progress, in particular in the compressive sensing case. The next paper features a blind deconvolution approach to the CASSI system, a coded aperture compressive hyperspectral imager. In this case, as in the second paper, some structure is known about A through its Toeplitz structure.
Coded Hyperspectral Imaging and Blind Compressive Sensing by Ajit Rajwade, David Kittle, Tsung-Han Tsai, David Brady and Lawrence Carin. The abstract reads:
Blind compressive sensing (CS) is considered for reconstruction of hyperspectral data imaged by a coded aperture camera. The measurements are manifested as a superposition of the coded wavelength-dependent data, with the ambient three-dimensional hyperspectral datacube mapped to a two-dimensional measurement. The hyperspectral datacube is recovered using a Bayesian implementation of blind CS. Several demonstration experiments are presented, including measurements performed using a coded aperture snapshot spectral imager (CASSI) camera. The proposed approach is capable of efficiently reconstructing large hyperspectral datacubes, including hyperspectral video. Comparisons are made between the proposed algorithm and other techniques employed in compressive sensing, dictionary learning and matrix factorization.
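The mapping of a 3D datacube to a 2D measurement that the abstract describes can be sketched with a simplified CASSI-style forward model: each spectral band is modulated by a binary coded aperture, sheared by one pixel per band, and summed onto the detector. All sizes, the one-pixel-per-band shear, and the binary code are my own toy assumptions, not details taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

H, W, L = 16, 16, 4             # spatial size and number of spectral bands
cube = rng.random((H, W, L))    # toy hyperspectral datacube (the unknown X)
code = (rng.random((H, W)) < 0.5).astype(float)  # binary coded aperture

# Simplified CASSI forward model: modulate each band by the code, shift it
# by k pixels along one axis (dispersion), and accumulate on the detector.
y = np.zeros((H, W + L - 1))
for k in range(L):
    y[:, k:k + W] += code * cube[:, :, k]
```

The L spectral bands end up superposed in a single 2D frame, which is exactly why a blind CS / dictionary-learning prior is needed to disentangle them at reconstruction time.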
Blind Deconvolution of Sparse Pulse Sequences under a Minimum Distance Constraint: A Partially Collapsed Gibbs Sampler Method by Georg Kail, Jean-Yves Tourneret, Franz Hlawatsch, and Nicolas Dobigeon. The abstract reads:
For blind deconvolution of an unknown sparse sequence convolved with an unknown pulse, a powerful Bayesian method employs the Gibbs sampler in combination with a Bernoulli-Gaussian prior modeling sparsity. In this paper, we extend this method by introducing a minimum distance constraint for the pulses in the sequence. This is physically relevant in applications including layer detection, medical imaging, seismology, and multipath parameter estimation. We propose a Bayesian method for blind deconvolution that is based on a modified Bernoulli-Gaussian prior including a minimum distance constraint factor. The core of our method is a partially collapsed Gibbs sampler (PCGS) that tolerates and even exploits the strong local dependencies introduced by the minimum distance constraint. Simulation results demonstrate significant performance gains compared to a recently proposed PCGS. The main advantages of the minimum distance constraint are a substantial reduction of computational complexity and of the number of spurious components in the deconvolution result.
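The observation model in this last abstract is easy to simulate: a Bernoulli-Gaussian spike train whose spikes are at least d_min samples apart, convolved with an unknown pulse. Here is a toy numpy sketch of that forward model (the sequential spike-drawing scheme, the Gaussian pulse, and all parameters are my own illustration, not the paper's sampler):

```python
import numpy as np

rng = np.random.default_rng(3)

N, d_min = 200, 10   # signal length and minimum distance between spikes

# Draw spike locations sequentially, enforcing the minimum-distance constraint:
# each new spike lands at least d_min samples after the previous one.
locs = []
t = 0
while t < N:
    t += d_min + int(rng.integers(0, 20))
    if t < N:
        locs.append(t)

x = np.zeros(N)
x[locs] = rng.standard_normal(len(locs))   # Bernoulli-Gaussian amplitudes

# Unknown pulse (here a toy Gaussian shape); the observation is their convolution.
pulse = np.exp(-0.5 * (np.arange(-5, 6) / 2.0) ** 2)
y = np.convolve(x, pulse)

gaps = np.diff(locs)   # every gap is >= d_min by construction
```

The blind problem is then to recover both x and pulse from y alone, which is where the constraint helps: it rules out the closely spaced spurious spikes that plague unconstrained Bernoulli-Gaussian deconvolution.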
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.