Tuesday, November 15, 2016

Rapid, Robust, and Reliable Blind Deconvolution via Nonconvex Optimization / Self-Calibration via Linear Least Squares

 
 
 
Shuyang just sent me the following:
 
 
  Dear Igor,

I am a graduate student at UC Davis, working with Prof. Thomas Strohmer.
I am also a big fan of your blog Nuit Blanche and enjoy reading about recent advances in compressive sensing, machine learning...

This year, our group has two papers on self-calibration and blind deconvolution. 

1. Rapid, robust and reliable blind deconvolution via nonconvex optimization, https://arxiv.org/pdf/1606.04933.pdf
This paper is about a fast and provable nonconvex approach to solving blind deconvolution under a subspace constraint.

2. Self-calibration via linear least squares, https://arxiv.org/pdf/1611.04196v1.pdf
This work is more recent. We consider several interesting models of self-calibration problems and then solve them with simple linear least squares after a proper transformation. The results come with rigorous theoretical guarantees.

I am wondering if these would be interesting to your audience. Thanks for your time and attention!

Best regards,
Shuyang

 Thanks Shuyang ! Here are the papers:
 

Rapid, Robust, and Reliable Blind Deconvolution via Nonconvex Optimization by Xiaodong Li, Shuyang Ling, Thomas Strohmer, Ke Wei

We study the question of reconstructing two signals f and g from their convolution y=f∗g. This problem, known as blind deconvolution, pervades many areas of science and technology, including astronomy, medical imaging, optics, and wireless communications. A key challenge of this intricate non-convex optimization problem is that it might exhibit many local minima. We present an efficient numerical algorithm that is guaranteed to recover the exact solution when the number of measurements is (up to log-factors) slightly larger than the information-theoretic minimum, and under reasonable conditions on f and g. The proposed regularized gradient descent algorithm converges at a geometric rate and is provably robust in the presence of noise. To the best of our knowledge, our algorithm is the first blind deconvolution algorithm that is numerically efficient, robust against noise, and comes with rigorous recovery guarantees under certain subspace conditions. Moreover, numerical experiments not only provide empirical verification of our theory, but also demonstrate that our method yields excellent performance even in situations beyond our theoretical framework.
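For readers who want to get a feel for the approach, here is a minimal numerical sketch of the two ingredients the abstract mentions: a spectral initialization followed by plain gradient descent on the subspace-constrained bilinear fit. It is a real-valued toy (the paper works in the complex Fourier domain), it omits the regularization terms that make the guarantees go through, and all sizes and variable names are illustrative rather than taken from the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes: L measurements, unknowns h0 (dim K) and x0 (dim N) living in
# known subspaces spanned by the columns of B and A.
L, K, N = 256, 10, 10
B = rng.standard_normal((L, K)) / np.sqrt(L)
A = rng.standard_normal((L, N)) / np.sqrt(L)
h0 = rng.standard_normal(K)
x0 = rng.standard_normal(N)

# Fourier-domain form of the convolution: entrywise product of the two
# subspace-constrained signals (real-valued simplification).
y = (B @ h0) * (A @ x0)

# Spectral initialization: the top singular pair of sum_l y_l b_l a_l^T
# approximates the rank-one matrix h0 x0^T.
M = B.T @ (y[:, None] * A)
U, s, Vt = np.linalg.svd(M)
h = np.sqrt(s[0]) * U[:, 0]
x = np.sqrt(s[0]) * Vt[0, :]
z = (B @ h) * (A @ x)
x *= (z @ y) / (z @ z)          # fix the overall scale/sign by a 1-D least squares fit

# Plain gradient descent on 0.5 * || (Bh) * (Ax) - y ||^2 (the paper adds
# regularizers and proves geometric convergence; this is the bare version).
step = 0.5
for _ in range(10_000):
    r = (B @ h) * (A @ x) - y
    grad_h = B.T @ (r * (A @ x))
    grad_x = A.T @ (r * (B @ h))
    h -= step * grad_h
    x -= step * grad_x

# (h, x) is only identifiable up to a scalar, so compare outer products.
err = np.linalg.norm(np.outer(h, x) - np.outer(h0, x0)) / np.linalg.norm(np.outer(h0, x0))
print(f"relative error of h x^T: {err:.2e}")
```

The scalar ambiguity between h and x is intrinsic to blind deconvolution, which is why the error is measured on the outer product rather than on h and x separately.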


Self-Calibration via Linear Least Squares by Shuyang Ling, Thomas Strohmer

Whenever we use devices to take measurements, calibration is indispensable. While the purpose of calibration is to reduce bias and uncertainty in the measurements, it can be quite difficult, expensive, and sometimes even impossible to implement. We study a challenging problem called self-calibration, i.e., the task of designing an algorithm for devices so that the algorithm is able to perform calibration automatically. More precisely, we consider the setup y=A(d)x+ϵ where only partial information about the sensing matrix A(d) is known and where A(d) depends linearly on d. The goal is to estimate the calibration parameter d (resolving the uncertainty in the sensing process) and the signal/object of interest x simultaneously. For three different models of practical relevance we show how such a bilinear inverse problem, including blind deconvolution as an important example, can be solved via a simple linear least squares approach. As a consequence, the proposed algorithms are numerically extremely efficient, thus allowing for real-time deployment. Explicit theoretical guarantees and stability theory are derived, and the sampling complexity is nearly optimal (up to a poly-log factor). Applications in imaging sciences and signal processing are discussed, and numerical simulations are presented to demonstrate the effectiveness and efficiency of our approach.
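As a concrete illustration of the linearization idea, here is a small sketch for the simplest gain-calibration setting: unknown diagonal gains d acting on several snapshots, i.e. y_i = diag(d) A x_i. Writing s = 1/d (entrywise) turns the bilinear equations into a linear system in (s, x_1, ..., x_p), which is then solved by ordinary least squares after adding a normalization that removes the global scaling ambiguity. The paper treats richer models (including d constrained to a known subspace) and gives the corresponding guarantees; the sizes and names below are only illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy sizes: L sensors with unknown gains d, p snapshots, signals in R^N.
L, N, p = 64, 8, 4
A = rng.standard_normal((L, N))
d = 1.0 + 0.5 * rng.standard_normal(L)     # unknown gains, bounded away from zero
X = rng.standard_normal((N, p))            # unknown signals, one per snapshot
Y = d[:, None] * (A @ X)                   # y_i = diag(d) A x_i

# Linearization: with s = 1/d, each snapshot satisfies diag(y_i) s - A x_i = 0,
# which is linear in the stacked unknown z = (s, x_1, ..., x_p).
M = np.zeros((L * p, L + N * p))
for i in range(p):
    M[i * L:(i + 1) * L, :L] = np.diag(Y[:, i])
    M[i * L:(i + 1) * L, L + i * N:L + (i + 1) * N] = -A

# Add the normalization sum(s) = L to exclude the trivial solution z = 0,
# then solve the (now inhomogeneous) system by ordinary least squares.
norm_row = np.zeros((1, L + N * p))
norm_row[0, :L] = 1.0
rhs = np.zeros(L * p + 1)
rhs[-1] = L
z, *_ = np.linalg.lstsq(np.vstack([M, norm_row]), rhs, rcond=None)

# Recover the gains; they are identifiable only up to a global scale shared
# with the signals, so rescale before comparing with the ground truth.
d_hat = 1.0 / z[:L]
scale = (d_hat @ d) / (d_hat @ d_hat)
print("relative gain error:", np.linalg.norm(scale * d_hat - d) / np.linalg.norm(d))
```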
 
 
 
 
 
 
Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.
