Solution of linear ill-posed problems using random dictionaries by Pawan Gupta, Marianna Pensky
In the present paper, we consider the application of overcomplete dictionaries to the solution of general ill-posed linear inverse problems. In the context of regression problems, there has been an enormous amount of effort to recover an unknown function using such dictionaries. One of the most popular methods, the Lasso and its versions, is based on minimizing the empirical likelihood and, unfortunately, requires stringent assumptions on the dictionary, the so-called compatibility conditions. While these conditions may be satisfied for the functions in the original dictionary, they usually do not hold for their images due to the contraction imposed by the linear operator. Pensky (2016) showed that this difficulty can be bypassed by inverting each of the dictionary functions and matching the resulting expansion to the true function. However, even then the approach requires a compatibility condition which is difficult to check. In the present paper, we propose a solution which utilizes structured and unstructured random dictionaries, a technique that has not so far been applied to the solution of ill-posed linear inverse problems. We put a theoretical foundation under the suggested methodology and study its performance via simulations.
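To make the setting concrete, here is a minimal toy sketch of the baseline formulation the abstract contrasts against: observe y = Af + noise for a smoothing operator A, expand the unknown f over a random Gaussian dictionary, and fit the expansion coefficients by Lasso against the images of the dictionary functions. This is not the authors' estimator; the operator, grid, dictionary size, and penalty level below are illustrative assumptions only.

```python
# Toy sketch (not the paper's algorithm): dictionary expansion + Lasso
# for a discretized ill-posed linear inverse problem y = A f + noise.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

n = 200                                  # number of observation points (assumed)
t = np.linspace(0, 1, n)

# Unknown signal: a combination of two smooth bumps (illustrative choice).
f_true = np.exp(-((t - 0.3) ** 2) / 0.002) - 0.7 * np.exp(-((t - 0.7) ** 2) / 0.004)

# Forward operator A: discretized integration, a smoothing (ill-posed) operator.
A = np.tril(np.ones((n, n))) / n

# Noisy indirect observations.
sigma = 0.01
y = A @ f_true + sigma * rng.standard_normal(n)

# Random (unstructured) dictionary: p >> n Gaussian functions on the grid.
p = 1000
Phi = rng.standard_normal((n, p)) / np.sqrt(n)   # columns = dictionary functions

# Images of the dictionary functions under the operator.
APhi = A @ Phi

# Lasso fit of the expansion coefficients against the observed data;
# this is the step whose compatibility conditions the abstract discusses.
lasso = Lasso(alpha=1e-4, max_iter=50000)
lasso.fit(APhi, y)
theta = lasso.coef_

# Reconstruct f as the dictionary expansion with the fitted coefficients.
f_hat = Phi @ theta
print("relative L2 error:", np.linalg.norm(f_hat - f_true) / np.linalg.norm(f_true))
```

The sketch only illustrates why conditions on the images A·phi_j, rather than on the dictionary functions phi_j themselves, drive the behavior of the Lasso step; the paper's contribution concerns how random dictionaries let one avoid checking such conditions.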