Mário just sent me the following:
Somewhat related to a recent post in Nuit Blanche (http://nuit-blanche.blogspot.fr/2016/05/fista-and-maximal-sparsity-with-deep.html), we have been working on plugging "learned MMSE denoisers" into the ADMM loop for image restoration or compressive imaging. By "learned MMSE denoisers", I mean MMSE denoisers learned offline for a specific class of images (e.g., faces, text,...). This is just preliminary work, in what seems to be a promising direction. The paper http://arxiv.org/abs/1602.04052 will be presented at ICIP 2016.
Furthermore, by using a collection of trained/learned models, one may simultaneously restore/reconstruct and segment an image, as shown in http://arxiv.org/abs/1605.07003
Thanks Mário !
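The plug-and-play splitting Mário describes, in which ADMM decouples the observation operator from the regularizer so that any denoiser (here, a learned one) can be dropped into the denoising step, can be sketched as follows. This is a minimal numpy illustration, not the papers' implementation: the toy 1-D blur, the signal, and the soft-thresholding stand-in for the learned GMM denoiser are all assumptions made for the example.

```python
import numpy as np

def pnp_admm(y, A, denoise, rho=1.0, n_iter=100):
    """Plug-and-play ADMM for y = A x + noise.

    x-update: quadratic data-fit step (closed form, uses A),
    z-update: any off-the-shelf denoiser (the "plugged-in" step),
    u-update: scaled dual ascent.
    """
    n = A.shape[1]
    Aty = A.T @ y
    # Precompute the x-update system matrix once.
    M = np.linalg.inv(A.T @ A + rho * np.eye(n))
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)
    for _ in range(n_iter):
        x = M @ (Aty + rho * (z - u))   # data-fidelity step
        z = denoise(x + u)              # regularization via the denoiser
        u = u + x - z                   # dual update
    return z

# Toy 1-D "deblurring": A is a 5-tap moving-average blur; the stand-in
# denoiser is plain soft-thresholding (a learned class-adapted denoiser
# would be substituted here instead).
rng = np.random.default_rng(0)
n = 100
x_true = np.zeros(n)
x_true[20], x_true[60] = 5.0, -3.0          # sparse spikes
A = np.eye(n)
for k in (1, 2):
    A += np.eye(n, k=k) + np.eye(n, k=-k)
A /= 5.0
y = A @ x_true + 0.05 * rng.standard_normal(n)

soft = lambda v: np.sign(v) * np.maximum(np.abs(v) - 0.1, 0.0)
x_hat = pnp_admm(y, A, soft, rho=1.0, n_iter=100)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

With soft-thresholding as the denoiser this reduces to ADMM for the LASSO, which is exactly the point of the splitting: only the z-update changes when a different (e.g., learned MMSE) denoiser is plugged in.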
Image Restoration and Reconstruction using Variable Splitting and Class-adapted Image Priors by Afonso M. Teodoro, José M. Bioucas-Dias, Mário A. T. Figueiredo
This paper proposes using a Gaussian mixture model as a prior for solving two image inverse problems, namely image deblurring and compressive imaging. We capitalize on the fact that variable splitting algorithms, like ADMM, are able to decouple the handling of the observation operator from that of the regularizer, and plug a state-of-the-art algorithm into the pure denoising step. Furthermore, we show that, when applied to a specific type of image, a Gaussian mixture model trained from a database of images of the same type is able to outperform current state-of-the-art methods.
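Under a GMM prior, the MMSE denoising step has a closed form: a responsibility-weighted average of per-component Wiener filters. A minimal numpy sketch of that step, where the function names and the single-component sanity check are illustrative assumptions, not the paper's code:

```python
import numpy as np

def gauss_logpdf(y, mean, cov):
    """Log-density of N(mean, cov) evaluated at y."""
    d = y.size
    _, logdet = np.linalg.slogdet(cov)
    diff = y - mean
    return -0.5 * (d * np.log(2 * np.pi) + logdet
                   + diff @ np.linalg.solve(cov, diff))

def gmm_mmse_denoise(y, weights, means, covs, sigma2):
    """MMSE estimate of a clean patch x from y = x + N(0, sigma2*I),
    under the GMM prior p(x) = sum_k w_k N(mu_k, C_k)."""
    d = y.size
    noisy_covs = [C + sigma2 * np.eye(d) for C in covs]
    # Posterior responsibilities, computed under the noisy marginals
    # N(mu_k, C_k + sigma2*I), stabilized in the log domain.
    log_r = np.array([np.log(w) + gauss_logpdf(y, m, S)
                      for w, m, S in zip(weights, means, noisy_covs)])
    r = np.exp(log_r - log_r.max())
    r /= r.sum()
    x_hat = np.zeros(d)
    for rk, m, C, S in zip(r, means, covs, noisy_covs):
        # Component-k Wiener filter: E[x | y, k] = mu_k + C_k S_k^{-1} (y - mu_k)
        x_hat += rk * (m + C @ np.linalg.solve(S, y - m))
    return x_hat

# Sanity check: one zero-mean component with covariance c*I should reduce
# to scalar Wiener shrinkage y * c / (c + sigma2).
d, c, sigma2 = 4, 2.0, 0.5
y = np.array([1.0, -2.0, 0.5, 3.0])
x_hat = gmm_mmse_denoise(y, [1.0], [np.zeros(d)], [c * np.eye(d)], sigma2)
```

In the class-adapted setting, the mixture parameters would come from offline training on a database of images of the target class (faces, text, ...), rather than being set by hand as above.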
Image Restoration with Locally Selected Class-Adapted Models by Afonso M. Teodoro, José M. Bioucas-Dias, Mário A. T. Figueiredo
In this paper, we propose an automatic way to select class-adapted Gaussian mixture priors in image denoising or deblurring tasks. We follow the Bayesian perspective and use a maximum a posteriori criterion to determine the model that best explains each observed patch. In situations where it is not possible to learn a model from the observed image (e.g., blurred images), this approach generally leads to better results than using only a generic model. In particular, when the input image belongs to a class for which we have a pre-computed model, the improvement is more pronounced.
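With a uniform prior over the candidate models, the MAP criterion above amounts to picking, per patch, the pre-trained GMM under which the observed (noisy) patch has the highest evidence. A toy numpy sketch of that selection, assuming diagonal covariances and made-up class names for simplicity (the paper's models are more general):

```python
import numpy as np

def log_evidence(y, gmm, sigma2):
    """log p(y | model) for a noisy patch y = x + N(0, sigma2*I), where the
    clean patch x follows a GMM prior given as (weights, means, variances)
    with diagonal covariances (a simplification for this sketch)."""
    weights, means, variances = gmm
    log_terms = []
    for w, m, v in zip(weights, means, variances):
        s = v + sigma2  # per-dimension variance of the noisy marginal
        ll = -0.5 * np.sum(np.log(2 * np.pi * s) + (y - m) ** 2 / s)
        log_terms.append(np.log(w) + ll)
    log_terms = np.array(log_terms)
    mx = log_terms.max()
    return mx + np.log(np.exp(log_terms - mx).sum())  # log-sum-exp

def select_model(y, models, sigma2):
    """MAP model selection (uniform prior over models): return the name of
    the class-adapted GMM that best explains the observed patch."""
    return max(models, key=lambda name: log_evidence(y, models[name], sigma2))

# Two hypothetical single-component, zero-mean class models: a low-variance
# "flat" class and a high-variance "busy" (textured) class.
models = {
    "flat": ([1.0], [np.zeros(4)], [0.01 * np.ones(4)]),
    "busy": ([1.0], [np.zeros(4)], [4.0 * np.ones(4)]),
}
patch = np.array([2.0, -1.5, 3.0, -2.5])  # large fluctuations
chosen = select_model(patch, models, sigma2=0.1)
```

Running the selector per patch is what enables the simultaneous restoration and segmentation mentioned above: each patch is both denoised under, and labeled with, the model that wins the evidence comparison.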
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.