You probably recall this entry: 90% missing pixels and you reconstructed that ?!, which featured Analysis Operator Learning and Its Application to Image Reconstruction. Simon Hawe let me know that the attendant implementation has now been made available. If I count right, this GOAL implementation is the third to be made available for learning analysis operators, after Noise Aware AOL (NAAOL) and Analysis K-SVD. Don't think for one second it could be used only for imaging applications. And then there are all the theoretical issues that spring up once the analysis operator has been found (see Deanna Needell's recent summary presentation, "Synthesis and analysis type methods for signal reconstruction from random observations"). On a different note, it also looks like there is a second version of the preprint, which has been accepted for publication:
Analysis Operator Learning and Its Application to Image Reconstruction by Simon Hawe, Martin Kleinsteuber, Klaus Diepold. The abstract reads:
Exploiting a priori known structural information lies at the core of many image reconstruction methods that can be stated as inverse problems. The synthesis model, which assumes that images can be decomposed into a linear combination of very few atoms of some dictionary, is now a well established tool for the design of image reconstruction algorithms. An interesting alternative is the analysis model, where the signal is multiplied by an analysis operator and the outcome is assumed to be sparse. This approach has only recently gained increasing interest. The quality of reconstruction methods based on an analysis model severely depends on the right choice of the suitable operator. In this work, we present an algorithm for learning an analysis operator from training images. Our method is based on an $\ell_p$-norm minimization on the set of full rank matrices with normalized columns. We carefully introduce the employed conjugate gradient method on manifolds, and explain the underlying geometry of the constraints. Moreover, we compare our approach to state-of-the-art methods for image denoising, inpainting, and single image super-resolution. Our numerical results show competitive performance of our general approach in all presented applications compared to the specialized state-of-the-art techniques.
The implementation of GOAL is here.
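For readers who want a feel for what is being learned, here is a minimal sketch in Python (numpy) of the analysis model and of a toy operator learning loop. It is not the GOAL code: the paper's conjugate gradient method on the manifold of full rank matrices with normalized columns, and its penalty terms against degenerate operators, are replaced by plain projected gradient descent with column renormalization, and the smoothed $\ell_p$ surrogate, step size, and toy data below are my own assumptions for illustration.

```python
# Minimal sketch of the analysis (co-sparse) model and a toy operator learning
# loop. NOT the authors' GOAL implementation: projected gradient descent with
# column renormalization stands in for their conjugate gradient on a manifold,
# and the rank / coherence penalties of the paper are omitted here.
import numpy as np

def smoothed_lp_grad(z, p=0.4, eps=1e-6):
    """Gradient of the smooth surrogate sum((z^2 + eps)^(p/2)) of the l_p penalty."""
    return p * z * (z**2 + eps) ** (p / 2.0 - 1.0)

def learn_analysis_operator(X, k, n_iter=200, step=1e-3, p=0.4, seed=0):
    """X: (d, N) matrix of vectorized training patches; returns a k x d operator Omega
    such that Omega @ X is encouraged to be sparse (the analysis model)."""
    rng = np.random.default_rng(seed)
    d = X.shape[0]
    Omega = rng.standard_normal((k, d))
    Omega /= np.linalg.norm(Omega, axis=0, keepdims=True)      # normalized columns
    for _ in range(n_iter):
        Z = Omega @ X                                           # analysis coefficients
        grad = smoothed_lp_grad(Z, p) @ X.T                     # d(sparsity)/d(Omega)
        Omega -= step * grad                                    # gradient step
        Omega /= np.linalg.norm(Omega, axis=0, keepdims=True)   # project back
    return Omega

if __name__ == "__main__":
    # Toy data: blocky (piecewise-constant) signals, whose finite differences are sparse,
    # so a learned analysis operator should produce mostly near-zero coefficients.
    rng = np.random.default_rng(1)
    X = np.repeat(rng.standard_normal((4, 500)), 4, axis=0)    # 16-dim blocky signals
    Omega = learn_analysis_operator(X, k=24)
    print("fraction of near-zero analysis coefficients:",
          np.mean(np.abs(Omega @ X) < 1e-2))
```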
Of related interest:
- Noise Aware Analysis Operator Learning for Approximately Cosparse Signals -implementation-
- Analysis K-SVD: A Dictionary-Learning Algorithm for the Analysis Sparse Model - implementation -
- A Comment on Learning Analysis Operators
- NIPS2012 Workshop on Analysis Operator Learning vs. Dictionary Learning: Fraternal Twins in Sparse Modeling
- Sunday Morning Insight: The Linear Boltzmann Equation and Co-Sparsity
Image Credit: NASA/JPL-Caltech
This image was taken by Navcam: Left A (NAV_LEFT_A) onboard NASA's Mars rover Curiosity on Sol 199 (2013-02-26 17:32:40 UTC).
Full Resolution
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.