In a previous entry (Sunday Morning Insight: The Linear Boltzmann Equation and Co-Sparsity), we mentioned that anytime you were describing a field that followed some sort of physical law, like
Lu = 0
with L being the Boltzmann or the Maxwell operator, and you added some boundary conditions, then you were in effect making a statement about co-sparsity: the non-zero elements of the analysis coefficients represent either the boundary conditions or a better approximation of the full operator (e.g., the linear Boltzmann operator replacing the diffusion operator at the boundaries). This is profound because it connects the generic work happening in sampling to the real world of physics and engineering (see structured life).
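To make that co-sparsity claim concrete, here is a minimal numerical sketch (mine, not from the post or the papers below): a piecewise-linear field satisfies a discrete Laplace equation Lu = 0 away from its breakpoint, so applying the discretized operator produces an analysis vector that is almost entirely zero, with the few non-zeros sitting exactly where the "boundary" information lives.

```python
import numpy as np

# Illustrative sketch: a piecewise-linear field u obeys the discrete
# Laplace equation everywhere except at its breakpoint, so the
# second-difference operator L acts as an analysis operator under
# which u is cosparse.
n = 100
u = np.concatenate([np.linspace(0.0, 1.0, n // 2),
                    np.linspace(1.0, 0.2, n // 2)])

# Interior rows of a 1-D second-difference (discrete Laplacian) operator:
# row i computes u[i] - 2*u[i+1] + u[i+2].
L = (np.eye(n - 2, n, k=0) - 2 * np.eye(n - 2, n, k=1)
     + np.eye(n - 2, n, k=2))

z = L @ u
n_zero = np.sum(np.isclose(z, 0.0, atol=1e-10))
print(f"{n_zero} of {z.size} analysis coefficients are zero")
# The handful of non-zeros sit at the breakpoint, playing the role of
# the boundary / interface conditions mentioned above.
```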
Unless I am mistaken, this is the third dictionary learning implementation released in the wild dedicated to learning the Analysis Operator, in effect learning the equivalent discretization of the operator of interest with its attendant boundary conditions. The first two were featured in Noise Aware Analysis Operator Learning for Approximately Cosparse Signals and in 90% missing pixels and you reconstructed that?! Analysis Operator Learning and Its Application to Image Reconstruction. The paper illustrating what this new solver can do is: Analysis K-SVD: A Dictionary-Learning Algorithm for the Analysis Sparse Model by Ron Rubinstein, Tomer Peleg and Michael Elad.
The synthesis-based sparse representation model for signals has drawn considerable interest in the past decade. Such a model assumes that the signal of interest can be decomposed as a linear combination of a few atoms from a given dictionary. In this paper we concentrate on an alternative, analysis-based model, where an analysis operator – hereafter referred to as the analysis dictionary – multiplies the signal, leading to a sparse outcome. Our goal is to learn the analysis dictionary from a set of examples. The approach taken is parallel and similar to the one adopted by the K-SVD algorithm that serves the corresponding problem in the synthesis model. We present the development of the algorithm steps: This includes tailored pursuit algorithms – the Backward Greedy and the Optimized Backward Greedy algorithms, and a penalty function that defines the objective for the dictionary update stage. We demonstrate the effectiveness of the proposed dictionary learning in several experiments, treating synthetic data and real images, and showing a successful and meaningful recovery of the analysis dictionary.
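Since the abstract names the Backward Greedy pursuit, here is a minimal sketch of that stage as I read it; the function name, the fixed-cosparsity stopping rule and the projection details are my own choices for illustration, not the authors' reference implementation.

```python
import numpy as np

def backward_greedy(x, Omega, cosparsity):
    """Minimal sketch of a Backward Greedy cosparse pursuit (my reading
    of the paper, not the authors' code): greedily grow the cosupport by
    picking, at each step, the analysis row with the smallest response
    on the current estimate, then re-project the estimate onto the
    orthogonal complement of all selected rows."""
    x = np.asarray(x, dtype=float)
    x_hat = x.copy()
    cosupport = []
    for _ in range(cosparsity):
        scores = np.abs(Omega @ x_hat)
        scores[np.asarray(cosupport, dtype=int)] = np.inf  # skip chosen rows
        cosupport.append(int(np.argmin(scores)))  # row most orthogonal to x_hat
        A = Omega[cosupport]
        # Enforce A @ x_hat = 0: project x onto the null space of A,
        # i.e. x_hat = x - A^T (A A^T)^+ A x (lstsq handles rank deficiency).
        x_hat = x - A.T @ np.linalg.lstsq(A @ A.T, A @ x, rcond=None)[0]
    return x_hat, sorted(cosupport)

# Toy usage with a random analysis dictionary (purely illustrative):
rng = np.random.default_rng(0)
Omega = rng.standard_normal((40, 20))
x = rng.standard_normal(20)
x_hat, cosup = backward_greedy(x, Omega, cosparsity=5)
print(np.max(np.abs(Omega[cosup] @ x_hat)))  # ~0: x_hat annihilated on the cosupport
```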
Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.
Regarding an article you linked in a previous entry (flexible wireless systems), there is a presentation-like simplified explanation here:
https://www.ntt-review.jp/archive/ntttechnical.php?contents=ntr201103ra2.html