Matthias just sent me the following:
Dear Igor,
We just finished work on a paper which discusses the learning of cosparse analysis operators with separable structures. The main topics are the derivation of bounds on the sample complexity of analysis operator learning algorithms and the discussion of a learning procedure based on geometric stochastic gradient descent. A preprint is available at http://arxiv.org/abs/1503.02398
We thought that this paper might be of interest to the readers of your Nuit Blanche and would be glad if you could advertise it on your blog.
Best,
Matthias
Thanks Matthias. Let me just say that analysis operator work sure looks to have some bearing on the issue of classification in Machine Learning. More on that later. In the meantime, here is the paper: Learning Co-Sparse Analysis Operators with Separable Structures by Matthias Seibert, Julian Wörmann, Rémi Gribonval, Martin Kleinsteuber
In the co-sparse analysis model a set of filters is applied to a signal out of the signal class of interest, yielding sparse signal responses. As such, it may serve as a prior in inverse problems, or for structural analysis of signals that are known to belong to the signal class. The more the model is adapted to the class, the more reliable it is for these purposes. The task of learning such operators for a given class is therefore a crucial problem. In many applications, it is also required that the filter responses are obtained in a timely manner, which can be achieved by filters with a separable structure. Not only can operators of this sort be efficiently used for computing the filter responses, but they also have the advantage that fewer training samples are required to obtain a reliable estimate of the operator. The first contribution of this work is to give theoretical evidence for this claim by providing an upper bound for the sample complexity of the learning process. The second is a stochastic gradient descent (SGD) method designed to efficiently learn an analysis operator with separable structures, which incorporates an efficient step size selection. Numerical experiments are provided that link the sample complexity to the convergence speed of the SGD algorithm.
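To make the separable structure concrete, here is a minimal numpy sketch (not the authors' code; the sizes, names, and random factors are made up for illustration): a separable analysis operator applies two small filter matrices along the rows and columns of a patch, which is equivalent to one large Kronecker-structured operator acting on the vectorized patch, but is much cheaper to apply and has far fewer parameters to learn.

```python
import numpy as np

# Toy sizes: an n x n image patch analyzed by two k x n filter matrices.
# All names and values here are illustrative, not from the paper.
n, k = 8, 12
rng = np.random.default_rng(0)

A_rows = rng.standard_normal((k, n))   # would be learned from training patches
A_cols = rng.standard_normal((k, n))

patch = rng.standard_normal((n, n))    # stand-in for a signal from the class

# Separable analysis: filter along rows and columns with the small factors.
# Cost is O(k*n*(n + k)) instead of O(k^2 * n^2) for a full operator.
resp_sep = A_rows @ patch @ A_cols.T   # k x k filter responses

# Equivalent full operator: one big Kronecker-structured matrix acting on
# the vectorized patch (many more parameters to store and to learn).
Omega = np.kron(A_cols, A_rows)                        # (k*k) x (n*n)
resp_full = (Omega @ patch.reshape(-1, order="F")).reshape(k, k, order="F")

# Same responses either way; for a well-adapted operator and a signal from
# the class, most of these responses would be (near) zero -- the co-sparse model.
print(np.allclose(resp_sep, resp_full))                # True
```

The intuition behind the paper's sample complexity claim is visible in the parameter count: the two small factors have 2kn free entries, versus k^2 * n^2 for an unstructured operator of the same size, so fewer training samples should suffice to pin them down.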
and earlier:
- Separable Cosparse Analysis Operator Learning by Matthias Seibert, Julian Wörmann, Rémi Gribonval, Martin Kleinsteuber
- Sample Complexity of Dictionary Learning and other Matrix Factorizations
Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.
Seems to me that an example would be to identify a time-domain signal by the zeros of the Fourier spectrum.
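To illustrate that comment with a toy sketch (numpy; the signal and threshold are made up, and this is not from the paper): taking the DFT as the analysis operator, a signal built from a few sinusoids has a spectrum that vanishes almost everywhere, and the locations of those zeros (the cosupport) pin the signal down to a low-dimensional subspace.

```python
import numpy as np

# DFT as the analysis operator: the "filter responses" are Fourier coefficients.
N = 64
t = np.arange(N)
x = np.cos(2 * np.pi * 3 * t / N) + 0.5 * np.sin(2 * np.pi * 7 * t / N)

X = np.fft.fft(x)                             # analysis responses
cosupport = np.flatnonzero(np.abs(X) < 1e-9)  # bins where the spectrum vanishes

# Only 4 bins (+/-3 and +/-7) are nonzero, so the cosupport has 60 elements;
# those zeros characterize a 4-dimensional subspace containing the signal.
print(len(cosupport))                         # 60
```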