Using random projections to control the information flow from layer to layer in dictionary learning: that seems to be the main idea of Zhen James Xiang's NIPS 2011 presentation on Learning Sparse Representations of High Dimensional Data on Large Scale Dictionaries. The attendant MATLAB toolbox is here and is featured on the Matrix Factorization Jungle page. The paper is: Learning sparse representations of high dimensional data on large scale dictionaries by Zhen James Xiang, Hao Xu and Peter Ramadge. The abstract reads:
Learning sparse representations on data adaptive dictionaries is a state-of-the-art method for modeling data. But when the dictionary is large and the data dimension is high, it is a computationally challenging problem. We explore three aspects of the problem. First, we derive new, greatly improved screening tests that quickly identify codewords that are guaranteed to have zero weights. Second, we study the properties of random projections in the context of learning sparse representations. Finally, we develop a hierarchical framework that uses incremental random projections and screening to learn, in small stages, a hierarchically structured dictionary for sparse representations. Empirical results show that our framework can learn informative hierarchical sparse representations more efficiently.

There is also some supplemental material.
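To give a flavor of what a screening test does: the basic SAFE rule of El Ghaoui et al. (an earlier, simpler test than the improved ones derived in this paper) lets you discard any dictionary codeword whose correlation with the signal falls below a certain threshold, with a guarantee that its Lasso weight is exactly zero. Here is a minimal sketch in Python; the dictionary, signal and regularization level are all illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: n-dimensional data, a large dictionary of p codewords.
n, p = 100, 2000
X = rng.standard_normal((n, p))
X /= np.linalg.norm(X, axis=0)          # unit-norm codewords
y = rng.standard_normal(n)
y /= np.linalg.norm(y)

corr = np.abs(X.T @ y)                  # |x_j^T y| for each codeword j
lam_max = corr.max()                    # smallest lambda giving an all-zero solution
lam = 0.8 * lam_max                     # regularization level we want to solve at

# Basic SAFE rule: codeword j is guaranteed to have zero weight in the
# Lasso solution when  |x_j^T y| < lam - ||x_j|| * ||y|| * (lam_max - lam) / lam_max
threshold = lam - np.linalg.norm(X, axis=0) * np.linalg.norm(y) * (lam_max - lam) / lam_max
keep = corr >= threshold
print(f"screened out {np.sum(~keep)} of {p} codewords")
```

The Lasso problem then only needs to be solved over the surviving codewords, which is where the computational savings come from when the dictionary is very large.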
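As for the random projection side of the story, the key property (and this is a generic Johnson-Lindenstrauss illustration, not the paper's specific incremental construction) is that a Gaussian random projection to a much lower dimension approximately preserves norms and distances, so sparse representations can be learned on the projected data. A quick numerical check, with dimensions chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(1)

# Project from d down to k << d dimensions with a Gaussian random matrix.
d, k = 5000, 500
P = rng.standard_normal((k, d)) / np.sqrt(k)  # scaling keeps E||Px||^2 = ||x||^2

x = rng.standard_normal(d)
z = rng.standard_normal(d)

# Norms and pairwise distances are approximately preserved.
ratio_x = np.linalg.norm(P @ x) / np.linalg.norm(x)
ratio_xz = np.linalg.norm(P @ (x - z)) / np.linalg.norm(x - z)
print(ratio_x, ratio_xz)  # both close to 1
```

The typical deviation of these ratios from 1 scales like 1/sqrt(k), which is what makes it possible to trade dimension for a controlled amount of distortion at each stage.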
Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.