
Thursday, December 15, 2011

#NIPS2011: Learning Sparse Representations of High Dimensional Data on Large Scale Dictionaries


This paper is about both matrix factorization and one aspect of compressive sensing, but I am featuring it on its own here because, unlike most NIPS submissions, the authors provide an implementation of their code. This is the norm in the compressive sensing community; it needs to become the norm in matrix factorization at large as well.

Learning sparse representations on data adaptive dictionaries is a state-of-the-art method for modeling data. But when the dictionary is large and the data dimension is high, it is a computationally challenging problem. We explore three aspects of the problem. First, we derive new, greatly improved screening tests that quickly identify codewords that are guaranteed to have zero weights. Second, we study the properties of random projections in the context of learning sparse representations. Finally, we develop a hierarchical framework that uses incremental random projections and screening to learn, in small stages, a hierarchically structured dictionary for sparse representations. Empirical results show that our framework can learn informative hierarchical sparse representations more efficiently.
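
For a concrete sense of what a screening test does, here is a minimal Python/NumPy sketch, my own illustration rather than the authors' code: it implements the classical SAFE rule of El Ghaoui et al. for the Lasso, the kind of baseline test the paper improves upon. The rule cheaply identifies codewords guaranteed to receive zero weight, so the sparse code only needs to be computed over the survivors. The function name safe_screen and all parameters are assumptions made for this sketch.

```python
import numpy as np

def safe_screen(D, y, lam):
    """Boolean mask of codewords that SURVIVE the SAFE screening test.

    D   : (m, p) dictionary, one codeword per column
    y   : (m,) signal to be sparsely coded
    lam : regularization weight in 0.5*||y - D w||^2 + lam*||w||_1
    """
    corr = np.abs(D.T @ y)                  # |d_j' y| for every codeword
    lam_max = corr.max()                    # smallest lam for which w = 0
    col_norms = np.linalg.norm(D, axis=0)
    y_norm = np.linalg.norm(y)
    # SAFE bound: w_j = 0 is guaranteed whenever
    #   |d_j' y| < lam - ||d_j|| * ||y|| * (lam_max - lam) / lam_max
    threshold = lam - col_norms * y_norm * (lam_max - lam) / lam_max
    return corr >= threshold                # True = keep, False = discard

# Tiny usage example on random data
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 2048))
D /= np.linalg.norm(D, axis=0)              # unit-norm codewords
y = rng.standard_normal(64)
lam = 0.8 * np.abs(D.T @ y).max()
keep = safe_screen(D, y, lam)
print(f"kept {keep.sum()} of {keep.size} codewords")
# The Lasso is then solved over D[:, keep] only.

# Second theme of the abstract: random projections. A Gaussian projection
# approximately preserves the inner products the test relies on
# (Johnson-Lindenstrauss), so screening can be run in a cheaper, lower
# dimensional space, at the price of the guarantee becoming approximate;
# that trade-off is what the paper studies.
R = rng.standard_normal((16, 64)) / np.sqrt(16)
keep_proj = safe_screen(R @ D, R @ y, lam)
print(f"kept {keep_proj.sum()} after a 64 -> 16 random projection")
```
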
The supplemental material is here. The attendant MATLAB toolbox will eventually be featured on the Matrix Factorization Jungle page.

Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.

2 comments:

  1. Just a quick note: though it's not documented, the code for this paper, as written, requires access to the MathWorks Statistics Toolbox.

  2. Good catch, Eric. I'll let the authors know if you don't.

    Igor
