From the recent Simons Institute event on Information Theory, Learning and Big Data, here is another new approach to sparse coding:
Simple, Efficient and Neural Algorithms for Sparse Coding by Ankur Moitra
Sparse coding is a basic task in many fields, including signal processing, neuroscience and machine learning, where the goal is to learn a basis that enables a sparse representation of a given set of data, if one exists. Its standard formulation is as a non-convex optimization problem which is solved in practice by heuristics based on alternating minimization. There has been considerable recent work on designing algorithms for sparse coding with provable guarantees, but somewhat surprisingly these simple heuristics outperform them in practice. Here we give a general framework for understanding alternating minimization, which we leverage to analyze existing heuristics and to design new ones that also come with provable guarantees.
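The alternating-minimization heuristic the abstract refers to can be sketched roughly as follows. This is a minimal illustration, not the paper's algorithm: it uses hard thresholding plus least squares for the coding step and a least-squares dictionary update, and all function and variable names are mine.

```python
import numpy as np

def sparse_code_step(A, Y, k):
    """Given dictionary A, estimate k-sparse codes X with Y ~ A X:
    hard-threshold the correlations A^T y to the k largest entries,
    then refit the coefficients on that support by least squares."""
    X = np.zeros((A.shape[1], Y.shape[1]))
    for j in range(Y.shape[1]):
        corr = A.T @ Y[:, j]
        support = np.argsort(np.abs(corr))[-k:]
        X[support, j] = np.linalg.lstsq(A[:, support], Y[:, j], rcond=None)[0]
    return X

def dictionary_step(Y, X):
    """Given codes X, update the dictionary by least squares,
    then renormalize its columns to unit norm."""
    A = Y @ np.linalg.pinv(X)
    norms = np.linalg.norm(A, axis=0)
    norms[norms == 0] = 1.0
    return A / norms

def alternating_minimization(Y, m, k, iters=30, seed=0):
    """Alternate the two steps from a random initial dictionary."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((Y.shape[0], m))
    A /= np.linalg.norm(A, axis=0)
    X = sparse_code_step(A, Y, k)
    for _ in range(iters):
        X = sparse_code_step(A, Y, k)
        A = dictionary_step(Y, X)
    return A, X
```

Each step solves a tractable subproblem while holding the other variable fixed; the paper's point is to explain when and why such iterations converge to a globally optimal sparse code.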
We study this problem in a natural generative model, and obtain a variety of new algorithmic results: We give the first neurally plausible algorithm — closely related to the original heuristic of Olshausen and Field — that (provably) converges to a globally optimal sparse code. We also give the first algorithm for sparse coding that works almost up to the information-theoretic limit for sparse recovery on incoherent dictionaries. All previous algorithms that approached or surpassed this limit ran in time exponential in some natural parameter. Finally, our algorithms improve upon the sample complexity of existing approaches. We believe that our framework will have applications beyond sparse coding, and could be used to show that simple, iterative algorithms can be powerful in other contexts as well by suggesting new ways to analyze them.