Sparse Coding, Canonical Correlation Analysis and Dictionary Learning are all matrix factorization operations, and they are used in a variety of ways when building deep neural architectures. Here are a few instances I noticed among the 500 submissions to the ICLR 2017 conference that are currently in the open review process (a minimal code sketch of the matrix factorization view follows the list):
- Understanding Neural Sparse Coding with Matrix Factorization, by Thomas Moreau, Joan Bruna
- Energy-Based Spherical Sparse Coding, by Bailey Kong, Charless C. Fowlkes
- Support Regularized Sparse Coding and Its Fast Encoder, by Yingzhen Yang, Jiahui Yu, Pushmeet Kohli, Jianchao Yang, Thomas S. Huang
- Transformational Sparse Coding, by Dimitrios C. Gklezakos, Rajesh P. N. Rao
- Neurogenesis-Inspired Dictionary Learning: Online Model Adaption in a Changing World, by Sahil Garg, Irina Rish, Guillermo Cecchi, Aurelie Lozano
- Deep Variational Canonical Correlation Analysis, by Weiran Wang, Xinchen Yan, Honglak Lee, Karen Livescu
- Differentiable Canonical Correlation Analysis, by Matthias Dorfer, Jan Schlüter, Gerhard Widmer
- Deep Generalized Canonical Correlation Analysis, by Adrian Benton, Huda Khayrallah, Biman Gujral, Drew Reisinger, Sheng Zhang, Raman Arora
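To make the matrix factorization view concrete, here is a minimal sketch using scikit-learn's DictionaryLearning: a data matrix X is approximated as the product of a sparse code matrix and a learned dictionary, X ≈ A D. The toy data, number of atoms and sparsity penalty below are arbitrary choices for illustration only, not taken from any of the submissions listed above.

```python
# Sketch: dictionary learning / sparse coding as a matrix factorization X ≈ A D,
# where A is a sparse code matrix and D is the learned dictionary.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.RandomState(0)
X = rng.randn(200, 30)  # toy data; in practice image patches, spectra, etc.

dl = DictionaryLearning(n_components=15,
                        transform_algorithm='lasso_lars',
                        transform_alpha=0.1,
                        random_state=0)
code = dl.fit_transform(X)    # sparse codes A, shape (200, 15)
dictionary = dl.components_   # dictionary D, shape (15, 30)

X_hat = code @ dictionary     # the factorization X ≈ A D
print("relative reconstruction error:",
      np.linalg.norm(X - X_hat) / np.linalg.norm(X))
print("fraction of nonzero code entries:", np.mean(code != 0))
```

Canonical Correlation Analysis admits a similar low-rank factorization reading (projecting two views onto shared components), which is what the deep CCA variants above build upon.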
Photo credit: NASA/JPL/University of Arizona
Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle, and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.