[Updated 09/09/16: Please note that this manuscript has been withdrawn.]
Atlas mentioned his recently accepted NIPS paper, in which he and his colleagues build a deep architecture based on iterative solvers for matrix factorization. Woohoo!
Our #nips2016 paper on #deeplearning as a stack of approximated #sparse models https://t.co/hV6OgIGtQ4 Long live the convergence! @IgorCarron — Atlas Wang (@atlaswang) August 16, 2016
Thanks Atlas for the Great Convergence mention. Here is the paper:
Stacked Approximated Regression Machine: A Simple Deep Learning Approach by Zhangyang Wang, Shiyu Chang, Qing Ling, Shuai Huang, Xia Hu, Honghui Shi, Thomas S. Huang
This paper proposes the Stacked Approximated Regression Machine (SARM), a novel, simple yet powerful deep learning (DL) baseline. We start by discussing the relationship between regularized regression models and feed-forward networks, with emphasis on the non-negative sparse coding and convolutional sparse coding models. We demonstrate how these models are naturally converted into a unified feed-forward network structure, which coincides with popular DL components. SARM is constructed by stacking multiple unfolded and truncated regression models. Compared to PCANet, whose feature extraction layers are completely linear, SARM naturally introduces non-linearities by embedding sparsity regularization. The parameters of SARM are easily obtained by solving a series of lightweight problems, e.g., PCA or K-SVD. Extensive experiments show that SARM outperforms the existing simple deep baseline, PCANet, and is on par with many state-of-the-art deep models, but with much lower computational load.
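To make the "unfolded and truncated regression model" idea from the abstract concrete, here is a minimal sketch (not the authors' code) of how a few ISTA iterations for non-negative sparse coding unroll into a feed-forward layer. The dictionary, thresholds, and dimensions below are placeholder assumptions; in SARM the dictionary would come from a lightweight solver such as PCA or K-SVD rather than being random.

```python
# Minimal sketch: unrolling truncated ISTA iterations for non-negative sparse
# coding into a feed-forward "layer". Dictionary D is a stand-in here; the
# paper obtains it from lightweight problems (PCA, K-SVD).
import numpy as np

def nn_soft_threshold(v, theta):
    """Non-negative soft threshold: the layer non-linearity (a shifted ReLU)."""
    return np.maximum(v - theta, 0.0)

def unrolled_sparse_coding_layer(x, D, lam=0.1, n_iter=1):
    """Approximate argmin_{z>=0} 0.5*||x - D z||^2 + lam*||z||_1 with n_iter
    truncated ISTA steps; n_iter=1 reduces to a single feed-forward layer:
    z = max(W x - theta, 0)."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    W = D.T / L                            # feed-forward weights
    theta = lam / L                        # threshold acting as a bias
    z = nn_soft_threshold(W @ x, theta)    # first (purely feed-forward) step
    for _ in range(n_iter - 1):            # optional extra unrolled iterations
        z = nn_soft_threshold(z - (D.T @ (D @ z - x)) / L, theta)
    return z

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))         # placeholder for a PCA/K-SVD dictionary
D /= np.linalg.norm(D, axis=0)             # unit-norm atoms
x = rng.standard_normal(64)                # one input patch/signal
features = unrolled_sparse_coding_layer(x, D, lam=0.1, n_iter=1)
```

Stacking several such truncated layers, each with its own dictionary fitted on the previous layer's output, is the construction the abstract describes; the non-negativity constraint is what yields the ReLU-like non-linearity.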