
Friday, December 16, 2016

Interpretable Recurrent Neural Networks Using Sequential Sparse Recovery

During NIPS, I felt there was a sense that we are missing some clarity about what a good learning architecture should be: learning to learn and hyperparameter search are exciting areas, but there is a sense that we have not found the right way of thinking about these things. I met Scott at the poster session of the NIPS workshop on Interpretable Machine Learning for Complex Systems, which was taking place in a hotel adjacent to the main event. His poster and the attendant paper below seem to provide some guidance on how we should think about RNN architectures. It sure looks like an example of the great convergence. I also note an interesting connection to Residual Networks, and in turn I wonder how phase transitions and AMP solvers might have a bearing on future RNN structures.

 

 
Interpretable Recurrent Neural Networks Using Sequential Sparse Recovery by Scott Wisdom, Thomas Powers, James Pitton, Les Atlas

Recurrent neural networks (RNNs) are powerful and effective for processing sequential data. However, RNNs are usually considered "black box" models whose internal structure and learned parameters are not interpretable. In this paper, we propose an interpretable RNN based on the sequential iterative soft-thresholding algorithm (SISTA) for solving the sequential sparse recovery problem, which models a sequence of correlated observations with a sequence of sparse latent vectors. The architecture of the resulting SISTA-RNN is implicitly defined by the computational structure of SISTA, which results in a novel stacked RNN architecture. Furthermore, the weights of the SISTA-RNN are perfectly interpretable as the parameters of a principled statistical model, which in this case include a sparsifying dictionary, iterative step size, and regularization parameters. In addition, on a particular sequential compressive sensing task, the SISTA-RNN trains faster and achieves better performance than conventional state-of-the-art black box RNNs, including long short-term memory (LSTM) RNNs.
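To make the unrolling idea concrete, here is a minimal NumPy sketch of the core mechanism: ISTA-style soft-thresholding iterations at each time step, warm-started from the previous time step's sparse code, so that the unrolled computation looks like a stacked RNN with one layer per iteration. It is a simplified stand-in for the full SISTA update in the paper (which also includes a learned temporal regularization term and learns the dictionary, step size, and thresholds by backpropagation); the function and parameter names below are mine, not the authors'.

```python
import numpy as np

def soft_threshold(x, lam):
    """Elementwise soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def sista_rnn_forward(Y, A, D, lam=0.1, alpha=None, K=3):
    """Simplified sequential sparse recovery by unrolled soft-thresholding.

    Y : (T, m) sequence of observations y_t
    A : (m, n) measurement matrix
    D : (n, n) sparsifying dictionary (x_t = D @ h_t, with h_t sparse)
    K : soft-thresholding iterations per time step, i.e. the number of
        stacked "layers" in the unrolled network.
    Returns the (T, n) sequence of sparse codes h_t.
    """
    W = A @ D                               # effective operator acting on the sparse code h
    if alpha is None:
        alpha = np.linalg.norm(W, 2) ** 2   # step size 1/alpha from the largest singular value
    T, n = Y.shape[0], D.shape[1]
    H = np.zeros((T, n))
    h_prev = np.zeros(n)
    for t in range(T):
        h = h_prev                          # warm start from the previous time step (temporal coupling)
        for _ in range(K):                  # one unrolled layer per iteration
            grad = W.T @ (W @ h - Y[t])     # gradient of the data-fidelity term
            h = soft_threshold(h - grad / alpha, lam / alpha)
        H[t] = h
        h_prev = h
    return H
```

In the SISTA-RNN, exactly these quantities (D, the step size, the thresholds) become the trainable weights, which is why the learned parameters remain interpretable as parameters of the underlying statistical model.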

 
 
