Friday, October 16, 2015

Gradient-based Hyperparameter Optimization through Reversible Learning (Autograd implementation)

This morning I asked a question about this, and Tomasz came back with this preprint: Gradient-based Hyperparameter Optimization through Reversible Learning by Dougal Maclaurin, David Duvenaud, and Ryan P. Adams:

Tuning hyperparameters of learning algorithms is hard because gradients are usually unavailable. We compute exact gradients of cross-validation performance with respect to all hyperparameters by chaining derivatives backwards through the entire training procedure. These gradients allow us to optimize thousands of hyperparameters, including step-size and momentum schedules, weight initialization distributions, richly parameterized regularization schemes, and neural network architectures. We compute hyperparameter gradients by exactly reversing the dynamics of stochastic gradient descent with momentum.
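To make the idea concrete, here is a minimal sketch of the core trick using Autograd itself: unroll a short training run inside a Python function, then take the derivative of the validation loss with respect to a hyperparameter (here, the learning rate). Note that this differentiates straight through the unrolled loop, the naive memory-hungry version of the idea, rather than the paper's exact reversal of momentum SGD, and all function and variable names below are illustrative, not taken from the authors' code.

# Hypergradient sketch: differentiate a validation loss through an
# unrolled training loop with Autograd. The paper's algorithm obtains
# the same gradients by reversing SGD-with-momentum instead of
# differentiating through a stored/unrolled trajectory.
import autograd.numpy as np
from autograd import grad

def train_loss(w, X, y):
    return np.mean((X.dot(w) - y) ** 2)

train_grad = grad(train_loss)        # dL_train/dw, courtesy of Autograd

def val_loss_after_training(log_lr, w0, X_tr, y_tr, X_val, y_val, steps=20):
    lr = np.exp(log_lr)              # optimize the log of the learning rate
    w = w0
    for _ in range(steps):           # plain gradient descent, unrolled
        w = w - lr * train_grad(w, X_tr, y_tr)
    return np.mean((X_val.dot(w) - y_val) ** 2)

# Hypergradient: derivative of the validation loss w.r.t. the hyperparameter.
hypergrad = grad(val_loss_after_training)

# Tiny synthetic regression problem.
rng = np.random.RandomState(0)
X_tr, X_val = rng.randn(50, 5), rng.randn(20, 5)
true_w = rng.randn(5)
y_tr, y_val = X_tr.dot(true_w), X_val.dot(true_w)
w0 = np.zeros(5)

log_lr = np.log(0.01)
for _ in range(10):                  # gradient descent on the hyperparameter
    log_lr = log_lr - 0.1 * hypergrad(log_lr, w0, X_tr, y_tr, X_val, y_val)
print("tuned learning rate:", np.exp(log_lr))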
The Reddit comments are here. The Autograd implementation mentioned in the paper is at https://github.com/HIPS/autograd, with examples using RNNs and LSTMs at https://github.com/HIPS/autograd/tree/master/examples
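The last line of the abstract, exactly reversing the dynamics of SGD with momentum, is what lets the authors avoid storing the whole training trajectory. The toy sketch below runs momentum SGD forward and then steps it backwards to recover the initial weights. With ordinary floating point the round trip is only approximate; the paper stores the low-order bits lost at each step so the reversal becomes exact. The update form and names here are illustrative, not taken from the hypergrad code.

# Toy sketch of "reversible learning": momentum SGD run forward, then
# stepped backwards to recover the starting point without storing the
# trajectory. Plain floats make the reversal approximate; the paper
# keeps the information lost at each step to make it exact.
import numpy as np

def loss_grad(w):
    return w                                    # gradient of 0.5 * ||w||^2

def forward_step(w, v, lr, gamma):
    v = gamma * v - (1.0 - gamma) * loss_grad(w)
    w = w + lr * v
    return w, v

def reverse_step(w, v, lr, gamma):
    w = w - lr * v                              # undo the weight update
    v = (v + (1.0 - gamma) * loss_grad(w)) / gamma  # undo the velocity update
    return w, v

lr, gamma, steps = 0.1, 0.9, 100
w0 = np.random.RandomState(0).randn(5)
w, v = w0.copy(), np.zeros(5)

for _ in range(steps):
    w, v = forward_step(w, v, lr, gamma)
for _ in range(steps):
    w, v = reverse_step(w, v, lr, gamma)

print("max error in recovered w0:", np.max(np.abs(w - w0)))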
 
h/t Tomasz
 
 
Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there!
Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.
