One of the seminars in Yann LeCun's series of classes at the Collège de France features talks by invited speakers. Today, here is the video of "Optimisation et entrainement de réseaux récurrents" (Optimization and training of recurrent networks) by Yann Ollivier. The video is in French, but the paper on which it relies is in English: Practical Riemannian Neural Networks by Gaétan Marceau-Caron and Yann Ollivier.
An implementation is on Gaetan's page: https://www.lri.fr/~marceau/code/riemaNNv1.zip
We provide the first experimental results on non-synthetic datasets for the quasi-diagonal Riemannian gradient descents for neural networks introduced in [Ollivier, 2015]. These include the MNIST, SVHN, and FACE datasets as well as a previously unpublished electroencephalogram dataset. The quasi-diagonal Riemannian algorithms consistently beat simple stochastic gradient descent by a varying margin. The computational overhead with respect to simple backpropagation is around a factor of 2. Perhaps more interestingly, these methods also reach their final performance quickly, thus requiring fewer training epochs and a smaller total computation time.
We also present an implementation guide to these Riemannian gradient descents for neural networks, showing how the quasi-diagonal versions can be implemented with minimal effort on top of existing routines which compute gradients.
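As a rough illustration of how such an update can sit on top of existing gradient routines, here is a minimal NumPy sketch of one quasi-diagonal step for a single unit, following the quasi-diagonal reduction described in [Ollivier, 2015]. The function name, argument layout, and learning-rate choice are illustrative assumptions, not the paper's actual implementation; see Gaétan's code linked above for the real thing.

```python
import numpy as np

def quasi_diagonal_step(w, g, A_diag, A_0, lr=0.1):
    """One quasi-diagonal Riemannian update for a single unit.

    w, g    : weight and gradient vectors; index 0 is the bias.
    A_diag  : diagonal metric entries A_ii (A_diag[0] is unused here).
    A_0     : cross entries between the bias and each weight,
              with A_0[0] = A_00, the bias-bias metric entry.

    Illustrative sketch only: variable names and the metric
    estimation are assumptions, not the authors' code.
    """
    A00 = A_0[0]
    dw = np.empty_like(w)
    # For each weight i >= 1, invert the 2x2 metric block {0, i}.
    denom = A00 * A_diag[1:] - A_0[1:] ** 2
    dw[1:] = (A00 * g[1:] - A_0[1:] * g[0]) / denom
    # Bias: diagonal step corrected by the bias-weight cross terms.
    dw[0] = g[0] / A00 - (A_0[1:] / A00) @ dw[1:]
    return w - lr * dw
```

When the cross terms `A_0[1:]` vanish, the step reduces to an ordinary diagonally preconditioned gradient step, which makes the "minimal effort on top of existing routines" claim concrete: only the metric entries `A_diag` and `A_0` need to be accumulated alongside the usual backpropagated gradients.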