BTW we've just released a @TensorFlow implementation https://t.co/DAXKjnXnu2 #TensorizingNeuralNetworks #iclr2016 — Alexander Novikov (@SashaVNovikov) May 3, 2016
From the Github page:
TensorNet
This is a TensorFlow implementation of the Tensor Train layer (TT-layer) of a neural network. In short, the TT-layer acts as a fully-connected layer but is much more compact, making it possible to use a large number of hidden units without slowing down learning and inference.
For additional information see the following paper:
Tensorizing Neural Networks
Alexander Novikov, Dmitry Podoprikhin, Anton Osokin, Dmitry Vetrov; In Advances in Neural Information Processing Systems 28 (NIPS-2015) [arXiv].
Please cite it if you write a scientific paper using this code.
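To make the "more compact" claim concrete, here is a minimal NumPy sketch (not the authors' code; the mode sizes and TT-rank are made up for illustration) of how a TT-matrix stores a dense weight matrix as a chain of small cores:

```python
import numpy as np

# A TT-matrix with two modes: a dense W of shape (m1*m2, n1*n2) is stored
# as cores G1 of shape (1, m1, n1, r) and G2 of shape (r, m2, n2, 1),
# where r is the TT-rank. All sizes here are hypothetical.
rng = np.random.default_rng(0)
(m1, m2), (n1, n2), r = (4, 8), (4, 8), 3

G1 = rng.standard_normal((1, m1, n1, r))
G2 = rng.standard_normal((r, m2, n2, 1))

# Reconstruct the dense matrix the cores represent:
# W[(i1,i2), (j1,j2)] = G1[:, i1, j1, :] @ G2[:, i2, j2, :]
W = np.einsum('aijb,bklc->ikjl', G1, G2).reshape(m1 * m2, n1 * n2)

dense_params = W.size             # 32 * 32 = 1024 entries in the dense matrix
tt_params = G1.size + G2.size     # 48 + 192 = 240 entries in the TT cores
print(W.shape, dense_params, tt_params)
```

In practice the layer never materializes `W`: the matrix-by-vector product is computed directly from the cores, which is what keeps inference fast as the number of hidden units grows.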
Here is the paper we had mentioned earlier: Tensorizing Neural Networks by Alexander Novikov, Dmitry Podoprikhin, Anton Osokin, Dmitry Vetrov
Deep neural networks currently demonstrate state-of-the-art performance in several domains. At the same time, models of this class are very demanding in terms of computational resources. In particular, commonly used fully-connected layers require a large amount of memory, which makes it hard to deploy the models on low-end devices and hinders further growth of model size. In this paper we convert the dense weight matrices of the fully-connected layers to the Tensor Train format, so that the number of parameters is reduced by a huge factor while the expressive power of the layer is preserved. In particular, for the Very Deep VGG networks we report a compression factor of up to 200,000 for the dense weight matrix of a fully-connected layer, leading to a compression factor of up to 7 for the whole network.
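The scale of such compression factors can be sanity-checked with back-of-the-envelope arithmetic. The sketch below counts TT parameters for a hypothetical factorization of a VGG fully-connected layer (the mode sizes and TT-ranks are assumptions for illustration, not the exact configuration reported in the paper):

```python
# Parameter count for a TT fully-connected layer. Each core has shape
# (ranks[k], m, n, ranks[k+1]); ranks includes boundary ranks r_0 = r_d = 1.
# All specific mode sizes and ranks below are illustrative assumptions.
def tt_param_count(modes_in, modes_out, ranks):
    return sum(ranks[k] * m * n * ranks[k + 1]
               for k, (m, n) in enumerate(zip(modes_in, modes_out)))

m_in = (7, 7, 16, 32)     # product = 25088, a VGG fc-layer input size
m_out = (4, 8, 8, 16)     # product = 4096, the fc-layer output size
dense = 25088 * 4096      # ~1.0e8 parameters in the dense weight matrix
tt = tt_param_count(m_in, m_out, (1, 2, 2, 2, 1))
print(dense, tt, dense / tt)  # a few thousand TT parameters vs ~100M dense
```

Even this crude rank-2 factorization shrinks the weight matrix by four to five orders of magnitude, which is why the per-layer figure in the abstract dwarfs the whole-network factor: convolutional layers are left uncompressed.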