
Friday, February 12, 2016

BinaryNet: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1 - implementation -

Very interesting in terms of eventual hardware implementation, and in line with what we already seem to know, namely that the usual architectures are redundant:


BinaryNet: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1 by Matthieu Courbariaux, Yoshua Bengio

We introduce BinaryNet, a method which trains DNNs with binary weights and activations when computing parameters' gradient. We show that it is possible to train a Multi Layer Perceptron (MLP) on MNIST and ConvNets on CIFAR-10 and SVHN with BinaryNet and achieve nearly state-of-the-art results. At run-time, BinaryNet drastically reduces memory usage and replaces most multiplications by 1-bit exclusive-not-or (XNOR) operations, which might have a big impact on both general-purpose and dedicated Deep Learning hardware. We wrote a binary matrix multiplication GPU kernel with which it is possible to run our MNIST MLP 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy. The code for BinaryNet is available.
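The key mechanism is easy to sketch: weights and activations are binarized with a sign function in the forward pass, while the gradient is passed through the binarization as if it were the identity (a straight-through estimator, zeroed where the pre-activation is large). Below is a minimal NumPy illustration of that idea; it is only a sketch of the technique, not the authors' code, and the layer layout and helper names are made up.

```python
import numpy as np

def binarize(x):
    """Deterministic binarization: map every entry to +1 or -1 via its sign."""
    return np.where(x >= 0, 1.0, -1.0)

def forward(x, W_real):
    """One binary layer: binarize the real-valued weights and the incoming
    activations before the matrix product."""
    Wb = binarize(W_real)            # binary weights used in the product
    a = x @ Wb                       # pre-activation
    h = binarize(a)                  # binary activation passed downstream
    return h, (x, Wb, a)

def backward(grad_h, cache):
    """Backward pass with a straight-through estimator: the gradient of
    sign() is treated as identity, zeroed where |pre-activation| > 1."""
    x, Wb, a = cache
    grad_a = grad_h * (np.abs(a) <= 1.0)   # straight-through estimator
    grad_W = x.T @ grad_a                  # gradient w.r.t. the *real* weights
    grad_x = grad_a @ Wb.T
    return grad_W, grad_x

# Toy usage: one layer, one SGD step applied to the real-valued weights.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
W = 0.1 * rng.standard_normal((8, 16))
h, cache = forward(x, W)
gW, gx = backward(np.ones_like(h), cache)
W -= 0.01 * gW   # real-valued weights accumulate updates; binary copies are recomputed each pass
```

Note that it is the real-valued weights that accumulate the updates; the binarized copies are recomputed at every forward pass.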
 The implementation is here: https://github.com/MatthieuCourbariaux/BinaryNet/tree/master/Train-time
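At run time, with every operand constrained to +1 or -1, a dot product reduces to an XNOR between bit-packed vectors followed by a popcount, which is what the binary matrix multiplication GPU kernel mentioned in the abstract exploits. Here is a toy Python check of that arithmetic identity (the helper names are hypothetical, and the paper's kernel runs on the GPU, not like this):

```python
import numpy as np

def pack_pm1(v):
    """Encode a +/-1 vector as bits: +1 -> 1, -1 -> 0, packed into a Python int."""
    bits = 0
    for i, s in enumerate(v):
        if s > 0:
            bits |= 1 << i
    return bits

def xnor_dot(a_bits, b_bits, n):
    """Dot product of two +/-1 vectors of length n from their bit encodings:
    matching signs contribute +1, mismatches -1, so dot = 2*popcount(XNOR) - n."""
    xnor = ~(a_bits ^ b_bits) & ((1 << n) - 1)   # 1 wherever the signs agree
    matches = bin(xnor).count("1")               # popcount
    return 2 * matches - n

# Check against an ordinary floating-point dot product.
rng = np.random.default_rng(1)
a = np.where(rng.standard_normal(64) >= 0, 1, -1)
b = np.where(rng.standard_normal(64) >= 0, 1, -1)
assert xnor_dot(pack_pm1(a), pack_pm1(b), 64) == int(a @ b)
```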

 
