
Friday, April 06, 2018

Blocked Direct Feedback Alignment: Exploring the Benefits of Direct Feedback Alignment

Interesting exploration of DFA concepts!


Blocked Direct Feedback Alignment: Exploring the Benefits of Direct Feedback Alignment by Mateo Espinosa Zarlenga and Eyvind Niklasson

Backpropagation is undoubtedly the preferred method for training deep feedforward neural networks. While this method has proven its effectiveness on applications spanning a myriad of different fields, it has some well-known drawbacks. Moreover, the algorithm is arguably far from biologically plausible, which makes it very unattractive as a crucial step in any attempt to accurately model the brain. Alternatives such as feedback alignment and direct feedback alignment have recently been proposed as methods that are more biologically plausible than backpropagation while also correcting some of its known drawbacks. For this project, we explore uses of the latter method, direct feedback alignment (DFA), by looking at variants of it that could lead to improvements in both training convergence times and test-time accuracies. We present two main variants: Feedback Propagation (FP) and Blocked Direct Feedback Alignment (BDFA). These variants of DFA attempt to strike a balance between DFA and backpropagation that takes advantage of the benefits of both methods. In our experiments we empirically show that BDFA outperforms both DFA and backpropagation in terms of convergence time and test performance when used to train very deep neural networks with fully connected layers on MNIST and notMNIST.
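To make the starting point of the paper concrete, here is a minimal sketch of one DFA update on a small fully connected network, following the standard formulation of direct feedback alignment: the output error is sent directly to each hidden layer through a fixed random feedback matrix instead of being backpropagated through the forward weights. The layer sizes, tanh nonlinearity, and learning rate below are illustrative assumptions, not values taken from the paper, and the sketch does not cover the FP or BDFA variants.

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = [784, 256, 256, 10]   # e.g. MNIST-sized input, 10 classes (assumed)
lr = 0.1                      # illustrative learning rate

# Forward weights (trained) and fixed random feedback matrices (never trained).
W = [rng.standard_normal((sizes[i + 1], sizes[i])) * 0.05 for i in range(3)]
B = [rng.standard_normal((sizes[i + 1], sizes[-1])) * 0.05 for i in range(2)]

def dfa_step(x, y):
    """One DFA update on a single example x with one-hot target y."""
    # Forward pass: tanh hidden layers, linear output layer.
    h = [x]
    for i, Wi in enumerate(W):
        a = Wi @ h[-1]
        h.append(np.tanh(a) if i < len(W) - 1 else a)
    e = h[-1] - y  # output error

    # The output layer is updated with the true error, as in backprop.
    W[-1] -= lr * np.outer(e, h[-2])

    # Each hidden layer receives the output error through its own fixed
    # random matrix B[l], modulated by the local derivative of tanh.
    for l in range(len(W) - 1):
        delta = (B[l] @ e) * (1.0 - h[l + 1] ** 2)
        W[l] -= lr * np.outer(delta, h[l])

# Toy usage on a random input with class 3 as the target.
x = rng.standard_normal(784)
y = np.eye(10)[3]
dfa_step(x, y)
```

The key design point the paper builds on is visible in the loop over hidden layers: no error signal flows through the forward weights, so each layer's update depends only on its own activations and a random projection of the output error.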




