ACDC: A Structured Efficient Linear Layer by Marcin Moczulski, Misha Denil, Jeremy Appleyard, Nando de Freitas
The linear layer is one of the most pervasive modules in deep learning representations. However, it requires
O(N²) parameters and O(N²) operations. These costs can be prohibitive in mobile applications or prevent scaling in many domains. Here, we introduce a deep, differentiable, fully-connected neural network module composed of diagonal matrices of parameters, A and D, and the discrete cosine transform C. The core module, structured as ACDC⁻¹, has O(N) parameters and incurs O(N log N) operations. We present theoretical results showing how deep cascades of ACDC layers approximate linear layers. ACDC is, however, a stand-alone module and can be used in combination with any other types of module. In our experiments, we show that it can indeed be successfully interleaved with ReLU modules in convolutional neural networks for image recognition. Our experiments also study critical factors in the training of these structured modules, including initialization and depth. Finally, this paper also provides a connection between structured linear transforms used in deep learning and the field of Fourier optics, illustrating how ACDC could in principle be implemented with lenses and diffractive elements.
An implementation of ACDC is available at: https://github.com/mdenil/acdc-torch
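To make the structure concrete, here is a minimal sketch of a single ACDC⁻¹ forward pass in NumPy/SciPy. The function name `acdc_layer` and the use of an orthonormal DCT are my own illustrative choices, not taken from the authors' Torch implementation: A and D are stored as length-N vectors (diagonal matrices), C and C⁻¹ are the forward and inverse discrete cosine transforms, so the layer uses 2N parameters and O(N log N) operations.

```python
import numpy as np
from scipy.fft import dct, idct

def acdc_layer(x, a, d):
    # Compute y = A C D C^{-1} x, right to left:
    #   C^{-1} x  -> inverse DCT (O(N log N))
    #   D (...)   -> elementwise scaling by the diagonal d (O(N))
    #   C (...)   -> forward DCT (O(N log N))
    #   A (...)   -> elementwise scaling by the diagonal a (O(N))
    return a * dct(d * idct(x, norm="ortho"), norm="ortho")

N = 8
rng = np.random.default_rng(0)
x = rng.standard_normal(N)
a = rng.standard_normal(N)  # diagonal of A: N parameters
d = rng.standard_normal(N)  # diagonal of D: N parameters
y = acdc_layer(x, a, d)
```

A sanity check on the structure: with a = d = 1 the module reduces to C C⁻¹ = I, so the layer is the identity map; with the orthonormal DCT this holds exactly.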