Hi Igor,
Just wanted to share our recent paper on pruning neural networks, which makes a strong connection with the compressed sensing literature:
- Paper: "Fast convex pruning of deep neural networks", https://arxiv.org/abs/1806.06457
- Code + implementation instructions: https://dnntoolbox.github.io/Net-Trim/
Thanks,
-Ali

Thanks Ali!
Fast Convex Pruning of Deep Neural Networks by Alireza Aghasi, Afshin Abdi, Justin Romberg
We develop a fast, tractable technique called Net-Trim for simplifying a trained neural network. The method is a convex post-processing module, which prunes (sparsifies) a trained network layer by layer, while preserving the internal responses. We present a comprehensive analysis of Net-Trim from both the algorithmic and sample complexity standpoints, centered on a fast, scalable convex optimization program. Our analysis includes consistency results between the initial and retrained models before and after Net-Trim application and guarantees on the number of training samples needed to discover a network that can be expressed using a certain number of nonzero terms. Specifically, if there is a set of weights that uses at most s terms that can re-create the layer outputs from the layer inputs, we can find these weights from O(s log N/s) samples, where N is the input size. These theoretical results are similar to those for sparse regression using the Lasso, and our analysis uses some of the same recently-developed tools (namely recent results on the concentration of measure and convex analysis). Finally, we propose an algorithmic framework based on the alternating direction method of multipliers (ADMM), which allows a fast and simple implementation of Net-Trim for network pruning and compression.
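To get a feel for the layer-by-layer idea, here is a minimal Python sketch. It is not the authors' code (their implementation is linked above): the actual Net-Trim program handles the ReLU nonlinearity through convex constraints and is solved with ADMM, whereas this stand-in simply fits one layer's responses with an l1-regularized least squares solved by proximal gradient (ISTA). The function name prune_layer and all parameter values are made up for illustration.

# Simplified stand-in for layer-wise convex pruning in the spirit of Net-Trim.
# NOT the authors' implementation; it solves an l1-regularized least-squares
# surrogate for a single layer with ISTA (proximal gradient).
import numpy as np

def prune_layer(X, Y, lam=0.1, n_iter=500):
    # X : (n_samples, n_in)  layer inputs
    # Y : (n_samples, n_out) layer responses to be preserved
    # lam : l1 penalty controlling how aggressively weights are zeroed out
    n_in, n_out = X.shape[1], Y.shape[1]
    W = np.zeros((n_in, n_out))
    L = np.linalg.norm(X, 2) ** 2                 # Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = X.T @ (X @ W - Y)                  # gradient of 0.5*||XW - Y||_F^2
        Z = W - grad / L                          # gradient step
        W = np.sign(Z) * np.maximum(np.abs(Z) - lam / L, 0.0)  # soft-thresholding (prox of l1)
    return W

# Toy usage: recover a sparse weight matrix from a layer's input/output pairs.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
W_true = rng.standard_normal((50, 20)) * (rng.random((50, 20)) < 0.1)
Y = X @ W_true                                    # pre-activation responses of the layer
W_sparse = prune_layer(X, Y, lam=0.05)
print("nonzeros:", np.count_nonzero(np.abs(W_sparse) > 1e-6), "of", W_sparse.size)

The l1 penalty plays the same role as in Lasso regression, which is exactly the connection the abstract draws: with roughly O(s log N/s) samples, a sparse set of weights that re-creates the layer responses can be recovered.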