Much like one of the commenters on YouTube, I would have preferred fewer questions during the presentation, but it is a fascinating subject. From Guss' website:
- On Characterizing the Capacity of Neural Networks using Algebraic Topology. William H. Guss, Ruslan Salakhutdinov. Preprint. NIPS 2017, DLTP Workshop. [arxiv] [poster]
- Towards Neural Homology Theory. William H. Guss, Ruslan Salakhutdinov. Talk, Microsoft Research, 2018. [slides]
- Eigen: A Step Towards Conversational AI. William H. Guss, James Bartlett, Phillip Kuznetsov, Piyush Patil. Alexa Prize Proceedings 2017. [proceedings]
- Deep Function Machines: Generalized Neural Networks for Topological Layer Expression. William H. Guss. Preprint. [arXiv]
- Universal Approximation of Nonlinear Operators on Banach Space. William H. Guss. Machine Learning at Berkeley Research Symposium 2016. [pdf]
- Backpropagation-Free Parallel Deep Reinforcement Learning. William H. Guss, James Bartlett, Noah Golmant, Phillip Kuznetsov, Max Johansen. Preprint (WIP). [pdf]
- Parameter Reduction using Operator Neural Networks. William H. Guss. Microsoft Research Symposium 2016. Best Poster Award. [poster]
- Functional Neural Networks Evaluated by Weierstrass Polynomials. William H. Guss, Phillip Kuznetsov, Patrick Chen. Intel ISEF 2015. Pittsburgh, Pennsylvania. [AAAI’ Honorable Mention] [ASA’ Honorable Mention]
Here is the preprint: On Characterizing the Capacity of Neural Networks using Algebraic Topology by William H. Guss and Ruslan Salakhutdinov
The learnability of different neural architectures can be characterized directly by computable measures of data complexity. In this paper, we reframe the problem of architecture selection as understanding how data determines the most expressive and generalizable architectures suited to that data, beyond inductive bias. After suggesting algebraic topology as a measure for data complexity, we show that the power of a network to express the topological complexity of a dataset in its decision region is a strictly limiting factor in its ability to generalize. We then provide the first empirical characterization of the topological capacity of neural networks. Our empirical analysis shows that at every level of dataset complexity, neural networks exhibit topological phase transitions. This observation allowed us to connect existing theory to empirically driven conjectures on the choice of architectures for fully-connected neural networks.
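To get a hands-on feel for what "measuring the topological complexity of a dataset" can mean in practice, here is a minimal sketch that estimates Betti numbers of a toy point cloud via persistent homology. This is not the authors' code: it assumes the third-party ripser and numpy packages, and the two-circle dataset, the persistence threshold, and the long-lifetime heuristic are illustrative choices rather than the procedure used in the paper.

```python
# Sketch: estimate the topological complexity (Betti numbers) of a dataset
# from its persistence diagrams. Assumes `pip install ripser numpy`.
import numpy as np
from ripser import ripser


def two_circles(n_per_circle=200, radii=(1.0, 3.0), noise=0.05, seed=0):
    """Toy dataset with known topology: two concentric circles,
    i.e. two connected components and two loops (beta_0 = beta_1 = 2)."""
    rng = np.random.default_rng(seed)
    points = []
    for r in radii:
        theta = rng.uniform(0.0, 2.0 * np.pi, n_per_circle)
        xy = np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)
        points.append(xy + noise * rng.standard_normal(xy.shape))
    return np.concatenate(points, axis=0)


def estimate_betti_numbers(X, maxdim=1, persistence_threshold=0.5):
    """Crude Betti-number estimate: count persistence features that either
    never die or live longer than a hand-picked threshold."""
    dgms = ripser(X, maxdim=maxdim)['dgms']  # one (birth, death) diagram per dimension
    betti = []
    for dgm in dgms:
        deaths = dgm[:, 1]
        n_essential = int(np.sum(~np.isfinite(deaths)))      # classes that never die
        lifetimes = deaths[np.isfinite(deaths)] - dgm[np.isfinite(deaths), 0]
        betti.append(n_essential + int(np.sum(lifetimes > persistence_threshold)))
    return betti


if __name__ == "__main__":
    X = two_circles()
    # Expected to be close to [2, 2] for this toy dataset.
    print("Estimated Betti numbers [beta_0, beta_1]:", estimate_betti_numbers(X))
```

In the paper's framing, numbers of this kind summarize how topologically complicated the data is; the empirical question is then how wide or deep a network must be before its decision region can express that complexity.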
Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there!
Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.