Hi Igor,
Thanks for posting about our recent workshop, the Institute for Advanced Study - Princeton University Joint Symposium on "The Mathematical Theory of Deep Neural Networks", last month. I just wanted to follow up and let you know that, for those who missed the live-stream, we have put videos of all the talks online:
https://www.youtube.com/playlist?list=PLWQvhvMdDChyI5BdVbrthz5sIRTtqV6Jw
I hope you and your readers enjoy!
Cheers,
-Adam
----------------------------
Adam Charles
Post-doctoral associate
Princeton Neuroscience Institute
Princeton, NJ, 08550
Thanks, Adam! Here are the videos:
1. 9:10 Adam Charles: Introductory remarks
2. 56:17 Sanjeev Arora: Why do deep nets generalize, that is, predict well on unseen data
3. 59:34 Sebastian Musslick: Multitasking Capability vs Learning Efficiency in Neural Network Architectures
4. 48:01 Joan Bruna: On the Optimization Landscape of Neural Networks
5. 59:44 Andrew Saxe: A theory of deep learning dynamics: Insights from the linear case
6. 51:13 Anna Gilbert: Toward Understanding the Invertibility of Convolutional Neural Networks
7. 1:03:57 Nadav Cohen: On the Optimization of Deep Networks: Implicit Acceleration by Overparameterization