As Sebastien pointed out, the COLT 2016 videos are out. Here is another one: Matus Telgarsky on benefits of depth in neural networks.
Representation Benefits of Deep Feedforward Networks by Matus Telgarsky
This note provides a family of classification problems, indexed by a positive integer k, where all shallow networks with fewer than exponentially (in k) many nodes exhibit error at least 1/6, whereas a deep network with 2 nodes in each of 2k layers achieves zero error, as does a recurrent network with 3 distinct nodes iterated k times. The proof is elementary, and the networks are standard feedforward networks with ReLU (Rectified Linear Unit) nonlinearities.
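The heart of the construction is a "triangle" map, computable by a single layer with 2 ReLU nodes, that doubles the number of oscillations each time it is composed with itself. Here is a minimal NumPy sketch of that flavor of the argument; the helper names are mine, not from the paper:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def triangle_layer(x):
    # One layer with 2 ReLU nodes computing the "mirror map" on [0, 1]:
    # m(x) = 2x on [0, 1/2] and m(x) = 2(1 - x) on [1/2, 1].
    return 2.0 * relu(x) - 4.0 * relu(x - 0.5)

def deep_net(x, depth):
    # Composing the 2-node layer `depth` times yields a sawtooth with
    # 2^(depth - 1) teeth: exponentially many oscillations from only
    # 2 * depth nodes in total.
    for _ in range(depth):
        x = triangle_layer(x)
    return x

# Label points by thresholding the deep net's output at 1/2. Matching
# these rapidly alternating labels is what forces a shallow ReLU net
# to use exponentially (in k) many nodes in Telgarsky's lower bound.
xs = np.linspace(0.0, 1.0, 17)
labels = (deep_net(xs, depth=4) > 0.5).astype(int)
print(labels)
```

Running this shows the labels alternating along [0, 1]; each extra pair of layers doubles the number of sign changes a shallow network would have to reproduce.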