As Sebastien pointed out the COLT 2016 videos are out. Here is another one: Matus Telgarsky on Benefits of depth in neural networks.
Representation Benefits of Deep Feedforward Networks by Matus Telgarsky
This note provides a family of classification problems, indexed by a positive integer k, where all shallow networks with fewer than exponentially (in k) many nodes exhibit error at least 1/6, whereas a deep network with 2 nodes in each of 2k layers achieves zero error, as does a recurrent network with 3 distinct nodes iterated k times. The proof is elementary, and the networks are standard feedforward networks with ReLU (Rectified Linear Unit) nonlinearities.
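The separation described in the abstract hinges on composition: a tiny ReLU layer computing a tent map, when stacked k times, produces a sawtooth with exponentially many linear pieces, which a shallow network would need exponentially many units to match. The sketch below is an illustrative reconstruction of that idea (the tent-map construction), not code from the paper; the function names are my own.

```python
def relu(x):
    """Rectified Linear Unit."""
    return max(x, 0.0)

def tent(x):
    """Tent map on [0, 1] computed by a single 2-node ReLU layer:
    tent(x) = 2*relu(x) - 4*relu(x - 1/2),
    which equals 2x on [0, 1/2] and 2 - 2x on [1/2, 1]."""
    return 2.0 * relu(x) - 4.0 * relu(x - 0.5)

def deep_net(x, k):
    """Compose the 2-node tent layer k times. The result is a
    sawtooth on [0, 1] with 2**k linear pieces, so the number of
    oscillations grows exponentially with depth."""
    for _ in range(k):
        x = tent(x)
    return x
```

At the grid points i/2**k the k-fold composition alternates between 0 and 1, which is what forces any shallow approximator, with its polynomially many linear pieces, to misclassify a constant fraction of the inputs.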