Yann LeCun mentioned it on his Twitter feed: the videos and slides of the IPAM workshop on New Deep Learning Techniques are out. Enjoy!
Samuel Bowman (New York University)
Toward natural language semantics in learned representations

Emily Fox (University of Washington)
Interpretable and Sparse Neural Network Time Series Models for Granger Causality Discovery

Ellie Pavlick (University of Pennsylvania)
Should we care about linguistics?

Leonidas Guibas (Stanford University)
Knowledge Transport Over Visual Data

Yann LeCun (New York University)
Public Lecture: Deep Learning and the Future of Artificial Intelligence

Alán Aspuru-Guzik (Harvard University)
Generative models for the inverse design of molecules and materials

Daniel Rueckert (Imperial College)
Deep learning in medical imaging: Techniques for image reconstruction, super-resolution and segmentation

Kyle Cranmer (New York University)
Deep Learning in the Physical Sciences

Stéphane Mallat (École Normale Supérieure)
Deep Generative Networks as Inverse Problems

Michael Elad (Technion - Israel Institute of Technology)
Sparse Modeling in Image Processing and Deep Learning

Yann LeCun (New York University)
Public Lecture: AI Breakthroughs & Obstacles to Progress, Mathematical and Otherwise

Xavier Bresson (Nanyang Technological University, Singapore)
Convolutional Neural Networks on Graphs

Federico Monti (Universita della Svizzera Italiana)
Deep Geometric Matrix Completion: a Geometric Deep Learning approach to Recommender Systems

Joan Bruna (New York University)
On Computational Hardness with Graph Neural Networks

Jure Leskovec (Stanford University)
Large-scale Graph Representation Learning

Arthur Szlam (Facebook)
Composable planning with attributes

Yann LeCun (New York University)
A Few (More) Approaches to Unsupervised Learning

Sanja Fidler (University of Toronto)
Teaching Machines with Humans in the Loop

Raquel Urtasun (University of Toronto)
Deep Learning for Self-Driving Cars

Pratik Chaudhari (University of California, Los Angeles (UCLA))
Unraveling the mysteries of stochastic gradient descent on deep networks

Stefano Soatto (University of California, Los Angeles (UCLA))
Emergence Theory of Deep Learning

Tom Goldstein (University of Maryland)
What do neural net loss functions look like?

Stanley Osher (University of California, Los Angeles (UCLA))
New Techniques in Optimization and Their Applications to Deep Learning and Related Inverse Problems

Michael Bronstein (USI Lugano, Switzerland)
Deep functional maps: intrinsic structured prediction for dense shape correspondence

Sainbayar Sukhbaatar (New York University)
Deep Architecture for Sets and Its Application to Multi-agent Communication

Zuowei Shen (National University of Singapore)
Deep Learning: Approximation of functions by composition

Wei Zhu (Duke University)
LDMnet: low dimensional manifold regularized neural networks

Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !