Monday, April 15, 2019

CfP: 2019 Conference on Mathematical Theory of Deep Neural Networks (DeepMath 2019)


Like last year, Adam sent me the following several days ago:
Dear Igor,

I hope all is well with you. I was just contacting you to let you know about the announcement for the 2019 Conference on Mathematical Theory of Deep Neural Networks (DeepMath 2019). We're very excited for this event and I think it would be of interest to yourself and to many of your readers/followers. It would be great if you could propagate the announcement below so that we can really bring together researchers working on this important topic!

Warmest regards,

-Adam

-----------------------
Adam S. Charles
Post-doctoral Associate
Princeton Neuroscience Institute
Princeton, NJ, 08544
The videos of the previous installment were featured here. Here is the announcement for this year's edition:

-------------------------------------
ANNOUNCEMENT AND CALL FOR CONTRIBUTIONS

The 2019 Conference on Mathematical Theory of Deep Neural Networks (DeepMath 2019), Princeton Club, New York City, Oct 31-Nov 1, 2019. Web: https://www.deepmath-conference.com/

======= Important Dates =======
Submission deadline for 1-page abstracts: June 28, 2019
Notification: TBA.
Conference: Oct 31-Nov 1, 2019.

======= Call for abstracts =======
In addition to the invited speakers listed below, we invite 1-page non-archival abstract submissions. Abstracts will be reviewed double-blind and presented as posters.

To complement the wealth of conferences focused on applications, all submissions for DeepMath 2019 must target theoretical and mechanistic understanding of the underlying properties of neural networks.
Insights may come from any discipline and we encourage submissions from researchers working in computer science, engineering, mathematics, neuroscience, physics, psychology, statistics, or related fields.

Topics may address any area of deep learning theory, including architectures, computation, expressivity, generalization, optimization, and representations, and may apply to any or all network types, including fully connected, recurrent, convolutional, randomly connected, or other network topologies.

======= Confirmed speakers =======
Anima Anandkumar (Caltech), Yasaman Bahri (Google), Minmin Chen (Google), Michael Elad (Technion), Surya Ganguli (Stanford), Tomaso Poggio (MIT), David Schwab (CUNY), Shai Shalev-Shwartz (Hebrew University), Haim Sompolinsky (Hebrew University and Harvard), and Naftali Tishby (Hebrew University).

======= Workshop topic =======
Recent advances in deep neural networks (DNNs), combined with open, easily accessible implementations, have made DNNs a powerful, versatile method used widely in both machine learning and neuroscience. These practical advances, however, have far outpaced a formal understanding of the networks and their training. Recently, long-overdue theoretical results have begun to emerge, shedding light on the properties of large, adaptive, distributed learning architectures.

Following the success of the 2018 IAS-Princeton joint symposium on the same topic (https://sites.google.com/site/princetondeepmath/home), the 2019 meeting is more centrally located and broader in scope, but remains focused on rigorous theoretical understanding of deep neural networks. 



Join the CompressiveSensing subreddit or the Facebook page and post there!
