Tuesday, June 25, 2019

CfP: The 2019 Conference on Mathematical Theory of Deep Neural Networks (DeepMath 2019), Princeton Club, New York City, Oct 31-Nov 1, 2019.

** Nuit Blanche is now on Twitter: @NuitBlog **



Adam let me know of the following:

Dear Igor,
I hope this email finds you well. Thank you for posting our conference announcement on your blog earlier this year. As the submission deadline is fast approaching (June 28th), I was hoping you would post it once again. We have an amazing lineup of speakers (below), and I expect the subject matter is very much in line with your readership's interests!
Cheers!

-Adam


Here is the announcement:

ANNOUNCEMENT
The 2019 Conference on Mathematical Theory of Deep Neural Networks (DeepMath 2019)
Princeton Club, New York City, Oct 31-Nov 1, 2019.
Web: https://www.deepmath-conference.com/
======= Important Dates =======
Submission deadline for 1-page abstracts: June 28, 2019
Notification: TBA
Conference: Oct 31-Nov 1, 2019
======= Speakers =======
Sanjeev Arora (Princeton University, Keynote Speaker), Anima Anandkumar (CalTech), Yasaman Bahri (Google), Minmin Chen (Google), Michael Elad (Technion), Surya Ganguli (Stanford), Tomaso Poggio (MIT), David Schwab (CUNY), Shai Shalev-Shwartz (Hebrew University), Haim Sompolinsky (Hebrew University and Harvard), and Naftali Tishby (Hebrew University).
======= Call for Abstracts =======
In addition to these high-profile invited speakers, we invite 1-page non-archival abstract submissions. Abstracts will be reviewed double-blind and presented as posters.
To complement the wealth of conferences focused on applications, all submissions for DeepMath 2019 must target theoretical and mechanistic understanding of the underlying properties of neural networks.
Insights may come from any discipline and we encourage submissions from researchers working in computer science, engineering, mathematics, neuroscience, physics, psychology, statistics, or related fields.
Topics may address any area of deep learning theory, including architectures, computation, expressivity, generalization, optimization, representations, and may apply to any or all network types including fully connected, recurrent, convolutional, randomly connected, or other network topologies.
======= Conference Topic =======
Recent advances in deep neural networks (DNNs), combined with open, easily accessible implementations, have made DNNs a powerful, versatile method used widely in both machine learning and neuroscience. These advances in practical results, however, have far outpaced a formal understanding of these networks and their training. Recently, long-past-due theoretical results have begun to emerge, shedding light on the properties of large, adaptive, distributed learning architectures.
Following the success of the 2018 IAS-Princeton joint symposium on the same topic (https://sites.google.com/site/princetondeepmath/home), the 2019 meeting is more centrally located and broader in scope, but remains focused on rigorous theoretical understanding of deep neural networks.

Follow @NuitBlog or join the CompressiveSensing Reddit, the Facebook page, the Compressive Sensing group on LinkedIn, or the Advanced Matrix Factorization group on LinkedIn.


Other links:
Paris Machine Learning: Meetup.com || @Archives || LinkedIn || Facebook || @ParisMLGroup
About LightOn: Newsletter || @LightOnIO || on LinkedIn || on CrunchBase || our Blog
