ANNOUNCEMENT AND CALL FOR CONTRIBUTIONS
The 2019 Conference on Mathematical Theory of Deep Neural Networks (DeepMath 2019)
Princeton Club, New York City, Oct 31-Nov 1, 2019.
Web: https://www.deepmath-conference.com/
======= Important Dates =======
Submission deadline for 1-page abstracts: June 28, 2019
Notification: TBA.
Conference: Oct 31-Nov 1, 2019.
======= Confirmed speakers =======
Anima Anandkumar (Caltech), Yasaman Bahri (Google), Minmin Chen (Google),
Michael Elad (Technion), Surya Ganguli (Stanford), Tomaso Poggio (MIT),
David Schwab (CUNY), Shai Shalev-Shwartz (Hebrew University),
Haim Sompolinsky (Hebrew University and Harvard), and Naftali Tishby (Hebrew University).
======= Workshop topic =======
Recent advances in deep neural networks (DNNs), combined with open,
easily accessible implementations, have made DNNs a powerful, versatile method
used widely in both machine learning and neuroscience. These practical
advances, however, have far outpaced a formal understanding of these networks
and their training. Recently, long-overdue theoretical results have begun to
emerge, shedding light on the properties of large, adaptive, distributed
learning architectures.
Following the success of the 2018 IAS-Princeton joint symposium on the same
topic (https://sites.google.com/site/princetondeepmath/home), the 2019 meeting
is more centrally located and broader in scope, but remains focused on rigorous
theoretical understanding of deep neural networks.
======= Call for abstracts =======
In addition to these high-profile invited speakers, we invite 1-page
non-archival abstract submissions. Abstracts will undergo double-blind review,
and accepted abstracts will be presented as posters.
To complement the wealth of conferences focused on applications, all submissions
for DeepMath 2019 must target theoretical and mechanistic understanding of the
underlying properties of neural networks.
Insights may come from any discipline, and we encourage submissions from
researchers working in computer science, engineering, mathematics, neuroscience,
physics, psychology, statistics, or related fields.
Topics may address any area of deep learning theory, including architectures,
computation, expressivity, generalization, optimization, and representations,
and may apply to any or all network types, including fully connected,
recurrent, convolutional, randomly connected, or other network topologies.
Committee Information:
Organizing committee:
Ahmed El Hady
Adam Charles
Mikio Aoi
Andrew Saxe
Joan Bruna
Michael Shvartsman
Sebastian Musslick
Stephen Keeley
Advisory committee:
Jonathan Cohen
Daniel Lee
Sebastian Seung
Ted Willke
Local committee:
NYU: Joan Bruna
Columbia: Ashok Kumar
SUNY Stony Brook: Il Memming Park
Yale: Gal Mishne
City College: David Schwab
IAS: Nadav Cohen