A few people have already written some great insights into what happened there. Here are a few things that struck me:
- With the astounding success of Deep Learning algorithms, other scientific communities have essentially yielded to these tools in a matter of two or three years. I felt that the main question at the meeting was: which field will be next? Since the Machine Learning/Deep Learning community was able to elevate itself thanks to high quality datasets, from MNIST all the way to ImageNet, it is only natural to watch where this is going with the release of a few datasets during the conference, including Universe from OpenAI. Control systems and simulators (forward problems in science) seem to be the next target.
- The recent developments in deep learning have come in large part because implementations of most algorithms have been made available by their respective authors. This is new, and probably the reason why older findings have not resonated with the community (and the reason for the rift between some figures in the field). A paper is just a paper; it becomes an idea worth building on when you don't have to spend all your time re-coding it.
- The touching tribute to David MacKay brought home that we are not as unidimensional as we think we are.
- Certain sub-communities within NIPS still do not seem to have high quality datasets. I fear they will remain in the back seat for a little while longer. As in compressive sensing before phase transitions were found, any published paper is really just the meeting of a random dataset with a particular algorithm, with no sure way to figure out how that algorithm fits in with the rest. High quality datasets, much like phase transitions, act as acid tests.
- I am always dumbfounded to find out that people read Nuit Blanche. I know the stats, but that doesn't take away the genuine element of surprise. Wow, and thank you!
- Energy issues were bubbling up in several areas, from running large hyperparameter searches or training learning-to-learn models to extracting information from the brain.
- The meeting was big. Upon coming back home, I had a few "What? You were there too?" moments.
- I made a bet with someone that it will take more than 20 years to come up with a theoretical understanding of some of the recipes currently used in ML/DL. It took longer than that for L_1 and sparsity.
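For readers outside that world, here is a minimal, purely illustrative sketch (not from the post) of what the L_1-and-sparsity story refers to: recovering a sparse vector from a few random linear measurements by L_1 minimization (basis pursuit), solved here as a linear program with scipy.optimize.linprog. The dimensions are chosen so that recovery should be essentially exact, which is the kind of behavior the theory eventually explained.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 100, 40, 5                      # signal length, measurements, sparsity

# Ground-truth k-sparse signal and random Gaussian measurement matrix.
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
b = A @ x_true

# Basis pursuit, min ||x||_1 s.t. A x = b, written as a linear program
# over z = [x, t]: minimize sum(t) subject to -t <= x <= t and A x = b.
c = np.concatenate([np.zeros(n), np.ones(n)])
I = np.eye(n)
A_ub = np.block([[I, -I], [-I, -I]])      # encodes x - t <= 0 and -x - t <= 0
b_ub = np.zeros(2 * n)
A_eq = np.hstack([A, np.zeros((m, n))])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b,
              bounds=[(None, None)] * (2 * n))
x_hat = res.x[:n]
print("relative recovery error:",
      np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```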
Here are some more insightful take-aways. Tomasz pointed out some of the trends:
NIPS 2016 trends: learning-to-learn, GANification of X, Reinforcement learning for 3D navigation, RNNs, and Creating/Selling AI companies — Tomasz Malisiewicz (@quantombone), December 10, 2016
Jack Clark's newsletter before, during and after NIPS:
- 12/12/2016 - Import AI: Technology versus globalization, AI snake oil, and more computationally efficient resnets
- 12/08/2016 - Import AI: NIPSAGEDDON special edition, with DeepMind, Uber, and Visual Sentinels
- 12/05/2016 - Import AI: OpenAI reveals its Universe, DeepMind figures out catastrophic forgetting, and beware the 'Sabbath Mode'
See also Paul Mineiro's Machined Learnings: NIPS 2016 Reflections, as well as the writeup by Jeremy Karnowski and Ross Fadely at Insight Artificial Intelligence.
During the meeting, the Post-facto Fake News Challenge was launched on Twitter.