At ICLR, I noted the figures below, which tell the story of the need for certain operations in neural networks as a function of their depth:
Designing Neural Network Architectures using Reinforcement Learning by Bowen Baker, Otkrist Gupta, Nikhil Naik, Ramesh Raskar
At present, designing convolutional neural network (CNN) architectures requires both human expertise and labor. New architectures are handcrafted by careful experimentation or modified from a handful of existing networks. We introduce MetaQNN, a meta-modeling algorithm based on reinforcement learning to automatically generate high-performing CNN architectures for a given learning task. The learning agent is trained to sequentially choose CNN layers using Q-learning with an ϵ-greedy exploration strategy and experience replay. The agent explores a large but finite space of possible architectures and iteratively discovers designs with improved performance on the learning task. On image classification benchmarks, the agent-designed networks (consisting of only standard convolution, pooling, and fully-connected layers) beat existing networks designed with the same layer types and are competitive against the state-of-the-art methods that use more complex layer types. We also outperform existing meta-modeling approaches for network design on image classification tasks.
Models found by MetaQNN are located here: https://bowenbaker.github.io/metaqnn/
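For a rough sense of the mechanics described in the abstract, here is a minimal sketch of an epsilon-greedy Q-learning loop that sequentially picks layer types and updates its Q-values from the trained network's validation accuracy. The tiny action space, the state encoded as just the current depth, and all names are illustrative assumptions, not the paper's exact state/action formulation.

```python
import random
from collections import defaultdict

# Sketch of MetaQNN-style search: an epsilon-greedy Q-learning agent picks
# CNN layer types one at a time; the sampled architecture is trained and its
# validation accuracy serves as the terminal reward. Illustrative only.

ACTIONS = ["conv3x3", "conv5x5", "maxpool", "fc", "terminate"]
MAX_DEPTH = 6

Q = defaultdict(lambda: {a: 0.5 for a in ACTIONS})  # optimistic initialization
alpha, epsilon = 0.1, 1.0                           # learning rate, exploration rate

def sample_architecture():
    """Roll out one architecture by epsilon-greedy layer selection."""
    depth, layers = 0, []
    while depth < MAX_DEPTH:
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(Q[depth], key=Q[depth].get)
        if action == "terminate":
            break
        layers.append(action)
        depth += 1
    return layers

def update_q(layers, reward):
    """Q-learning backup along the sampled trajectory; the accuracy reward is
    given only at the final transition (experience replay would re-run this
    update over stored architecture/reward pairs)."""
    for depth, action in enumerate(layers):
        is_last = depth == len(layers) - 1
        r = reward if is_last else 0.0
        next_best = 0.0 if is_last else max(Q[depth + 1].values())
        Q[depth][action] += alpha * (r + next_best - Q[depth][action])

def train_and_evaluate(layers):
    # Placeholder: in MetaQNN this trains the sampled CNN and returns its
    # validation accuracy; here a random reward stands in for illustration.
    return random.random()

for episode in range(100):
    arch = sample_architecture()
    update_q(arch, train_and_evaluate(arch))
    epsilon = max(0.1, epsilon - 0.01)  # anneal exploration over episodes
```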
It would be interesting to see this with ResNets too.
There is also an algorithm tsunami, not just a data one!
Another possibility would be to do computational self-assembly of neural nets.
http://www.exa.unicen.edu.ar/escuelapav/cursos/bio/l21.pdf