
Monday, August 29, 2016

Densely Connected Convolutional Networks

If neural networks are viewed as the iterations of a solver or of a dynamical system, they seldom use more than the previous iterate at each step. There is an ongoing line of research that seems to be getting good results (on CIFAR, for instance) by "remembering" some of these past iterates. From the paper:

Many recent publications address this or related problems. ResNets (He et al., 2015b) and Highway Networks (Srivastava et al., 2015) bypass signal from one layer to the next via identity connections. Stochastic Depth (Huang et al., 2016) shortens ResNets by randomly dropping layers during training to allow better information and gradient flow. Recently, Larsson et al. (2016) introduced FractalNets, which repeatedly combine several parallel layer sequences with different number of convolutional blocks to obtain a large nominal depth, while maintaining many short paths in the network. Although these different approaches vary in network topology and training procedure, we observe a key characteristic shared by all of them: they create short paths from earlier layers near the input to those later layers near the output. In this paper we propose an architecture that distills this insight into a simple and clean connectivity pattern. The idea is straight-forward, yet compelling: to ensure maximum information flow between layers in the network, we connect all layers directly with each other. To preserve the feed-forward nature, each layer obtains additional inputs from all preceding layers and passes on its own feature-maps to all subsequent layers.
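
In equation form, this means the ℓ-th layer receives the feature-maps of all preceding layers as its input: x_ℓ = H_ℓ([x_0, x_1, ..., x_{ℓ-1}]), where [x_0, x_1, ..., x_{ℓ-1}] denotes the concatenation of the feature-maps produced by layers 0, ..., ℓ-1 and H_ℓ is the layer's composite transform (batch normalization, ReLU and convolution in the paper).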

Densely Connected Convolutional Networks by Gao Huang, Zhuang Liu, Kilian Q. Weinberger
Recent work has shown that convolutional networks can be substantially deeper, more accurate and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper we embrace this observation and introduce the Dense Convolutional Network (DenseNet), where each layer is directly connected to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections, one between each layer and its subsequent layer (treating the input as layer 0), our network has L(L+1)/2 direct connections. For each layer, the feature maps of all preceding layers are treated as separate inputs whereas its own feature maps are passed on as inputs to all subsequent layers. Our proposed connectivity pattern has several compelling advantages: it alleviates the vanishing gradient problem and strengthens feature propagation; despite the increase in connections, it encourages feature reuse and leads to a substantial reduction of parameters; its models tend to generalize surprisingly well. We evaluate our proposed architecture on five highly competitive object recognition benchmark tasks. The DenseNet obtains significant improvements over the state-of-the-art on all five of them (e.g., yielding 3.74% test error on CIFAR-10, 19.25% on CIFAR-100 and 1.59% on SVHN).
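
Below is a minimal, illustrative sketch of a dense block in PyTorch. It is not the official implementation (the code linked below is written in Torch/Lua), and the class and parameter names here are mine, but it shows the concatenation pattern and the channel bookkeeping (the "growth rate") described in the abstract:

import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Each layer receives the concatenation of all preceding feature maps
    and contributes growth_rate new feature maps of its own."""
    def __init__(self, in_channels, growth_rate, num_layers):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(num_layers):
            channels = in_channels + i * growth_rate  # inputs seen by layer i
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, growth_rate, kernel_size=3,
                          padding=1, bias=False),
            ))

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            # Concatenate every preceding feature map along the channel axis.
            out = layer(torch.cat(features, dim=1))
            features.append(out)
        return torch.cat(features, dim=1)

# A 12-layer block with growth rate 12 maps 16 input channels to
# 16 + 12 * 12 = 160 output channels on, e.g., 32x32 CIFAR images.
block = DenseBlock(in_channels=16, growth_rate=12, num_layers=12)
out = block(torch.randn(1, 16, 32, 32))  # shape: (1, 160, 32, 32)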


An implementation is available on GitHub: https://github.com/liuzhuang13/DenseNet

See also the comments on Reddit.
 
 
