
Friday, June 16, 2017

FreezeOut: Accelerate Training by Progressively Freezing Layers - implementation -

What initially looks like playing with hyperparameters brings new life to a somewhat older approach. From Alex's tweet:

The early layers of a deep neural net have the fewest parameters, but take up the most computation. In this extended abstract, we propose to only train the hidden layers for a set portion of the training run, freezing them out one-by-one and excluding them from the backward pass. We empirically demonstrate that FreezeOut yields savings of up to 20% wall-clock time during training with 3% loss in accuracy for DenseNets on CIFAR.


The DenseNet implementation is at: http://github.com/bamos/densenet.pytorch
while the FreezeOut implementation is here: http://github.com/ajbrock/FreezeOut
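
The repositories above contain the full method; below is only a minimal PyTorch sketch of the core idea described in the abstract, not the authors' implementation. The FreezeOutScheduler class, the t_first parameter, and the linear spacing of freeze times are illustrative assumptions, and the per-layer learning-rate scheduling used in the paper is omitted.

import torch
import torch.nn as nn

# Minimal sketch of the FreezeOut idea (illustrative, not the authors' code):
# each layer i is assigned a fraction of the training run after which it is
# frozen (requires_grad turned off), so it drops out of the backward pass.
class FreezeOutScheduler:
    def __init__(self, layers, total_iters, t_first=0.5):
        # layers: list of nn.Module, ordered from earliest to latest.
        # t_first: fraction of training after which the first layer freezes;
        # later layers freeze at fractions spaced linearly up to 1.0.
        # (Both names and the linear spacing are assumptions for this sketch.)
        self.layers = layers
        n = len(layers)
        self.freeze_iters = [
            int(total_iters * (t_first + (1.0 - t_first) * i / max(n - 1, 1)))
            for i in range(n)
        ]
        self.frozen = [False] * n

    def step(self, it):
        # Call once per training iteration, before the backward pass.
        for i, layer in enumerate(self.layers):
            if not self.frozen[i] and it >= self.freeze_iters[i]:
                for p in layer.parameters():
                    p.requires_grad_(False)
                self.frozen[i] = True

# Usage sketch with a toy stack of layers:
layers = [nn.Linear(32, 32) for _ in range(4)]
model = nn.Sequential(*layers, nn.Linear(32, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
sched = FreezeOutScheduler(layers, total_iters=1000)

for it in range(1000):
    sched.step(it)
    x, y = torch.randn(64, 32), torch.randint(0, 10, (64,))
    loss = nn.functional.cross_entropy(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

Because the earliest layers are frozen first, gradients no longer need to be propagated past the last trainable layer, which is where the wall-clock savings described in the abstract come from.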




Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there!
