This must be the second shallow network featured here recently, and both use some sort of random projection technique for their measurements: Enhanced Image Classification With a Fast-Learning Shallow Convolutional Neural Network by Mark D. McDonnell, Tony Vladusich
We present a neural network architecture and training method designed to enable very rapid training and low implementation complexity. Due to its training speed and very few tunable parameters, the method has strong potential for embedded hardware applications requiring frequent retraining or online training. The approach is characterized by (a) convolutional filters based on biologically inspired visual processing filters, (b) randomly-valued classifier-stage input weights, (c) use of least squares regression to train the classifier output weights in a single batch, and (d) linear classifier-stage output units. We demonstrate the efficacy of the method as an image classifier, obtaining state-of-the-art results on the MNIST (0.37% error) and NORB-small (2.2%) image classification databases, with very fast training times compared to standard deep network approaches. The network's performance on the Google Street View House Number (SVHN) (4%) database is also competitive with state-of-the-art methods.
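The classifier stage (b)-(d) described in the abstract is an extreme-learning-machine-style readout: fixed random input weights, a single batch of least-squares regression for the output weights, and linear output units. Here is a minimal numpy sketch of that idea, assuming ReLU hidden activations and a small ridge term (both assumptions), and omitting the convolutional front-end (a) by using random toy data in place of image features:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data standing in for flattened image features (not MNIST).
n_samples, n_features, n_hidden, n_classes = 200, 64, 256, 10
X = rng.standard_normal((n_samples, n_features))
y = rng.integers(0, n_classes, n_samples)

# (b) Randomly-valued classifier-stage input weights: drawn once, never trained.
W_in = rng.standard_normal((n_features, n_hidden)) / np.sqrt(n_features)
H = np.maximum(X @ W_in, 0.0)  # hidden activations (ReLU is an assumption here)

# (d) Linear output units trained against one-hot targets.
T = np.eye(n_classes)[y]

# (c) Output weights from a single batch of (ridge-regularized) least squares.
lam = 1e-3
W_out = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ T)

# Classification: linear outputs, argmax over classes.
pred = np.argmax(H @ W_out, axis=1)
train_acc = (pred == y).mean()
```

Because the only learned parameters come from one linear solve, "training" is a single batch operation, which is what makes retraining cheap enough for the embedded/online settings the authors mention.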