Hi,
The following may interest you. (I don't recall having seen it on Nuit Blanche, and Google gives me nothing, so sorry if I've missed it.) There seems to be some big progress on understanding why deep learning works, as can be seen in this lecture entitled Signal and Image Classification by Stephane Mallat.
(Note: Yann LeCun seems to agree)
Impossibly cool keynote talk by Stephane Mallat at CVPR about the theory behind convolutional nets.
— Yann LeCun (@ylecun) June 26, 2014
Learning a good contraction to adapt a scattering transform can be seen as learning a sparse representation (an l1/l2 mixed norm; a small sketch of this norm appears below the abstract). The paper about randomly permuted MNIST mentioned at the end of the talk is:
We introduce a deep scattering network, which computes invariants with iterated contractions adapted to training data. It defines a deep convolution network model, whose contraction properties can be analyzed mathematically. A cascade of wavelet transform convolutions is computed with a multirate filter bank, and adapted with permutations. Unsupervised learning of permutations optimizes the contraction directions, by maximizing the average discriminability of training data. For Haar wavelets, it is solved with a polynomial complexity pairing algorithm. Translation and rotation invariance learning is shown with classification experiments on hand-written digits.
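To make the Haar case a bit more concrete: the abstract does not spell out the layer update, but in the deep Haar scattering construction each layer pairs coordinates according to a (learned) permutation and replaces each pair (a, b) by (a + b, |a - b|). Here is a minimal NumPy sketch under that reading, with the pairing supplied by hand rather than learned from data as in the paper:

```python
import numpy as np

def haar_scattering_layer(x, pairing):
    """One Haar-scattering-style layer: each pair (a, b) of coordinates is
    mapped to (a + b, |a - b|).  `x` has shape (n_samples, d) with d even;
    `pairing` is a list of index pairs (i, j) covering {0, ..., d-1}.
    NOTE: the pairing is assumed given here; in the paper it is learned
    by an unsupervised, polynomial-complexity matching step."""
    sums  = np.stack([x[:, i] + x[:, j] for i, j in pairing], axis=1)
    diffs = np.stack([np.abs(x[:, i] - x[:, j]) for i, j in pairing], axis=1)
    # Output keeps the same total dimension: d/2 pair sums (averaging,
    # hence invariance) followed by d/2 absolute differences (contraction).
    return np.concatenate([sums, diffs], axis=1)

# Toy usage: 4-dimensional signals paired as (0,1) and (2,3).
x = np.array([[1.0, 2.0, 3.0, 4.0],
              [0.0, 1.0, 1.0, 0.0]])
print(haar_scattering_layer(x, pairing=[(0, 1), (2, 3)]))
```

Stacking several such layers, each with its own pairing, gives the cascade the abstract refers to; learning which coordinates to pair is exactly the part described there as a pairing problem solved with polynomial complexity.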
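And on the l1/l2 mixed norm mentioned above: this usually denotes a group-sparsity norm, an l2 norm inside each group of coefficients and an l1 norm (a plain sum) across groups. A minimal sketch, with a hypothetical group structure chosen only for illustration:

```python
import numpy as np

def mixed_l1_l2_norm(coeffs, groups):
    """Mixed l1/l2 (group) norm: l2 within each group, l1 (sum) across groups.
    `coeffs` is a 1-D array; `groups` is a list of index arrays partitioning it.
    The group structure here is illustrative, not taken from the talk."""
    return sum(np.linalg.norm(coeffs[g]) for g in groups)

# Toy usage: 6 coefficients split into 3 groups of 2.
c = np.array([0.0, 0.0, 3.0, 4.0, 0.0, 1.0])
groups = [np.array([0, 1]), np.array([2, 3]), np.array([4, 5])]
print(mixed_l1_l2_norm(c, groups))  # 0 + 5 + 1 = 6.0
```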
Wow, what an amazing talk. It makes me connect it to Adcock and Hansen's notion of asymptotic sparsity, as well as to tree-structured sparsity.