Initially, we wondered why and how a linear sampling mechanism could be twisted so that simple solvers could reconstruct an initial set of data. Then we looked at more complex reconstruction solvers. More recently, we heard that deep learning, using a nonlinear sampling mechanism, could in fact do very well at reconstructing initial sets of data (see Sunday Morning Insight: A Quick Panorama of Sensing from Direct Imaging to Machine Learning).
If you want to have a real sense of the great convergence that is currently happening in Science and Engineering, you really want to pay attention to the folks who have been in compressive sensing and advanced matrix factorization for a little while and are trying to make sense of neural network architectures. Recently, Rich Baraniuk told us about A Probabilistic Theory of Deep Learning; today we have people like Guillermo Sapiro and Larry Carin looking into these architectures. This is significant:
Deep Neural Networks with Random Gaussian Weights: A Universal Classification Strategy?
Three important properties of a classification machinery are: (i) the system preserves the important information of the input data; (ii) the training examples convey information about unseen data; and (iii) the system is able to treat differently points from different classes. In this work we show that these fundamental properties are inherited by the architecture of deep neural networks. We formally prove that these networks with random Gaussian weights perform a distance-preserving embedding of the data, with a special treatment for in-class and out-of-class data. Similar points at the input of the network are likely to have a similar output. The theoretical analysis of deep networks here presented exploits tools used in the compressed sensing and dictionary learning literature, thereby making a formal connection between these important topics. The derived results allow drawing conclusions on the metric learning properties of the network and their relation to its structure; and provide bounds on the required size of the training set such that the training examples would represent faithfully the unseen data. The results are validated with state-of-the-art trained networks.
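The distance-preservation claim is easy to probe numerically. Below is a minimal sketch (our illustration, not the authors' code) that pushes a pair of nearby points and a distant point through a few layers of i.i.d. Gaussian weights followed by a ReLU, and compares input and output Euclidean distances; the depth, the layer widths and the weight scaling are arbitrary choices made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_relu_layer(in_dim, out_dim):
    """One layer of i.i.d. Gaussian weights (scaled so that Euclidean norms
    are roughly preserved through the ReLU) followed by a ReLU nonlinearity."""
    w = rng.normal(0.0, np.sqrt(2.0 / out_dim), size=(out_dim, in_dim))
    return lambda x: np.maximum(w @ x, 0.0)

d = 256
x_a = rng.normal(size=d)
x_b = x_a + 0.05 * rng.normal(size=d)   # a nearby ("in-class") point
x_c = rng.normal(size=d)                # a distant ("out-of-class") point
points = [x_a, x_b, x_c]

def pairwise(pts):
    # distance to the nearby point vs. distance to the distant point
    return (np.linalg.norm(pts[0] - pts[1]), np.linalg.norm(pts[0] - pts[2]))

print("input distances :", pairwise(points))

# The same random layers are applied to all three points.
dims = [d, 512, 512, 512]
for in_dim, out_dim in zip(dims[:-1], dims[1:]):
    layer = random_relu_layer(in_dim, out_dim)
    points = [layer(p) for p in points]

print("output distances:", pairwise(points))
```

Running this a few times, the near/far ordering of distances survives the random layers, which is the kind of behavior the embedding result formalizes.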
A Generative Model for Deep Convolutional Learning
A generative model is developed for deep (multi-layered) convolutional dictionary learning. A novel probabilistic pooling operation is integrated into the deep model, yielding efficient bottom-up (pretraining) and top-down (refinement) probabilistic learning. Experimental results demonstrate powerful capabilities of the model to learn multi-layer features from images, and excellent classification results are obtained on the MNIST and Caltech 101 datasets.
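For a flavor of what a probabilistic pooling operation can look like, here is a small sketch of one simple variant (sampling the pooled value within each window with probability proportional to the activations, in the spirit of stochastic pooling). It is an illustration only, not necessarily the exact operation integrated into the paper's generative model.

```python
import numpy as np

rng = np.random.default_rng(0)

def probabilistic_pool(feature_map, window=2):
    """Pool non-overlapping windows by sampling one activation per window,
    with probability proportional to the (non-negative) activations."""
    h, w = feature_map.shape
    out = np.zeros((h // window, w // window))
    for i in range(0, h - window + 1, window):
        for j in range(0, w - window + 1, window):
            block = feature_map[i:i + window, j:j + window].ravel()
            if block.sum() > 0:
                probs = block / block.sum()
                out[i // window, j // window] = rng.choice(block, p=probs)
    return out

# Example: pool a 4x4 non-negative feature map down to 2x2.
fmap = np.maximum(rng.normal(size=(4, 4)), 0.0)
print(probabilistic_pool(fmap))
```

Because the pooled value is sampled rather than taken as a hard max, the operation has a well-defined probabilistic interpretation, which is what makes bottom-up and top-down probabilistic learning tractable in a model of this kind.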
Credit Photo: Cropped and processed single frame NAVCAM image of Comet 67P/C-G taken on 15 April 2015 from a distance of 165 km to the comet centre. Credits: ESA/Rosetta/NAVCAM – CC BY-SA IGO 3.0
Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.