
While reading Learning Deep Architectures for AI by Yoshua Bengio, I noted the abstract, which reads:

"Theoretical results suggest that in order to learn the kind of complicated functions that can represent high-level abstractions (e.g. in vision, language, and other AI-level tasks), one may need deep architectures. Deep architectures are composed of multiple levels of non-linear operations, such as in neural nets with many hidden layers or in complicated propositional formulae re-using many sub-formulae. Searching the parameter space of deep architectures is a difficult task, but learning algorithms such as those for Deep Belief Networks have recently been proposed to tackle this problem with notable success, beating the state-of-the-art in certain areas. This paper discusses the motivations and principles regarding learning algorithms for deep architectures, in particular those exploiting as building blocks unsupervised learning of single-layer models such as Restricted Boltzmann Machines, used to construct deeper models such as Deep Belief Networks."

I also noted a nice discussion of compressive sensing in Section 7.1, entitled "Sparse Representations in Auto-Encoders and RBMs".
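To make the abstract's building-block idea concrete, here is a minimal sketch (not from the paper) of the kind of single-layer model it mentions: a binary Restricted Boltzmann Machine trained with one step of contrastive divergence (CD-1). The class name, toy data, and hyperparameters are my own illustrative choices; stacking such layers, each trained on the previous layer's hidden activations, is the basic recipe behind Deep Belief Networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Minimal binary RBM trained with CD-1 (illustrative sketch)."""

    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = rng.normal(0.0, 0.01, size=(n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)  # visible biases
        self.b_h = np.zeros(n_hidden)   # hidden biases
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b_v)

    def cd1_step(self, v0):
        # Positive phase: hidden probabilities given the data.
        ph0 = self.hidden_probs(v0)
        h0 = (rng.random(ph0.shape) < ph0).astype(float)
        # Negative phase: one Gibbs step back to the visible layer and up again.
        pv1 = self.visible_probs(h0)
        ph1 = self.hidden_probs(pv1)
        # Approximate gradient updates, averaged over the batch.
        n = v0.shape[0]
        self.W += self.lr * (v0.T @ ph0 - pv1.T @ ph1) / n
        self.b_v += self.lr * (v0 - pv1).mean(axis=0)
        self.b_h += self.lr * (ph0 - ph1).mean(axis=0)
        return float(np.mean((v0 - pv1) ** 2))  # reconstruction error

# Toy data: two repeated binary patterns the RBM can easily model.
data = np.array([[1, 1, 0, 0], [0, 0, 1, 1]] * 25, dtype=float)
rbm = RBM(n_visible=4, n_hidden=2)
errors = [rbm.cd1_step(data) for _ in range(200)]
print(f"reconstruction error: {errors[0]:.3f} -> {errors[-1]:.3f}")
```

Reconstruction error is only a rough training diagnostic (CD-1 does not follow the true log-likelihood gradient), but it is the quantity most commonly monitored in practice.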
Credit: NASA / JPL / SSI / colorization by Gordan Ugarkovic. Tethys and Titan: Cassini captured a grayscale animation of Tethys crossing in front of Titan on October 17, 2009. In this version, Gordan Ugarkovic has colored in Titan based on its color as seen in previous Cassini photos.