In the ROKS2013 Extended Abstracts list and the SAHD Program Agenda, several presentations mention kernels as a means of learning or of performing nonlinear compressive sensing. At SPARS, we have Kernel Compressive Sensing by Farhad Pourkamali Anaraki. I also note that, at ROKS, the deep learning trend is reaching kernels as well. See for instance:
- Deep-er Kernels, John Shawe-Taylor (University College London) Video [abstract]
- Deep Support Vector Machines for Regression Problems, M.A. Wiering, M. Schutten, A. Millea, A. Meijster and L.R.B. Schomaker, Institute of Artif. Intell. and Cognitive Eng., Univ. of Groningen, Video [abstract]
A small follow-up to yesterday's note: as I pointed out back in 2007 (Compressed Sensing in the Primary Visual Cortex ?), one could read in the supplemental material to A feedforward architecture accounts for rapid categorization by Thomas Serre, Aude Oliva, and Tomaso Poggio that:
"..simple units receive only a subset of the possible afferent units (selected at random) such that nSk < NSk × NSk..."
which looks like nothing less than the DropConnect model, an instance of the dropout method that has gained substantial recognition lately (see Improving neural networks by preventing co-adaptation of feature detectors by Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever and Ruslan R. Salakhutdinov). I mentioned it this past Sunday in Sunday Morning Insight: Faster Than a Blink of an Eye, and it is featured in Regularization of Neural Networks using DropConnect by Li Wan, Matthew Zeiler, Sixin Zhang, Yann LeCun and Rob Fergus (the DropConnect CUDA code and page are here). A sketch of the connection-masking idea follows below.
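To make the parallel concrete, here is a minimal NumPy sketch (my own illustration, not the authors' code) contrasting dropout, which randomly zeroes whole activations, with DropConnect, which randomly zeroes individual weights, so that each unit effectively receives only a random subset of its possible afferent connections, much like the Serre et al. wiring quoted above:

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout_layer(x, W, p=0.5):
    """Dropout (Hinton et al.): randomly zero whole hidden activations."""
    h = np.maximum(W @ x, 0.0)         # ReLU hidden activations
    mask = rng.random(h.shape) > p     # keep each unit with probability 1 - p
    return h * mask / (1.0 - p)        # rescale to preserve the expectation

def dropconnect_layer(x, W, p=0.5):
    """DropConnect (Wan et al.): randomly zero individual weights, i.e.
    each unit sees only a random subset of its afferent connections."""
    mask = rng.random(W.shape) > p     # keep each connection with probability 1 - p
    return np.maximum((W * mask) @ x, 0.0) / (1.0 - p)

x = rng.standard_normal(16)            # toy input
W = rng.standard_normal((8, 16))       # toy weight matrix
print(dropout_layer(x, W))
print(dropconnect_layer(x, W))
```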
And finally, here is another presentation at ROKS that triggered my interest, as it sits at the border between sensing and machine learning (see Sunday Morning Insight: A Quick Panorama of Sensing from Direct Imaging to Machine Learning): On a link between kernel mean maps and Fraunhofer diffraction, with an application to super-resolution beyond the diffraction limit by Stefan Harmeling, Michael Hirsch, Bernhard Schölkopf
We establish a link between Fourier optics and a recent construction from the machine learning community termed the kernel mean map. Using the Fraunhofer approximation, it identifies the kernel with the squared Fourier transform of the aperture. This allows us to use results about the invertibility of the kernel mean map to provide a statement about the invertibility of Fraunhofer diffraction, showing that imaging processes with arbitrarily small apertures can in principle be invertible, i.e., do not lose information, provided the objects to be imaged satisfy a generic condition. A real world experiment shows that we can super-resolve beyond the Rayleigh limit.
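If I read the abstract correctly, the identification goes as follows: under the Fraunhofer approximation, the incoherent point spread function of the imaging system is the squared modulus of the Fourier transform of the aperture, and it is this point spread function that plays the role of the kernel. A rough numerical sketch of that correspondence (my own toy setup, not the paper's code):

```python
import numpy as np

# 1-D toy aperture: an open slit of half-width a on a grid of N points.
N, a = 1024, 20
x = np.arange(N) - N // 2
aperture = (np.abs(x) <= a).astype(float)

# Fraunhofer approximation: the far-field amplitude is the Fourier transform
# of the aperture; incoherent imaging takes its squared modulus.
amplitude = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(aperture)))
psf = np.abs(amplitude) ** 2           # point spread function, i.e. the "kernel"
psf /= psf.sum()

# Imaging an object then amounts to convolving it with this kernel; for a
# small aperture the two nearby point sources below blur into a single lobe,
# which is what super-resolution would have to undo.
obj = np.zeros(N)
obj[N // 2 - 5] = obj[N // 2 + 5] = 1.0
image = np.real(np.fft.ifft(np.fft.fft(obj) * np.fft.fft(np.fft.ifftshift(psf))))
print(image.argmax(), image.max())
```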
From footnote 5 of the paper:
5. This provides a physical interpretation of the kernel as the point response of an optical system. This kind of interpretation can be beneficial also for other systems, and indeed it is suggested by the view of kernels as Green's functions [16, 24]: the kernel k can be viewed as the Green's function of P*P, where P is a regularization operator such that the RKHS norm can be written as ‖f‖_k = ‖Pf‖. For instance, the Gaussian kernel corresponds to a regularization operator which computes an infinite series of derivatives of f.
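For completeness, here is my reading of that Green's function statement spelled out (following the regularization-operator literature of Smola, Schölkopf and Müller; the Gaussian-case expansion below is the one I recall from that work, so take it as a sketch rather than a quote from the paper):

```latex
% The kernel as Green's function: applying P*P to k(x, .) yields a delta,
% and the reproducing property follows from the RKHS norm ||f||_k = ||Pf||:
(P^{*}P)\, k(x, \cdot) = \delta_x, \qquad
\langle f, k(x, \cdot) \rangle_k = \langle Pf, P\, k(x, \cdot) \rangle
  = \langle f, (P^{*}P)\, k(x, \cdot) \rangle = f(x).

% For the Gaussian kernel k(x,y) = \exp(-\|x-y\|^2 / (2\sigma^2)), the
% regularizer penalizes an infinite series of derivatives of f:
\|Pf\|^2 = \int \sum_{m=0}^{\infty} \frac{\sigma^{2m}}{m!\, 2^{m}}
           \left( \hat{O}^{m} f(x) \right)^2 dx,
\qquad \hat{O}^{2m} = \Delta^{m}, \quad \hat{O}^{2m+1} = \nabla \Delta^{m}.
```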
I can't wait to see how Random Kitchen Sinks could fit in these deeper architectures and how they could map to physical systems.
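As a reminder, Random Kitchen Sinks (Rahimi and Recht) approximate a shift-invariant kernel with an explicit random feature map, exactly the kind of building block one could imagine stacking into deeper architectures. A minimal sketch for the Gaussian kernel (parameter names are mine):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_fourier_features(X, D=500, sigma=1.0):
    """Random Kitchen Sinks: z(x) such that z(x) . z(y) approximates
    the Gaussian kernel exp(-||x - y||^2 / (2 sigma^2))."""
    d = X.shape[1]
    W = rng.normal(scale=1.0 / sigma, size=(d, D))  # frequencies from the kernel's spectral density
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)       # random phases
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

X = rng.standard_normal((5, 3))
Z = random_fourier_features(X, D=2000)
approx = Z @ Z.T                                     # approximate Gram matrix
exact = np.exp(-((X[:, None] - X[None, :]) ** 2).sum(-1) / 2.0)  # sigma = 1
print(np.abs(approx - exact).max())                  # shrinks as D grows
```

Since the features are explicit, nothing in principle prevents feeding Z into another random feature layer, which is one way such maps could slot into the deeper architectures above.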