
Saturday, December 29, 2007

Compressed Sensing: Random Features for Large-Scale Kernel Machines


In a previous post, I mentioned the potential connection between compressed sensing and the visual cortex through a model that uses random projections. At the latest NIPS conference, it looks like we are beginning to see some convergence in Random Features as an Alternative to the Kernel Trick by Ali Rahimi and Benjamin Recht. The Matlab code is available on the main webpage. The abstract reads:

To accelerate the training of kernel machines, we propose to map the input data to a randomized low-dimensional feature space and then apply existing fast linear methods. The features are designed so that the inner products of the transformed data are approximately equal to those in the feature space of a user specified shift invariant kernel. We explore two sets of random features, provide convergence bounds on their ability to approximate various radial basis kernels, and show that in large-scale classification and regression tasks linear machine learning algorithms applied to these features outperform state-of-the-art large-scale kernel machines.
Thanks, Jort. So the classification machinery itself is simplified through the use of random projections. This is thought-provoking.
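
To make the idea concrete, here is a minimal sketch (not the authors' Matlab code; the function name and parameters are my own) of one of the random feature maps described in the abstract: random Fourier features whose inner products approximate a Gaussian RBF kernel.

```python
import numpy as np

def random_fourier_features(X, D=500, gamma=1.0, seed=None):
    """Map X (n x d) to D random features so that z(x).z(y) approximates
    the shift-invariant RBF kernel exp(-gamma * ||x - y||^2)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Frequencies drawn from the Fourier transform of the Gaussian kernel
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, D))
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

# Quick check: the randomized inner products track the exact RBF kernel
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
Z = random_fourier_features(X, D=2000, gamma=0.5, seed=1)
K_approx = Z @ Z.T
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K_exact = np.exp(-0.5 * sq_dists)
print("max abs error:", np.abs(K_approx - K_exact).max())
```

Once the data are mapped this way, any fast linear classifier or regressor can be trained on Z instead of working with the full n-by-n kernel matrix.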
