
Tuesday, December 17, 2013

XNV: Correlated random features for fast semi-supervised learning - implementation -

Following up on Randomization is not a dirty word, Brian McWilliams sent me the following:

Dear Igor,
Your posts over the past few years about random features and randomized algorithms have been very interesting and a great source of ideas.
I'd like to draw your attention to a paper that we had at NIPS this year which leverages random features to perform multi-view regression using canonical correlation analysis, which I hope will be of interest to your audience.
In the presence of a large amount of unlabeled data, multi-view CCA regression is a way to leverage two distinct sets of features (or views) of data to perform semi-supervised learning by penalising uncorrelated features in the CCA basis. It was shown by Kakade and Foster (2007) that if a so-called multi-view assumption is fulfilled, this leads to a nice bound on the generalisation error due to a potentially large reduction in variance. However, this assumption is rarely fulfilled in practice and so CCA regression has not really taken off. We make the observation that independently generating two sets of random features (we present results using both Nyström and Fourier features) automatically fulfils the multi-view assumption. In the paper we present a simple algorithm to perform semi-supervised learning and extensive results on publicly available datasets.
The paper is available here:
and some basic software is available here:
Cheers,
Brian
Thanks, Brian!
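
To make the two-view construction Brian describes concrete, here is a minimal sketch in Python (not the authors' code; their software is linked above). It generates two independent sets of random Fourier features in the style of Rahimi and Recht; the toy data matrix and the bandwidth sigma are placeholders:

```python
import numpy as np

def random_fourier_features(X, D=500, sigma=1.0, seed=0):
    # Approximate an RBF kernel of bandwidth sigma with D random
    # Fourier features (Rahimi & Recht): phi(x) = sqrt(2/D) * cos(W'x + b).
    rng = np.random.RandomState(seed)
    W = rng.normal(scale=1.0 / sigma, size=(X.shape[1], D))  # spectral draws
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)                # random phases
    return np.sqrt(2.0 / D) * np.cos(X.dot(W) + b)

# Hypothetical toy data; any (n x d) matrix works here.
X = np.random.randn(1000, 10)

# Two "views" of the same points: independent draws of (W, b),
# which is what automatically fulfils the multi-view assumption.
view1 = random_fourier_features(X, seed=1)
view2 = random_fourier_features(X, seed=2)
```

Nyström features (built from random landmark points) would serve equally well as the two views; the key point is that the two feature draws are independent.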

Here is the paper: Correlated random features for fast semi-supervised learning by Brian McWilliams, David Balduzzi, Joachim M. Buhmann
This paper presents Correlated Nyström Views (XNV), a fast semi-supervised algorithm for regression and classification. The algorithm draws on two main ideas. First, it generates two views consisting of computationally inexpensive random features. Second, XNV applies multiview regression using Canonical Correlation Analysis (CCA) on unlabeled data to bias the regression towards useful features. It has been shown that, if the views contain accurate estimators, CCA regression can substantially reduce variance with a minimal increase in bias. Random views are justified by recent theoretical and empirical work showing that regression with random features closely approximates kernel regression, implying that random views can be expected to contain accurate estimators. We show that XNV consistently outperforms a state-of-the-art algorithm for semi-supervised learning: substantially improving predictive performance and reducing the variability of performance on a wide variety of real-world datasets, whilst also reducing runtime by orders of magnitude.
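For intuition on the CCA step in the abstract, here is a rough continuation of the sketch above. It uses scikit-learn's CCA and a plain ridge regression in the canonical basis; XNV proper uses a canonical-ridge penalty that shrinks weakly correlated components harder, so treat this as illustrative only:

```python
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.linear_model import Ridge

# Continuing the sketch above: pretend only the last 100 points are labeled.
unlab1, unlab2 = view1[:900], view2[:900]
lab1 = view1[900:]
y = np.random.randn(100)  # placeholder labels for illustration

# CCA on the *unlabeled* pairs finds the directions along which the two
# random views are maximally correlated; those are the useful features.
cca = CCA(n_components=50, scale=False)
cca.fit(unlab1, unlab2)

# Project the labeled points from view 1 into the CCA basis and regress
# there. XNV proper replaces the uniform ridge penalty with a per-component
# weight ~ (1 - rho_i) / rho_i, where rho_i is the i-th canonical
# correlation, so uncorrelated components are penalised much harder.
Z = cca.transform(lab1)
model = Ridge(alpha=1.0).fit(Z, y)
print(model.predict(cca.transform(lab1))[:5])
```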



