Friday, December 21, 2012

Random Projections for Support Vector Machines

So randomization is also helpful for classical ML classification. From the abstract:




Let $\mathbf{X} \in \mathbb{R}^{n \times d}$ be a data matrix of rank $\rho$, representing $n$ points in $\mathbb{R}^d$. The linear support vector machine constructs a hyperplane separator that maximizes the 1-norm soft margin. We develop a new oblivious dimension reduction technique which is precomputed and can be applied to any input matrix $\mathbf{X}$. We prove that, with high probability, the margin and minimum enclosing ball in the feature space are preserved to within $\epsilon$-relative error, ensuring comparable generalization as in the original space. We present extensive experiments with real and synthetic data to support our theory.
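To give a feel for the idea, here is a minimal sketch in Python (not the authors' code): draw an oblivious Gaussian random projection independently of the data, apply it to $\mathbf{X}$, and train a linear SVM in the reduced space. The synthetic dataset, the target dimension r, and the scikit-learn estimators are illustrative choices of mine; the paper gives the precise bound on how large r must be (roughly a function of the rank $\rho$ and $\epsilon$) for the margin to be preserved.

```python
# Illustrative sketch: oblivious random projection + linear SVM.
# The projection matrix is drawn without looking at X, so it can be
# precomputed and reused for any input matrix of the same dimension.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.random_projection import GaussianRandomProjection
from sklearn.svm import LinearSVC

# Synthetic data: n points in R^d (hypothetical sizes for the demo)
X, y = make_classification(n_samples=2000, n_features=1000,
                           n_informative=50, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline: linear SVM in the original d-dimensional space
svm_full = LinearSVC(C=1.0, max_iter=10000).fit(X_train, y_train)
print("accuracy (original space): ", svm_full.score(X_test, y_test))

# Oblivious dimension reduction to r dimensions; r is a free parameter
# here, chosen for illustration rather than from the paper's bound.
r = 200
proj = GaussianRandomProjection(n_components=r, random_state=0)
X_train_r = proj.fit_transform(X_train)
X_test_r = proj.transform(X_test)

# Linear SVM in the projected space; accuracy should stay close to the
# baseline when r is large enough.
svm_proj = LinearSVC(C=1.0, max_iter=10000).fit(X_train_r, y_train)
print("accuracy (projected space):", svm_proj.score(X_test_r, y_test))
```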


A few references have been featured here under the RandNLA tag. Check also the Randomized Numerical Linear Algebra page. The thesis of Saurabh Paul, entitled Random Projections for Support Vector Machines, is here.




Join the CompressiveSensing subreddit or the Google+ Community and post there!
Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle, and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.
