Recently, Vikas Sindhwani gave a talk on Random Embeddings, Matrix-valued Kernels and Deep Learning. As a follow-up, I mentioned the book The Discrepancy Method by Bernard Chazelle (thanks to Sebastien), which should allow us to make more appropriate choices in the random features space when approximating kernel machines.

Here is the paper on arXiv that Vikas mentioned in his talk (it had been mentioned here earlier as well): Quasi-Monte Carlo Feature Maps for Shift-Invariant Kernels by Jiyan Yang, Vikas Sindhwani, Haim Avron, and Michael Mahoney

We consider the problem of improving the efficiency of randomized Fourier feature maps to accelerate training and testing speed of kernel methods on large datasets. These approximate feature maps arise as Monte Carlo approximations to integral representations of shift-invariant kernel functions (e.g., Gaussian kernel). In this paper, we propose to use Quasi-Monte Carlo (QMC) approximations instead, where the relevant integrands are evaluated on a low-discrepancy sequence of points as opposed to random point sets as in the Monte Carlo approach. We derive a new discrepancy measure called box discrepancy based on theoretical characterizations of the integration error with respect to a given sequence. We then propose to learn QMC sequences adapted to our setting based on explicit box discrepancy minimization. Our theoretical analyses are complemented with empirical results that demonstrate the effectiveness of classical and adaptive QMC techniques for this problem.
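To make the contrast between the two constructions concrete, here is a minimal sketch (not the authors' code) of Monte Carlo random Fourier features next to a QMC variant, where i.i.d. Gaussian frequencies are replaced by a scrambled Halton low-discrepancy sequence pushed through the Gaussian inverse CDF. All variable names, the choice of the Halton sequence, and the test setup are illustrative assumptions; the paper's adaptive sequences go further by minimizing box discrepancy directly.

```python
import numpy as np
from scipy.stats import norm, qmc

def fourier_features(X, W, b):
    """Feature map z(x) = sqrt(2/D) * cos(W^T x + b), so z(x)·z(y) ≈ k(x, y)."""
    D = W.shape[1]
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

rng = np.random.default_rng(0)
d, D, sigma = 5, 2048, 1.0          # input dim, number of features, kernel bandwidth

# Monte Carlo: frequencies drawn i.i.d. from N(0, I / sigma^2)
W_mc = rng.standard_normal((d, D)) / sigma

# Quasi-Monte Carlo: scrambled Halton points in (0,1)^d, mapped through
# the Gaussian inverse CDF to get low-discrepancy frequencies
u = qmc.Halton(d=d, scramble=True, seed=0).random(D)
W_qmc = norm.ppf(u).T / sigma       # shape (d, D)

b = rng.uniform(0.0, 2.0 * np.pi, D)

# Compare both approximations against the exact Gaussian kernel on toy data
X = rng.standard_normal((50, d))
K_exact = np.exp(-0.5 * ((X[:, None] - X[None]) ** 2).sum(-1) / sigma**2)
for name, W in [("MC", W_mc), ("QMC", W_qmc)]:
    Z = fourier_features(X, W, b)
    err = np.abs(Z @ Z.T - K_exact).max()
    print(f"{name}: max abs kernel approximation error = {err:.4f}")
```

With enough features both errors are small; the point of the paper is that low-discrepancy sequences typically reach a given accuracy with fewer features than plain Monte Carlo sampling.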

**Join the CompressiveSensing subreddit or the Google+ Community and post there!**

Liked this entry? Subscribe to Nuit Blanche's feed; there's more where that came from. You can also subscribe to Nuit Blanche by email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle, and join the conversations on compressive sensing, advanced matrix factorization, and calibration issues on LinkedIn.

## 1 comment:

After reading this paper (http://arxiv.org/ftp/arxiv/papers/1501/1501.01811.pdf), I am confused about CS. Any comments from Igor or other readers?