
Monday, November 28, 2016

Faster Kernel Ridge Regression Using Sketching and Preconditioning / Sharper Bounds for Regression and Low-Rank Approximation with Regularization


Faster Kernel Ridge Regression Using Sketching and Preconditioning by Haim Avron, Kenneth L. Clarkson, David P. Woodruff

Random feature maps, such as random Fourier features, have recently emerged as a powerful technique for speeding up and scaling the training of kernel-based methods such as kernel ridge regression. However, random feature maps only provide crude approximations to the kernel function, so delivering state-of-the-art results requires the number of random features to be very large. Nevertheless, in some cases, even when the number of random features is driven to be as large as the training size, full recovery of the performance of the exact kernel method is not attained. In order to address this issue, we propose to use random feature maps to form preconditioners to be used in solving kernel ridge regression to high accuracy. We provide theoretical conditions on when this yields an effective preconditioner, and empirically evaluate our method and show it is highly effective for datasets of up to one million training examples.
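Since the abstract only describes the approach at a high level, here is a minimal Python sketch of the underlying idea, not the paper's implementation: build random Fourier features Z so that Z Z^T approximates the Gaussian kernel matrix K, then use (Z Z^T + lambda*n*I)^{-1}, applied cheaply via the Woodbury identity, as a preconditioner for conjugate gradient on the exact system (K + lambda*n*I) alpha = y. The feature count s, bandwidth sigma, and regularization lambda below are illustrative choices, not the paper's prescription.

import numpy as np
from scipy.linalg import cho_factor, cho_solve
from scipy.sparse.linalg import cg, LinearOperator

rng = np.random.default_rng(0)
n, d, s, lam, sigma = 2000, 10, 200, 1e-2, 1.0  # s = number of random features (illustrative)

# Toy regression data
X = rng.standard_normal((n, d))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n)

# Exact Gaussian kernel matrix K_ij = exp(-||x_i - x_j||^2 / (2 sigma^2))
sq = np.sum(X**2, axis=1)
K = np.exp(-(sq[:, None] + sq[None, :] - 2.0 * X @ X.T) / (2.0 * sigma**2))

# Random Fourier features: Z Z^T approximates K
W = rng.standard_normal((d, s)) / sigma
b = rng.uniform(0.0, 2.0 * np.pi, s)
Z = np.sqrt(2.0 / s) * np.cos(X @ W + b)

# Preconditioner M^{-1} = (Z Z^T + lam*n*I)^{-1}, applied via the Woodbury
# identity so that only an s x s matrix is factored.
small = cho_factor(Z.T @ Z + lam * n * np.eye(s))
def apply_prec(v):
    return (v - Z @ cho_solve(small, Z.T @ v)) / (lam * n)

A_op = LinearOperator((n, n), matvec=lambda v: K @ v + lam * n * v)
M_op = LinearOperator((n, n), matvec=apply_prec)

alpha, info = cg(A_op, y, M=M_op)  # preconditioned CG on the exact kernel system
print("CG converged:", info == 0)

The paper's contribution is the theoretical characterization of when a preconditioner of this form is effective; the code above is only the plumbing of that idea on a small synthetic problem.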

Sharper Bounds for Regression and Low-Rank Approximation with Regularization by Haim Avron, Kenneth L. Clarkson, David P. Woodruff

The technique of matrix sketching, such as the use of random projections, has been shown in recent years to be a powerful tool for accelerating many important statistical learning techniques. Research has so far focused largely on using sketching for the "vanilla" un-regularized versions of these techniques. 
Here we study sketching methods for regularized variants of linear regression, low rank approximations, and canonical correlation analysis. We study regularization both in a fairly broad setting, and in the specific context of the popular and widely used technique of ridge regularization; for the latter, as applied to each of these problems, we show algorithmic resource bounds in which the statistical dimension appears in places where in previous bounds the rank would appear. The statistical dimension is always smaller than the rank, and decreases as the amount of regularization increases. In particular, for the ridge low-rank approximation problem $\min_{Y,X} \|YX - A\|_F^2 + \lambda\|Y\|_F^2 + \lambda\|X\|_F^2$, where $Y \in \mathbb{R}^{n \times k}$ and $X \in \mathbb{R}^{k \times d}$, we give an approximation algorithm needing
$$O(\mathrm{nnz}(A)) + \tilde{O}\big((n+d)\,\varepsilon^{-1} k \min\{k, \varepsilon^{-1}\,\mathrm{sd}_\lambda(Y^*)\}\big) + \tilde{O}\big(\varepsilon^{-8}\,\mathrm{sd}_\lambda(Y^*)^3\big)$$
time, where $\mathrm{sd}_\lambda(Y^*) \le k$ is the statistical dimension of $Y^*$, $Y^*$ is an optimal $Y$, $\varepsilon$ is an error parameter, and $\mathrm{nnz}(A)$ is the number of nonzero entries of $A$.
We also study regularization in a much more general setting. For example, we obtain sketching-based algorithms for the low-rank approximation problem $\min_{X,Y} \|YX - A\|_F^2 + f(Y,X)$, where $f(\cdot,\cdot)$ is a regularizing function satisfying some very general conditions (chiefly, invariance under orthogonal transformations).
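To make the role of the statistical dimension concrete: for a matrix with singular values sigma_i, the statistical dimension at regularization level lambda is sum_i sigma_i^2 / (sigma_i^2 + lambda), which is at most the rank and shrinks as lambda grows. The short Python sketch below computes this quantity and runs an illustrative sketch-and-solve ridge regression whose sketch size is tied to the statistical dimension rather than to the number of columns. It is a toy demonstration of the phenomenon the paper quantifies, not the paper's algorithm; the Gaussian sketch and the factor of 10 in the sketch size are arbitrary choices.

import numpy as np

rng = np.random.default_rng(1)

def statistical_dimension(A, lam):
    """sd_lambda(A) = sum_i sigma_i^2 / (sigma_i^2 + lambda)."""
    sv = np.linalg.svd(A, compute_uv=False)
    return float(np.sum(sv**2 / (sv**2 + lam)))

# A tall matrix with decaying spectrum: sd_lambda is far below the rank.
n, d, lam = 5000, 100, 1.0
A = rng.standard_normal((n, d)) * (0.8 ** np.arange(d))  # column j scaled by 0.8^j
b = rng.standard_normal(n)

sd = statistical_dimension(A, lam)
print("rank:", np.linalg.matrix_rank(A), " sd_lambda: %.1f" % sd)

# Illustrative sketch-and-solve ridge regression with a Gaussian sketch whose
# size depends on sd_lambda instead of d.
m = int(10 * sd) + 10
S = rng.standard_normal((m, n)) / np.sqrt(m)
SA, Sb = S @ A, S @ b
x_sketch = np.linalg.solve(SA.T @ SA + lam * np.eye(d), SA.T @ Sb)
x_exact = np.linalg.solve(A.T @ A + lam * np.eye(d), A.T @ b)

obj = lambda x: np.linalg.norm(A @ x - b)**2 + lam * np.linalg.norm(x)**2
print("objective ratio (sketched / exact): %.3f" % (obj(x_sketch) / obj(x_exact)))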

Image Credit: NASA/JPL-Caltech/Space Science Institute
W00102494.jpg was taken on 2016-11-25 17:59 (UTC) and received on Earth 2016-11-26 04:45 (UTC). The camera was pointing toward Saturn, and the image was taken using the MT3 and CL2 filters.



Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there!
Liked this entry? Subscribe to Nuit Blanche's feed; there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.
