I note the following from the conclusions:
Based on our experiments, random projection of the input data points onto a low-dimensional space with p0 = 10 or smaller dimensions yields very accurate low-rank approximations. In fact, the accuracy of our proposed method is very close to the best rank-r approximation obtained by the exact eigenvalue decomposition or SVD of kernel matrices.
Randomized Clustered Nystrom for Large-Scale Kernel Machines by Farhad Pourkamali-Anaraki, Stephen Becker
The Nystrom method has been popular for generating the low-rank approximation of kernel matrices that arise in many machine learning problems. The approximation quality of the Nystrom method depends crucially on the number of selected landmark points and the selection procedure. In this paper, we present a novel algorithm to compute the optimal Nystrom low-rank approximation when the number of landmark points exceeds the target rank. Moreover, we introduce a randomized algorithm for generating landmark points that is scalable to large-scale data sets. The proposed method performs K-means clustering on low-dimensional random projections of a data set and, thus, leads to significant savings for high-dimensional data sets. Our theoretical results characterize the tradeoffs between the accuracy and efficiency of our proposed method. Extensive experiments demonstrate the competitive performance as well as the efficiency of our proposed method.
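To make the idea concrete, here is a minimal numpy sketch of the pipeline the abstract describes: project the data to p0 dimensions, run a lightweight K-means on the projections to get landmark points, then form a rank-r Nystrom approximation of an RBF kernel matrix. This is an illustrative assumption-laden sketch (all parameter names, the RBF kernel choice, and the simple Lloyd iterations are mine), not the paper's exact algorithm.

```python
import numpy as np

def clustered_nystrom(X, rank, n_landmarks=15, p0=10, gamma=0.1, n_iter=10, seed=0):
    """Sketch of randomized clustered Nystrom (illustrative, not the paper's code).

    1. Randomly project X (n x d) down to p0 dimensions.
    2. Run a few Lloyd iterations of K-means on the projections.
    3. Use per-cluster means of the original points as landmarks and
       form the Nystrom approximation of the RBF kernel.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape

    # Step 1: Gaussian random projection to p0 dimensions.
    R = rng.standard_normal((d, p0)) / np.sqrt(p0)
    Y = X @ R

    # Step 2: lightweight K-means on the projected points.
    centers = Y[rng.choice(n, n_landmarks, replace=False)]
    for _ in range(n_iter):
        labels = np.argmin(((Y[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for k in range(n_landmarks):
            pts = Y[labels == k]
            if len(pts):
                centers[k] = pts.mean(axis=0)

    # Landmarks in the original space: per-cluster means of X
    # (fall back to a random point if a cluster went empty).
    Z = np.stack([X[labels == k].mean(axis=0) if (labels == k).any()
                  else X[rng.integers(n)] for k in range(n_landmarks)])

    # Step 3: Nystrom approximation with K(x, z) = exp(-gamma * ||x - z||^2).
    def rbf(A, B):
        D = ((A[:, None, :] - B[None]) ** 2).sum(-1)
        return np.exp(-gamma * D)

    C = rbf(X, Z)   # n x m cross-kernel
    W = rbf(Z, Z)   # m x m landmark kernel

    # Rank-r truncated pseudo-inverse of W via eigendecomposition of the
    # small m x m matrix, clipping near-zero eigenvalues for stability.
    vals, vecs = np.linalg.eigh(W)
    idx = np.argsort(vals)[::-1][:rank]
    vals, vecs = vals[idx], vecs[:, idx]
    keep = vals > 1e-10 * vals.max()
    vals, vecs = vals[keep], vecs[:, keep]
    W_r_pinv = (vecs / vals) @ vecs.T

    return C @ W_r_pinv @ C.T   # approximation of the full n x n kernel matrix
```

The point of the random projection is that K-means runs on p0-dimensional points instead of d-dimensional ones, so the clustering cost no longer scales with the ambient dimension, which is where the savings for high-dimensional data come from.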