This paper shows that a judicious incremental use of random features works as if one were sweeping through several Tikhonov regularization factors. This is interesting: Generalization Properties of Learning with Random Features by Alessandro Rudi, Raffaello Camoriano, Lorenzo Rosasco
We study the generalization properties of regularized learning with random features in the statistical learning theory framework. We show that optimal learning errors can be achieved with a number of features smaller than the number of examples. As a byproduct, we also show that learning with random features can be seen as a form of regularization, rather than only a way to speed up computations.
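To make the "number of features as regularization" viewpoint concrete, here is a minimal sketch (not the authors' code) of ridge regression on random Fourier features, where the feature count D is the knob being varied while the explicit ridge parameter is kept tiny. The function names and toy data are illustrative assumptions.

```python
# Sketch: random Fourier features + ridge regression, with the number of
# features D acting as the main regularization knob (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def rff_features(X, W, b):
    """Map inputs X of shape (n, d) to D random Fourier features (Gaussian kernel)."""
    return np.sqrt(2.0 / W.shape[1]) * np.cos(X @ W + b)

def rff_ridge_fit(X, y, D, sigma=1.0, lam=1e-6):
    """Fit ridge regression on D random features; lam is kept very small so
    that D itself does most of the regularizing."""
    d = X.shape[1]
    W = rng.normal(scale=1.0 / sigma, size=(d, D))
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    Z = rff_features(X, W, b)
    w = np.linalg.solve(Z.T @ Z + lam * np.eye(D), Z.T @ y)
    return W, b, w

def rff_ridge_predict(X, W, b, w):
    return rff_features(X, W, b) @ w

# Toy 1-D regression: few features give a smoother (more regularized) fit,
# many features give a richer one; all D values are below n = 200 examples.
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(2 * X[:, 0]) + 0.3 * rng.normal(size=200)
Xte = np.linspace(-3, 3, 400)[:, None]
yte = np.sin(2 * Xte[:, 0])

for D in (5, 20, 100):
    W, b, w = rff_ridge_fit(X, y, D)
    err = np.mean((rff_ridge_predict(Xte, W, b, w) - yte) ** 2)
    print(f"D = {D:4d}  test MSE = {err:.4f}")
```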
1 comment:
Thanks for the post. In fact, the incremental approach is just a plus: the number of features always plays the role of a regularization parameter. Most importantly, this is the first proof that you can get optimal rates with a number of features smaller than the number of points. These results complement those for Nyström-like sampling: http://arxiv.org/abs/1507.04717.
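A hedged sketch of the incremental reading mentioned in the comment: grow the random-feature count in small blocks and keep the count that gives the best hold-out error, so the number of features is tuned exactly like a regularization parameter. The block size, data, and names are assumptions for illustration, not the authors' implementation.

```python
# Sketch: incrementally add random features and select the count D by
# validation error, treating D as the regularization parameter.
import numpy as np

rng = np.random.default_rng(1)

def features(X, W, b):
    return np.sqrt(2.0 / W.shape[1]) * np.cos(X @ W + b)

# Toy data split into train / validation.
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(2 * X[:, 0]) + 0.3 * rng.normal(size=300)
Xtr, ytr, Xva, yva = X[:200], y[:200], X[200:], y[200:]

W = np.empty((1, 0))   # random frequencies accumulated so far
b = np.empty(0)        # random phases accumulated so far
best = (np.inf, 0)

for step in range(20):
    # Append a block of 10 fresh random features to the existing ones.
    W = np.hstack([W, rng.normal(size=(1, 10))])
    b = np.concatenate([b, rng.uniform(0, 2 * np.pi, size=10)])
    Ztr, Zva = features(Xtr, W, b), features(Xva, W, b)
    w = np.linalg.solve(Ztr.T @ Ztr + 1e-6 * np.eye(W.shape[1]), Ztr.T @ ytr)
    err = np.mean((Zva @ w - yva) ** 2)
    if err < best[0]:
        best = (err, W.shape[1])
    print(f"D = {W.shape[1]:3d}  validation MSE = {err:.4f}")

print(f"selected D = {best[1]} (validation MSE {best[0]:.4f})")
```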