Woohoo! Following up on a previous post, Joachim lets me know of the release of an implementation:
Hi Igor,
The library is now up. The name changed to McKernel. Thanks for your interest.
https://github.com/curto2/mckernel
https://arxiv.org/pdf/1702.08159
Cheers,
Curtó
Thanks!
McKernel: A Library for Approximate Kernel Expansions in Log-linear Time by Joachim D. Curtó, Irene C. Zarza, Feng Yang, Alexander J. Smola, Fernando De La Torre, Chong-Wah Ngo, Luc Van Gool
Kernel Methods Next Generation (KMNG) introduces a framework for using kernel approximations in the mini-batch setting with an SGD optimizer as an alternative to Deep Learning. McKernel is a C++ library for large-scale machine learning with KMNG. It contains a CPU-optimized implementation of the Fastfood algorithm, which allows approximated kernel expansions to be computed in log-linear time. The algorithm requires computing products involving Walsh Hadamard Transform (WHT) matrices. A cache-friendly SIMD Fast Walsh Hadamard Transform (FWHT) that achieves compelling speed and outperforms current state-of-the-art methods has been developed. McKernel obtains non-linear classification by combining Fastfood with a linear classifier.
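To give a rough idea of what a Fastfood-style feature map looks like, here is a minimal C++ sketch. This is not McKernel's API: the plain recursive-free fwht() below, the diagonal/permutation names (B, G, Pi, S), the bandwidth sigma and the normalization are illustrative assumptions based on the Fastfood construction, while the library itself uses its own SIMD, cache-friendly FWHT and interfaces.

```cpp
// Sketch only: structured approximation of a Gaussian random projection,
// V = S * H * G * Pi * H * B, applied in O(d log d) via the FWHT.
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <iostream>
#include <numeric>
#include <random>
#include <vector>

// In-place (unnormalized) Fast Walsh-Hadamard Transform; d must be a power of two.
void fwht(std::vector<double>& x) {
    const std::size_t d = x.size();
    for (std::size_t len = 1; len < d; len <<= 1)
        for (std::size_t i = 0; i < d; i += len << 1)
            for (std::size_t j = i; j < i + len; ++j) {
                const double a = x[j], b = x[j + len];
                x[j] = a + b;
                x[j + len] = a - b;
            }
}

int main() {
    const std::size_t d = 8;   // input dimension (power of two)
    std::mt19937 rng(42);

    // Random diagonals and permutation that replace a dense Gaussian matrix:
    // B = random signs, Pi = random permutation, G = Gaussian diagonal, S = scaling.
    std::vector<double> B(d), G(d), S(d);
    std::vector<std::size_t> Pi(d);
    std::iota(Pi.begin(), Pi.end(), 0);
    std::shuffle(Pi.begin(), Pi.end(), rng);
    std::bernoulli_distribution sign(0.5);
    std::normal_distribution<double> gauss(0.0, 1.0);
    double g_norm2 = 0.0;
    for (std::size_t i = 0; i < d; ++i) {
        B[i] = sign(rng) ? 1.0 : -1.0;
        G[i] = gauss(rng);
        g_norm2 += G[i] * G[i];
    }
    // Rough row-length correction (chi-distributed); the paper gives the exact scaling.
    std::chi_squared_distribution<double> chi2(static_cast<double>(d));
    for (std::size_t i = 0; i < d; ++i)
        S[i] = std::sqrt(chi2(rng)) / std::sqrt(g_norm2);

    // Apply V to one input vector x, reading the chain right to left.
    std::vector<double> x = {0.5, -1.0, 0.25, 2.0, 0.0, 1.0, -0.5, 0.75};
    std::vector<double> v = x;
    for (std::size_t i = 0; i < d; ++i) v[i] *= B[i];     // B
    fwht(v);                                              // H
    std::vector<double> p(d);
    for (std::size_t i = 0; i < d; ++i) p[i] = v[Pi[i]];  // Pi
    for (std::size_t i = 0; i < d; ++i) p[i] *= G[i];     // G
    fwht(p);                                              // H
    const double sigma = 1.0;                             // assumed RBF bandwidth
    for (std::size_t i = 0; i < d; ++i) p[i] *= S[i] / (sigma * std::sqrt(d)); // S

    // Random Fourier features for the Gaussian kernel: [cos(Vx), sin(Vx)] / sqrt(d).
    for (std::size_t i = 0; i < d; ++i)
        std::cout << std::cos(p[i]) / std::sqrt(d) << " "
                  << std::sin(p[i]) / std::sqrt(d) << "\n";
    return 0;
}
```

In McKernel itself, the FWHT steps are the SIMD, cache-friendly implementation the abstract highlights, and the resulting features are fed to a linear classifier trained with mini-batch SGD to obtain non-linear classification.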