Wednesday, August 24, 2016

How to Fake Multiply by a Gaussian Matrix

You know what happens when Random Matrices Are Too Damn Large, so yuuuggge they cannot be held in memory or processed? Well, the answer is to fake them till you make it. I like how the authors use phase transition diagrams to show the similarity of the two algorithms.

How to Fake Multiply by a Gaussian Matrix by Michael Kapralov, Vamsi K. Potluru, David P. Woodruff

Have you ever wanted to multiply an n×d matrix X, with n ≫ d, on the left by an m×n matrix G̃ of i.i.d. Gaussian random variables, but could not afford to do it because it was too slow? In this work we propose a new randomized m×n matrix T, for which one can compute TX in only O(nnz(X)) + Õ(m^1.5 · d^3) time, for which the total variation distance between the distributions TX and G̃X is as small as desired, i.e., less than any positive constant. Here nnz(X) denotes the number of non-zero entries of X. Assuming nnz(X) ≥ m^1.5 · d^3, this is a significant savings over the naïve O(nnz(X) · m) time to compute G̃X. Moreover, since the total variation distance is small, we can provably use TX in place of G̃X in any application and have the same guarantees as if we were using G̃X, up to a small positive constant in error probability. We apply this transform to nonnegative matrix factorization (NMF) and support vector machines (SVM).
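To get a feel for why such a speedup is plausible, here is a minimal NumPy sketch of the general recipe of composing a sparse embedding (applied in O(nnz(X)) time) with a small dense Gaussian matrix. This is only an illustration of the flavor of the approach; the function names, the CountSketch-style embedding, and the intermediate dimension `t = oversample * m` are my own assumptions, not the paper's exact transform or parameter choices.

```python
import numpy as np

def countsketch_apply(X, t, rng):
    """Compute S @ X for a t x n CountSketch S in O(nnz(X)) time.

    Each row i of X is hashed to a random bucket h(i) in [t] and
    added with a random sign, so S has one nonzero per column.
    """
    n, d = X.shape
    buckets = rng.integers(0, t, size=n)        # h(i)
    signs = rng.choice([-1.0, 1.0], size=n)     # sigma(i)
    SX = np.zeros((t, d))
    np.add.at(SX, buckets, signs[:, None] * X)  # scatter-add rows
    return SX

def fake_gaussian_multiply(X, m, oversample=4, rng=None):
    """Illustrative stand-in for TX: a small i.i.d. Gaussian matrix
    applied to a CountSketch of X. Cost: O(nnz(X)) for the sketch,
    then O(m * t * d) for the dense multiply, with t = oversample * m,
    versus O(nnz(X) * m) for the naive Gaussian multiply."""
    rng = np.random.default_rng() if rng is None else rng
    t = oversample * m                          # heuristic intermediate size
    SX = countsketch_apply(X, t, rng)           # cheap sparse pass over X
    G = rng.standard_normal((m, t))             # small dense Gaussian
    return G @ SX

rng = np.random.default_rng(0)
X = rng.standard_normal((10000, 20))
TX = fake_gaussian_multiply(X, m=50, rng=rng)
print(TX.shape)  # (50, 20)
```

Note that the dense Gaussian here only touches the t×d sketch SX, never the full n×d matrix, which is where the savings come from when nnz(X) is large; the paper's contribution is proving that a carefully constructed T of this general kind is close to G̃X in total variation distance, which this toy version does not guarantee.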

Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there!
Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.
