You know what happens when random matrices are Too Damn Large, so yuuuggge they can neither be held in memory nor processed? Well, the answer is to fake it till you make it. I also like how the authors use phase transition diagrams to show the similarity of the two algorithms.
How to Fake Multiply by a Gaussian Matrix by Michael Kapralov, Vamsi K. Potluru, David P. Woodruff
Have you ever wanted to multiply an n×d matrix X, with n ≫ d, on the left by an m×n matrix G̃ of i.i.d. Gaussian random variables, but could not afford to do it because it was too slow? In this work we propose a new randomized m×n matrix T, for which one can compute T⋅X in only O(nnz(X)) + Õ(m^1.5⋅d^3) time, and for which the total variation distance between the distributions T⋅X and G̃⋅X is as small as desired, i.e., less than any positive constant. Here nnz(X) denotes the number of non-zero entries of X. Assuming nnz(X) ≫ m^1.5⋅d^3, this is a significant savings over the naïve O(nnz(X)⋅m) time to compute G̃⋅X. Moreover, since the total variation distance is small, we can provably use T⋅X in place of G̃⋅X in any application and have the same guarantees as if we were using G̃⋅X, up to a small positive constant in error probability. We apply this transform to nonnegative matrix factorization (NMF) and support vector machines (SVM).
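The abstract doesn't spell out how T is built, but the stated running time suggests the usual two-stage pattern: first apply a sparse embedding with one nonzero per column (so the first stage costs O(nnz(X))), then multiply the small result by a genuinely Gaussian matrix. Here is a minimal Python sketch of that general pattern; the names countsketch and fake_gaussian_multiply, the parameter k, and the CountSketch-plus-Gaussian composition itself are my own illustration and assumptions, not necessarily the exact transform T from the paper.

```python
import numpy as np
import scipy.sparse as sp

def countsketch(k, n, rng):
    """Build a k x n CountSketch matrix: one random +-1 entry per column."""
    rows = rng.integers(0, k, size=n)          # one target row per column
    signs = rng.choice([-1.0, 1.0], size=n)    # random sign per column
    return sp.csr_matrix((signs, (rows, np.arange(n))), shape=(k, n))

def fake_gaussian_multiply(X, m, k, seed=None):
    """Approximate G~ @ X (G~ an m x n i.i.d. Gaussian matrix) in two stages.

    Illustrative sketch only: the real transform T from the paper comes
    with a total variation guarantee that this simple composition is not
    claimed to match.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    S = countsketch(k, n, rng)
    SX = S @ X                       # O(nnz(X)): S has one nonzero per column
    G = rng.standard_normal((m, k))  # small dense Gaussian, m x k
    return G @ SX                    # O(m * k * d) for the dense product
```

With an intermediate dimension k = poly(d) that is independent of n, the total cost is O(nnz(X)) + O(m⋅k⋅d), mirroring the O(nnz(X)) + Õ(m^1.5⋅d^3) bound from the abstract and beating the naïve O(nnz(X)⋅m) whenever X is large and sparse.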
1 comment:
https://archive.org/details/bitsavers_mitreESDTe69266ANewMethodofGeneratingGaussianRando_2706065