Hi Igor, I have been following your posts on the internet about matrix factorizations, random matrices, etc. In fact, one of my works (which is expected to be published soon) even appeared on your blog (Nuit Blanche), and this is how I was first exposed to it. Anyway, I have recently submitted for publication an algorithm for randomized LU factorization, which is quite accurate, but perhaps more interesting is that it can run fully in parallel, making it suitable for GPUs. I think it could also be added to your matrix factorization jungle page. What do you think? Here's a link to the paper:

Gil
I responded to Gil that indeed it could be added to the Advanced Matrix Factorization page as long as some implementation was made available. Gil then kindly made it available on the interweb and it is now linked in the Advanced Matrix Factorization Jungle page. Thanks Gil. Here is the paper:
We present a fast randomized algorithm that computes a low rank LU decomposition. Our algorithm uses random projection techniques to efficiently compute a low rank approximation of large matrices. The randomized LU algorithm can be parallelized and further accelerated by using sparse random matrices in its projection step. Several different error bounds are proven for the algorithm's approximations. To prove these bounds, recent results from random matrix theory related to subgaussian matrices are used. As an application, we also show how the algorithm can be utilized to solve problems such as the rank-deficient least squares problem. Numerical examples, which illustrate the performance of the algorithm and compare it to other decomposition methods, are presented.

The implementation is here.
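To give a flavor of the random-projection idea behind the abstract, here is a minimal sketch of a randomized low-rank LU-style factorization. This is not the authors' exact algorithm (their paper gives the precise pivoting scheme and error bounds); it is an illustrative version under the usual assumptions: project the columns of A onto a small random subspace, take a pivoted LU of the sketch to get a triangular basis for the range, then solve a least-squares problem for the second factor. The function name `randomized_lu` and the oversampling parameter are my own choices for the sketch.

```python
import numpy as np
from scipy.linalg import lu

def randomized_lu(A, k, oversample=5, seed=0):
    """Illustrative randomized low-rank LU sketch.

    Returns (P, Lk, B) with A ≈ P @ Lk @ B, where Lk is a lower
    trapezoidal m-by-k factor and B is k-by-n.
    """
    m, n = A.shape
    rng = np.random.default_rng(seed)
    G = rng.standard_normal((n, k + oversample))  # random projection matrix
    Y = A @ G                                     # sketch capturing the range of A
    P, Ly, _ = lu(Y)                              # pivoted LU of the sketch: Y = P @ Ly @ Uy
    Lk = Ly[:, :k]                                # keep the first k triangular columns
    # Least-squares fit of the remaining factor: B ≈ pinv(Lk) @ P.T @ A
    B = np.linalg.lstsq(Lk, P.T @ A, rcond=None)[0]
    return P, Lk, B
```

Because each column of the sketch `Y = A @ G` and each row of the least-squares solve can be computed independently, this structure is naturally parallel, which is the property Gil highlights for GPU execution.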