## Thursday, June 06, 2019

### The low-rank eigenvalue problem

**Nuit Blanche is now on Twitter: @NuitBlog**

Since most Big Data matrices are approximately low rank, there is a computationally efficient way of computing their eigenvalues. Advanced matrix factorizations are coming to eigenvalue problems. What if, instead of the randomized SVD suggested in the paper, one were to devise specific matrix factorizations that make $BA$ very simple/easy to compute? Something like subspace clustering, dictionary learning, or binary/Boolean factorization, for instance. In another direction, what if the initial data matrix were non-normal: how does this sort of approximation hold?

The low-rank eigenvalue problem by Yuji Nakatsukasa

The nonzero eigenvalues of $AB$ are equal to those of $BA$: an identity that holds as long as the products are square, even when $A, B$ are rectangular. This fact naturally suggests an efficient algorithm for computing eigenvalues and eigenvectors of a low-rank matrix $X = AB$ with $A, B^T \in \mathbb{C}^{N\times r}$, $N \gg r$: form the small $r\times r$ matrix $BA$ and find its eigenvalues and eigenvectors. For nonzero eigenvalues, the eigenvectors are related by $ABv = \lambda v \Leftrightarrow BAw = \lambda w$ with $w = Bv$, and the same holds for Jordan vectors. For zero eigenvalues, the Jordan blocks can change sizes between $AB$ and $BA$, and we characterize this behavior.
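The trick described in the abstract is easy to try numerically. Here is a minimal sketch (not the paper's code; matrix sizes and tolerances are illustrative): instead of eigendecomposing the large $N\times N$ matrix $AB$, we eigendecompose the small $r\times r$ matrix $BA$ and recover the eigenvectors of $AB$ via $v = Aw$, since $BAw = \lambda w$ implies $(AB)(Aw) = A(BAw) = \lambda (Aw)$.

```python
import numpy as np

rng = np.random.default_rng(0)
N, r = 1000, 5  # illustrative sizes, N >> r

# Low-rank matrix X = A @ B, with A of shape (N, r) and B of shape (r, N)
A = rng.standard_normal((N, r))
B = rng.standard_normal((r, N))

# Eigendecompose the small r x r matrix B @ A instead of the N x N matrix A @ B
lam, W = np.linalg.eig(B @ A)

# Recover eigenvectors of A @ B: if (BA) w = lam w, then (AB)(A w) = lam (A w)
V = A @ W

# Check that each column of V is an eigenvector of X = A @ B
X = A @ B
residual = np.linalg.norm(X @ V - V * lam) / np.linalg.norm(V)
print(residual < 1e-8)
```

The cost is dominated by the two $N\times r$ matrix products rather than an $O(N^3)$ dense eigendecomposition, which is the point of the identity for approximately low-rank data matrices.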

Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email.