Thursday, June 06, 2019

The low-rank eigenvalue problem

** Nuit Blanche is now on Twitter: @NuitBlog **


Since most Big Data matrices are approximately low rank, there is a computationally efficient way of computing their eigenvalues: advanced matrix factorizations are coming to eigenvalue problems. What if, instead of the randomized SVDs suggested in the paper, one were to devise specific matrix factorizations that make BA very simple to compute? Something like subspace clustering, dictionary learning, or binary/Boolean factorization, for instance; a rough sketch of the randomized route is given below. In another direction, what if the initial data matrix were non-normal: how well does this sort of approximation hold?
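Here is a minimal NumPy sketch of that pipeline, under the assumption of a square data matrix X that is approximately rank r: a plain Gaussian range finder gives a factorization X ≈ AB with A = Q and B = QᴴX, and the nonzero eigenvalues of X are then estimated from the small matrix BA. The function and parameter names below are my own illustration, not code from the paper.

import numpy as np

def lowrank_eigs_randomized(X, r, oversample=10, seed=0):
    # Hypothetical sketch (not from the paper): estimate the nonzero
    # eigenvalues of a square, approximately rank-r matrix X.
    # A Gaussian range finder gives Q with X ~ Q Q^H X, so X ~ A B with
    # A = Q (N x k) and B = Q^H X (k x N); the nonzero eigenvalues of AB
    # are then read off the small k x k matrix BA = Q^H X Q.
    rng = np.random.default_rng(seed)
    N = X.shape[0]
    k = min(N, r + oversample)
    Q, _ = np.linalg.qr(X @ rng.standard_normal((N, k)))
    A, B = Q, Q.conj().T @ X
    evals, W = np.linalg.eig(B @ A)          # small k x k eigenproblem
    V = A @ W                                # eigenvectors of AB: v = A w
    return evals, V

# Toy check on an exactly rank-5 matrix of size 500 x 500.
rng = np.random.default_rng(1)
N, r = 500, 5
X = rng.standard_normal((N, r)) @ rng.standard_normal((r, N))
evals, V = lowrank_eigs_randomized(X, r)
top = evals[np.argsort(-np.abs(evals))[:r]]  # r largest-magnitude estimates
print(top)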


The low-rank eigenvalue problem by Yuji Nakatsukasa
The nonzero eigenvalues of AB are equal to those of BA: an identity that holds as long as the products are square, even when A, B are rectangular. This fact naturally suggests an efficient algorithm for computing eigenvalues and eigenvectors of a low-rank matrix X = AB with A, Bᵀ ∈ ℂ^(N×r), N ≫ r: form the small r×r matrix BA and find its eigenvalues and eigenvectors. For nonzero eigenvalues, the eigenvectors are related by ABv = λv ⇔ BAw = λw with w = Bv, and the same holds for Jordan vectors. For zero eigenvalues, the Jordan blocks can change sizes between AB and BA, and we characterize this behavior.
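For readers who want a quick numerical check of the statement above, here is a small NumPy sketch (my own illustration, not code from the paper), using random factors A and Bᵀ in ℂ^(N×r): the eigenvalues of the r×r matrix BA are the nonzero eigenvalues of the N×N matrix AB, and each eigenvector w of BA yields an eigenvector v = Aw of AB, which is the abstract's relation up to scaling since B(Aw) = λw.

import numpy as np

# Random complex factors A (N x r) and B (r x N), for illustration only.
rng = np.random.default_rng(0)
N, r = 200, 4
A = rng.standard_normal((N, r)) + 1j * rng.standard_normal((N, r))
B = rng.standard_normal((r, N)) + 1j * rng.standard_normal((r, N))

# Eigenpairs of the small r x r matrix BA.
evals, W = np.linalg.eig(B @ A)

# Each eigenvector w of BA maps to an eigenvector v = A w of the N x N matrix AB.
V = A @ W
residuals = np.linalg.norm(A @ (B @ V) - V * evals, axis=0) / np.linalg.norm(V, axis=0)
print("eig(BA):", np.round(evals, 3))
print("max relative residual of ABv = lambda*v:", residuals.max())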


Follow @NuitBlog or join the CompressiveSensing Reddit, the Facebook page, the Compressive Sensing group on LinkedIn, or the Advanced Matrix Factorization group on LinkedIn.

Liked this entry? Subscribe to Nuit Blanche's feed; there's more where that came from. You can also subscribe to Nuit Blanche by Email.

Other links:
Paris Machine Learning: Meetup.com || @Archives || LinkedIn || Facebook || @ParisMLGroup
About LightOn: Newsletter || @LightOnIO || on LinkedIn || on CrunchBase || our Blog
About myself: LightOn || Google Scholar || LinkedIn || @IgorCarron || Homepage || ArXiv
