Wednesday, January 29, 2014

Additions to the Advanced Matrix Factorization Jungle Page



I am slow, but I eventually took the hint from Francis Bach that many of the advanced methods used nowadays essentially boil down to some matrix factorization or decomposition. Everybody who has done some linear algebra has heard about the QR factorization and the famous LAPACK subroutines that power most science and engineering workflows. In recent years, however, we have seen a slew of new factorizations that add constraints on the factors of these classical decompositions.

I was pleasantly surprised to see recently that the Advanced Matrix Factorization Jungle had about 50 G+ "likes", which puts it between the Big Picture in Compressive Sensing (8 G+) and Nuit Blanche (118 G+). This got me thinking about how to extend the listing a little.

First, I have added compressive sensing as a subset of MMV; a small recovery sketch follows the list:
  • Multiple Measurement Vector (MMV): Y = A X with X unknown and the rows of X sparse.
    • Compressive Sensing: Y = A X with X unknown and sparse; this is the special case where X has a single column.
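Here is a minimal sketch of that single-column case: recovering a sparse x from y = A x by iterative soft-thresholding (ISTA). The dimensions, sparsity level, regularization weight, and iteration count below are illustrative assumptions, not anything taken from the Jungle page:

import numpy as np

rng = np.random.default_rng(0)
m, n, k = 40, 100, 5                            # measurements, ambient dimension, sparsity

A = rng.standard_normal((m, n)) / np.sqrt(m)    # random sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true                                  # Y = A X with X a single sparse column

# ISTA: proximal gradient for min 0.5*||A x - y||^2 + lam*||x||_1
lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1/L, with L the gradient's Lipschitz constant
x = np.zeros(n)
for _ in range(500):
    g = A.T @ (A @ x - y)                       # gradient of the smooth term
    z = x - step * g
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold

print("recovery error:", np.linalg.norm(x - x_true))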
Also, after watching the presentation of Rene Vidal [1] at the first French-German Mathematical Image Analysis Conference, it struck me that I had not added his and others' subspace clustering work. I also had not included the recent Random Kitchen Sinks approach, so I added these two sections:
  • Kernel Factorizations 
  • Subspace Clustering
Indeed, for the Kernel Factorizations section we are talking about no less than the Random Kitchen Sinks, i.e., an implementation of the separation of variables in kernel learning, a family that also includes the Fast Multipole Methods (FMM) used in physics for the past twenty years. Let us note that the FMM changed our world substantially; let's bet that the same concept of variable separation will do the same in machine learning over the next few years. In particular, if you step back a little, you'll notice that there is currently no FMM or kernel factorization that puts an additional constraint on the factors of the decomposition. Think about it: do the FMM or the Random Kitchen Sinks change as a result of knowing that the unknowns are sparse, group-sparse, and so on? And then there is the recent case that matrix factorization and its bounds might provide us with clear insight into how to build nonlinear representations of the identity (neural networks) [2] :-)
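To make that separation of variables concrete, here is a minimal sketch of Rahimi and Recht's random Fourier features, the starting point of the Random Kitchen Sinks, for the Gaussian kernel: K(x,x') = exp(-||x-x'||^2 / (2 sigma^2)) is approximated by the plain inner product z(x)^T z(x'). The dimensions, bandwidth, and number of features are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(0)
d, D, sigma = 10, 2000, 3.0            # input dimension, number of random features, bandwidth

# Random Fourier features: z(x) = sqrt(2/D) * cos(W x + b), with rows of W
# drawn from N(0, I/sigma^2) and b from Uniform[0, 2*pi] (Rahimi & Recht, 2007)
W = rng.standard_normal((D, d)) / sigma
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

def z(x):
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

x, xp = rng.standard_normal(d), rng.standard_normal(d)
exact = np.exp(-np.linalg.norm(x - xp) ** 2 / (2.0 * sigma ** 2))
approx = z(x) @ z(xp)
print(f"exact kernel {exact:.4f} vs feature-map approximation {approx:.4f}")

Once the kernel is separated this way, any kernel machine can be trained with linear methods on the features z(x), which is where the speed-up comes from.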

Here is what I added (subject to heavy modification):

Kernel Factorizations, Positive Kernel K(x,x') = G(f(x) · f(x'))
Phase Transitions:
  • None so far.
Implementations:
  • None so far.

Subspace Clustering
Phase Transitions:
  • None so far.
Implementations:
  • Sparse Q_i c_i, X_i c_i = 0, 1^T c_i = 1, C is unknown, Y is known; Sparse Manifold Clustering and Embedding by Ehsan Elhamifar, Rene Vidal. Sparse Manifold Clustering and Embedding (SMCE) is an algorithm based on sparse representation theory for clustering and dimensionality reduction of data lying in a union of nonlinear manifolds (a sketch of the per-point program follows).
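For reference, here is a minimal sketch of that per-point program, written with cvxpy. The toy data, the direction vectors in X_i, and the proximity weights on the diagonal of Q_i follow the SMCE construction as I understand it, but the specifics below are illustrative assumptions rather than the authors' code (and the paper actually relaxes the equality constraint into a penalized least-squares term in practice):

import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
Y = rng.standard_normal((3, 30))              # toy data: 30 points in R^3, one per column

i = 0                                          # solve the program for point y_i
diffs = np.delete(Y - Y[:, [i]], i, axis=1)    # y_j - y_i for all j != i
dists = np.linalg.norm(diffs, axis=0)
X = diffs / dists                              # X_i: normalized direction vectors
Q = np.diag(dists / dists.sum())               # Q_i: proximity weights (closer points are cheaper)

c = cp.Variable(X.shape[1])
problem = cp.Problem(cp.Minimize(cp.norm1(Q @ c)),
                     [X @ c == 0, cp.sum(c) == 1])
problem.solve()
print("sparse affinities for point", i, ":", np.round(c.value, 3))

Solving this for every i yields a sparse affinity matrix C, on which spectral clustering then recovers the manifolds.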


