In Heuristics for Rank Proxy and how it changes everything...., I mentioned how some folks found that log-det was a better heuristic than the nuclear norm as a proxy for the rank functional when searching for low-rank solutions. In Information Theoretical Clustering via Semidefinite Programming, Meihong Wang and Fei Sha argue against the log-det heuristic. In the same vein, in Limitations of Matrix Completion via Trace Norm Minimization, Xiaoxiao Shi and Philip S. Yu point out the shortcomings of the trace norm approach (All is not well with the Nuclear norm world...). All these attempts at quantifying the usefulness of a given proxy are helpful until you realize there is a flurry of approaches to rank minimization, either using other proxies for the rank functional or different approximations to the low-rank problem. Here are the few I have noticed so far (comments are welcome); a small numerical sketch comparing the two proxies mentioned above follows the list:
- IRLS-M: Low-rank matrix recovery via iteratively reweighted least squares minimization by Massimo Fornasier, Holger Rauhut, and Rachel Ward.
- RTRMC : A Riemannian trust-region method for low-rank matrix completion by Nicolas Boumal and Pierre-Antoine Absil
- Rank Minimization and Applications in System Theory by Maryam Fazel, H. Hindi, and Stephen Boyd
- Two Proposals for Robust PCA Using Semidefinite Programming by Michael McCoy and Joel Tropp
- Rank/norm regularization with closed-form solutions: Application to subspace clustering by Yao-Liang Yu and Dale Schuurmans.
- Approximation of rank function and its application to the nearest low-rank correlation matrix by Shujun Bi and Shaohua Pan.
- Rank Minimization via Online Learning by Raghu Meka, Prateek Jain, Constantine Caramanis, Inderjit S. Dhillon
- Clustered low rank approximation of graphs in information science applications by Berkant Savas, Inderjit S. Dhillon
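To make the nuclear-norm-versus-log-det discussion concrete, here is a minimal sketch, not any of the solvers listed above: plain singular value thresholding for the nuclear norm proxy, against an iteratively reweighted variant in the spirit of the log-det heuristic, where small singular values get shrunk harder. The weight formula tau*delta/(sigma+delta) comes from linearizing log det; the step size, iteration count, and weight normalization are my own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic rank-2 matrix, half of the entries observed.
n, r = 40, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
mask = rng.random((n, n)) < 0.5

def shrink(Y, thresholds):
    # Soft-threshold the singular values of Y (one threshold per value).
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - thresholds, 0.0)) @ Vt, s

def complete(M, mask, tau=2.0, delta=1.0, iters=500, reweight=False):
    # Proximal-gradient matrix completion.
    # reweight=False: plain nuclear-norm proxy (uniform shrinkage).
    # reweight=True : log-det-style proxy; small singular values are
    #                 shrunk harder via weights tau*delta/(sigma+delta).
    X = np.zeros_like(M)
    w = np.full(min(M.shape), tau)
    for _ in range(iters):
        G = X + mask * (M - X)      # gradient step on the observed entries
        X, s = shrink(G, w)
        if reweight:
            w = tau * delta / (s + delta)
    return X

for rw in (False, True):
    X = complete(M, mask, reweight=rw)
    err = np.linalg.norm(X - M) / np.linalg.norm(M)
    print("log-det" if rw else "nuclear", "relative error:", round(err, 4))
```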
One of the reasons for the success of the Donoho-Tanner phase transition is not just that it shows some uniformity of results across different measurement matrices, but also that it allows a direct comparison of different solvers/approaches to sparse recovery. As an example, the latest result on sparse recovery became obviously important when it could be compared directly against the then "gold standard" of the Donoho-Tanner phase transition (see also the results of Bob Sturm). A sketch of how such a diagram is computed empirically follows.
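For readers who have never computed one: a Donoho-Tanner-style diagram is obtained empirically by sweeping the undersampling ratio delta = m/n and the sparsity ratio rho = k/m, solving many random instances at each grid point, and recording the fraction of exact recoveries. A minimal sketch, using basis pursuit via a generic LP solver; the grid point, tolerance, and trial count are arbitrary choices of mine:

```python
import numpy as np
from scipy.optimize import linprog

def l1_recover(A, y):
    # Basis pursuit via LP: min ||x||_1 s.t. Ax = y, with x = u - v, u,v >= 0.
    m, n = A.shape
    c = np.ones(2 * n)
    A_eq = np.hstack([A, -A])
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    u, v = res.x[:n], res.x[n:]
    return u - v

def success_rate(n=50, delta=0.5, rho=0.3, trials=10, tol=1e-4, seed=0):
    # Fraction of exact recoveries at undersampling delta=m/n, sparsity rho=k/m.
    rng = np.random.default_rng(seed)
    m = int(delta * n)
    k = max(1, int(rho * m))
    wins = 0
    for _ in range(trials):
        A = rng.standard_normal((m, n)) / np.sqrt(m)
        x = np.zeros(n)
        x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
        xhat = l1_recover(A, A @ x)
        wins += np.linalg.norm(xhat - x) < tol * np.linalg.norm(x)
    return wins / trials

# Sweeping (delta, rho) over a grid and plotting the 50% success contour
# reproduces the Donoho-Tanner phase-transition curve empirically.
print(success_rate(delta=0.5, rho=0.2), success_rate(delta=0.5, rho=0.6))
```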
Similar phase transitions have been observed for these low-rank matrix recovery problems. Shouldn't there be some sort of standard against which most algorithm implementations ought to compare their results, so that we collectively build a better mental picture of what is really working? Because right now, to be honest, I can't. Why is this important? The message passing algorithms that have pretty much made a dent against the L_1 recovery used in compressive sensing are now, unsurprisingly, being applied to rank minimization [1]. If there is a revolution in rank minimization and nobody hears about it, is it a revolution?
2 comments:
Another norm with some good theoretical and empirical improvement (matrix completion / Netflix-type problems) over the nuclear norm is the max norm, which has gotten attention mostly through the work of Nati Srebro.
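For reference, a minimal statement of the max norm, as I understand it from the Srebro line of work, is the factorization characterization below; the nuclear norm has an analogous form, which makes the comparison natural:

$$\|X\|_{\max} = \min_{X = U V^\top} \|U\|_{2,\infty}\,\|V\|_{2,\infty}, \qquad \|X\|_{*} = \min_{X = U V^\top} \|U\|_{F}\,\|V\|_{F},$$

where $\|U\|_{2,\infty}$ denotes the largest row 2-norm of $U$.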
Thanks Vicente,
I nearly forgot about that one:
http://arxiv.org/pdf/1102.3923v2
Cheers,
Igor