
Wednesday, September 07, 2011

Matrix Factorization This Week.

First, we start today with a different approach to computing solutions to nuclear norm minimization and other convex optimization problems: Convex Optimization without Projection Steps by Martin Jaggi. The abstract reads:
We study the general problem of minimizing a convex function over a compact convex domain. We will investigate a simple iterative approximation algorithm that does not need projection steps in order to stay inside the optimization domain. Instead of a projection step, the linearized problem defined by a current subgradient is solved, which gives a step direction that will naturally stay in the domain. The approach generalizes the sparse greedy algorithm of Clarkson (and the low-rank SDP solver by Hazan) to arbitrary convex domains, and to using subgradients for the case of non-differentiable convex functions. Analogously, we give a convergence proof guaranteeing {\epsilon}-small duality gap after O(1/{\epsilon}) iterations. The framework allows us to understand the sparsity of approximate solutions for any l1-regularized convex optimization problem, expressed as a function of the approximation quality. We obtain matching upper and lower bounds of {\Theta}(1/{\epsilon}) for the sparsity for l1-problems. The same bounds apply to low-rank optimization with bounded trace, showing that rank O(1/{\epsilon}) is best possible here as well. As a third application, we obtain sparse matrices of O(1/{\epsilon}) non-zero entries as {\epsilon}-approximate solutions when optimizing any convex function over a class of diagonally dominant symmetric matrices. We show that the proposed first-order optimization technique also applies to nuclear norm and max-norm matrix optimization problems. For nuclear norm regularized optimization problems such as matrix completion and low-rank recovery, we demonstrate the practical efficiency and scalability of our algorithm for large matrix problems, such as e.g. the Netflix dataset. For convex optimization over bounded matrix max-norm, our algorithm is the first with a convergence guarantee, to the best of our knowledge.
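To make the projection-free idea a bit more concrete, here is a minimal sketch (mine, not Martin's code) of a Frank-Wolfe-type iteration over the nuclear-norm ball: the linearized subproblem is solved by a rank-one matrix built from the top singular pair of the gradient, and the convex-combination step keeps the iterate inside the domain. The function names and the dense SVD are simplifications; the paper uses an approximate eigenvector computation instead.

```python
import numpy as np

def frank_wolfe_nuclear_ball(grad_f, shape, tau, iters=200):
    # Sketch of a projection-free (Frank-Wolfe / Hazan-style) iteration for
    # min f(X) over {X : ||X||_* <= tau}. grad_f(X) returns the gradient of f
    # at X; a dense SVD stands in for the approximate eigenvector step.
    m, n = shape
    X = np.zeros((m, n))
    for k in range(iters):
        G = grad_f(X)
        # Linearized subproblem over the nuclear-norm ball: its minimizer is
        # the rank-one matrix -tau * u1 v1^T from the top singular pair of G.
        U, s, Vt = np.linalg.svd(G, full_matrices=False)
        S = -tau * np.outer(U[:, 0], Vt[0, :])
        gamma = 2.0 / (k + 2.0)            # step size behind the O(1/eps) guarantee
        X = (1.0 - gamma) * X + gamma * S  # convex combination stays in the domain
    return X

# Toy usage: matrix completion with squared loss on the observed entries
# (zeros denote missing entries in this tiny example).
M = np.array([[5.0, 3.0, 0.0], [4.0, 0.0, 1.0], [0.0, 2.0, 1.0]])
mask = M != 0
grad = lambda X: (X - M) * mask
X_hat = frank_wolfe_nuclear_ball(grad, M.shape, tau=10.0)
```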
After I talked to Martin, he reminded me of the following earlier paper: A Simple Algorithm for Nuclear Norm Regularized Problems by Martin Jaggi and Marek Sulovský. The abstract reads:
Optimization problems with a nuclear norm regularization, such as e.g. low norm matrix factorizations, have seen many applications recently. We propose a new approximation algorithm building upon the recent sparse approximate SDP solver of (Hazan, 2008). The experimental efficiency of our method is demonstrated on large matrix completion problems such as the Netflix dataset. The algorithm comes with strong convergence guarantees, and can be interpreted as a first theoretically justified variant of Simon-Funk-type SVD heuristics. The method is free of tuning parameters, and very easy to parallelize.
If you want to try it out, Martin made a java implementation available here. This paper/implementation has been added to the Matrix Factorization Jungle page. From the website:
Implementation: The .java file below contains the algorithm we used for the experiments, using the power-method as the internal eigenvector procedure. The code is not yet completely self-contained, as the input data (the known/observed entries of the matrix which we want to complete) come in different formats for different datasets. The algorithm just assumes that all known entries are stored in a sequential list already. The algorithm never stores a full matrix, and therefore is able to scale to e.g. the size of the Netflix dataset.
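As a rough illustration of how the internal eigenvector step can work without ever forming a dense matrix, here is a small power-method sketch (my own, not the Java code above) that only touches the sequential list of known entries; the triple-list interface is an assumption made for the example.

```python
import numpy as np

def top_singular_triple(rows, cols, vals, shape, iters=100, seed=0):
    # Power method on A^T A using only the stored (row, col, value) triples,
    # so no dense matrix is ever built. Returns an approximate top singular
    # triple (u, sigma, v) of the sparse matrix A of the given shape.
    m, n = shape
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    for _ in range(iters):
        u = np.zeros(m)
        for i, j, a in zip(rows, cols, vals):   # u = A v
            u[i] += a * v[j]
        w = np.zeros(n)
        for i, j, a in zip(rows, cols, vals):   # w = A^T u
            w[j] += a * u[i]
        v = w / np.linalg.norm(w)
    u = np.zeros(m)
    for i, j, a in zip(rows, cols, vals):       # final u = A v
        u[i] += a * v[j]
    sigma = np.linalg.norm(u)
    return u / sigma, sigma, v
```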
Thanks Martin!

In the sorts of problems reminiscent of those tackled by Stefano Marchesini (he is mentioned in the acknowledgments), here is an approach that casts these problems as low-rank/matrix completion issues. This time, the solver seems to be using nuclear norm minimization as a proxy for the rank functional, even though one of the authors (Yonina Eldar) had pointed out that this specific proxy was not working in another area of optical imaging. Over time, I predict we are going to have specialized proxies for different areas of physics and engineering, ain't that going to be swell :-). Without further ado, here is: Phase Retrieval via Matrix Completion by Emmanuel Candes, Yonina Eldar, Thomas Strohmer, Vlad Voroninski. The abstract reads:
This paper develops a novel framework for phase retrieval, a problem which arises in X-ray crystallography, diffraction imaging, astronomical imaging and many other applications. Our approach combines multiple structured illuminations together with ideas from convex programming to recover the phase from intensity measurements, typically from the modulus of the diffracted wave. We demonstrate empirically that any complex-valued object can be recovered from the knowledge of the magnitude of just a few diffracted patterns by solving a simple convex optimization problem inspired by the recent literature on matrix completion. More importantly, we also demonstrate that our noise-aware algorithms are stable in the sense that the reconstruction degrades gracefully as the signal-to-noise ratio decreases. Finally, we introduce some theory showing that one can design very simple structured illumination patterns such that three diffracted figures uniquely determine the phase of the object we wish to recover.
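For intuition on how intensity-only measurements become a matrix problem, here is a tiny lifting sketch under simplifying assumptions: a real-valued signal, random (not structured-illumination) measurement vectors, and a generic SDP solver through CVXPY. It only illustrates the shape of the convex program, not the authors' algorithm or measurement design.

```python
import numpy as np
import cvxpy as cp

# Lifting sketch: |<a_k, x>|^2 = tr(a_k a_k^T X) with X = x x^T, so intensity
# measurements become linear constraints on a PSD matrix; minimizing the trace
# (the nuclear norm on PSD matrices) favors a rank-one solution.
rng = np.random.default_rng(0)
n, K = 8, 40
x_true = rng.standard_normal(n)
A = rng.standard_normal((K, n))          # illustrative random measurement vectors
b = (A @ x_true) ** 2                    # intensity-only data

X = cp.Variable((n, n), PSD=True)
constraints = [cp.sum(cp.multiply(np.outer(A[k], A[k]), X)) == b[k] for k in range(K)]
cp.Problem(cp.Minimize(cp.trace(X)), constraints).solve()

# If X is (near) rank one, its top eigenpair recovers x up to a global sign.
w, V = np.linalg.eigh(X.value)
x_hat = np.sqrt(max(w[-1], 0.0)) * V[:, -1]
```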
Next up is a paper that casts photometric stereo as a low-rank matrix recovery problem. The abstract reads:
We present a new approach to robustly solve the photometric stereo problem. We cast the problem of recovering surface normals from images taken under multiple lighting conditions as one of recovering a low-rank matrix from missing and corrupted observations, which model many different non-Lambertian effects such as shadows and specularities. Unlike previous approaches that use least-squares or heuristic robust techniques, our method uses an advanced convex optimization technique that is guaranteed to find the correct low-rank matrix by simultaneously fixing its missing and erroneous entries. Extensive experimental results demonstrate that our method achieves unprecedentedly accurate estimates of surface normals in the presence of a significant amount of shadows and specularities. The new technique can be used to improve virtually any existing photometric stereo method. We not only showcase its effectiveness in the case of photometric stereo, but also extend our method to perform classification of photometric factors. In contrast to existing robust techniques, our method is computationally more efficient and provides theoretical guarantees for robustness to large errors.


The attendant site is here.

Next comes an abstract on separating a low-rank component from a sparse one, i.e. robust principal component analysis:
This article is about a curious phenomenon. Suppose we have a data matrix, which is the superposition of a low-rank component and a sparse component. Can we recover each component individually? We prove that under some suitable assumptions, it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit; among all feasible decompositions, simply minimize a weighted combination of the nuclear norm and of the l1 norm. This suggests the possibility of a principled approach to robust principal component analysis since our methodology and results assert that one can recover the principal components of a data matrix even though a positive fraction of its entries are arbitrarily corrupted. This extends to the situation where a fraction of the entries are missing as well. We discuss an algorithm for solving this optimization problem, and present applications in the area of video surveillance, where our methodology allows for the detection of objects in a cluttered background, and in the area of face recognition, where it offers a principled way of removing shadows and specularities in images of faces.
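For readers who want to try the decomposition itself, here is a hedged CVXPY sketch of Principal Component Pursuit on synthetic data; the 1/sqrt(max dimension) weight is the usual choice from this line of work, and a generic solver stands in for the specialized algorithms the paper discusses.

```python
import numpy as np
import cvxpy as cp

# Principal Component Pursuit on a synthetic low-rank + sparse matrix:
# minimize ||L||_* + lambda * ||S||_1 subject to L + S = M.
rng = np.random.default_rng(1)
m, n, r = 30, 30, 2
L_true = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
S_true = np.zeros((m, n))
corrupt = rng.random((m, n)) < 0.05              # a few grossly corrupted entries
S_true[corrupt] = 10.0 * rng.standard_normal(corrupt.sum())
M = L_true + S_true

lam = 1.0 / np.sqrt(max(m, n))                   # standard weighting for PCP
L = cp.Variable((m, n))
S = cp.Variable((m, n))
prob = cp.Problem(cp.Minimize(cp.normNuc(L) + lam * cp.sum(cp.abs(S))),
                  [L + S == M])
prob.solve()
# L.value and S.value now hold the recovered low-rank and sparse components.
```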

Next is an abstract on $s$-goodness conditions for exact low-rank matrix recovery via nuclear norm minimization:
Low-rank matrix recovery (LMR) is a rank minimization problem subject to linear equality constraints, and it arises in many fields such as signal and image processing, statistics, computer vision, system identification and control. This class of optimization problems is $NP$-hard and a popular approach replaces the rank function with the nuclear norm of the matrix variable. In this paper, we extend the concept of $s$-goodness for a sensing matrix in sparse signal recovery (proposed by Juditsky and Nemirovski [Math Program, 2011]) to linear transformations in LMR. Then, we give characterizations of $s$-goodness in the context of LMR. Using the two characteristic $s$-goodness constants, ${\gamma}_s$ and $\hat{\gamma}_s$, of a linear transformation, not only do we derive necessary and sufficient conditions for a linear transformation to be $s$-good, but also provide sufficient conditions for exact and stable $s$-rank matrix recovery via the nuclear norm minimization under mild assumptions. Moreover, we give computable upper bounds for one of the $s$-goodness characteristics which leads to verifiable sufficient conditions for exact low-rank matrix recovery.


Blind Multiband Spectrum Signals Reconstruction by Hao Shen, Thomas Arildsen, Deepaknath Tandur, and Torben Larsen. The abstract reads:
This paper investigates sparse sampling techniques applied to downsampling and interference detection for multiband radio frequency (RF) signals. To reconstruct a signal from sparse samples is a compressive sensing problem. This paper compares three different reconstruction algorithms: 1) l_1 minimization; 2) greedy pursuit; and 3) MUltiple SIgnal Classification (MUSIC). We compare the performance of these algorithms and investigate the robustness to noise effects. Characteristics and limitations of each algorithm are discussed.
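As a pointer for the "greedy pursuit" option in that comparison, here is a bare-bones orthogonal matching pursuit sketch; it is generic compressive-sensing code, not the authors' multiband-specific implementation.

```python
import numpy as np

def omp(Phi, y, k):
    # Orthogonal matching pursuit: greedily pick the column of Phi most
    # correlated with the residual, refit by least squares, repeat k times.
    support, residual = [], y.copy()
    x = np.zeros(Phi.shape[1])
    coef = np.array([])
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x[support] = coef
    return x
```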



For those of you interested, there is a conference near Zurich at the end of the month, High-Dimensional Problems in Statistics, September 19 - 23, 2011, that features some of the recent work on matrix factorizations.


Photo credit: NASA, Apollo 14 tracks and remains on the moon as photographed by the LRO mission. More here on LRO.

Liked this entry? Subscribe to the Nuit Blanche feed, there's more where that came from.
