
Thursday, January 05, 2012

SubMF changes to DFC, something new in ReProCS, and Martin Jaggi's thesis and code.

Back in September it was called SubMF; it is now DFC (from Divide-and-Conquer Matrix Factorization by Lester Mackey, Ameet Talwalkar, and Michael I. Jordan). It has been changed accordingly on the Matrix Factorization Jungle page.

On the ReProCS front, here is something new:
A large class of video sequences are composed of at least two layers – the foreground, which is a sparse image that often consists of one or more moving objects, and the background, which is a dense image that is either constant or changes gradually over time, with changes that are usually global. Thus the background sequence is well modeled as lying in a low-dimensional subspace that can slowly change with time, while the foreground is well modeled as a sparse “outlier” that changes in a correlated fashion over time (e.g., due to objects’ motion). Video layering can thus be posed as a robust principal components’ analysis (PCA) problem, with the difference that the “outlier” for PCA is also a signal-of-interest and needs to be recovered too. Real-time video layering then becomes a recursive robust PCA problem. Most existing recursive, or even batch, robust PCA algorithms fail when (a) there are too many nonzero foreground pixels; or when (b) the foreground pixels are significantly correlated spatially or temporally (in almost all cases foreground objects will not jump around randomly); or when (c) the foreground intensity is quite similar to that of the background. We propose a novel solution framework called Recursive Projected Compressive Sensing (ReProCS) that ensures robustness to all three issues.
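To make the low-rank plus sparse idea concrete, here is a minimal Python sketch of a batch robust PCA decomposition (a basic Principal Component Pursuit iteration), not the ReProCS algorithm itself; the toy "video" data and the parameter choices are illustrative assumptions on my part:

```python
# A minimal sketch of the low-rank + sparse split behind video layering:
# M ~ L (background, low rank) + S (foreground, sparse), via a basic
# Principal Component Pursuit iteration. Not the authors' ReProCS code.
import numpy as np

def soft_threshold(X, tau):
    """Entrywise shrinkage used for the sparse (foreground) term."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_threshold(X, tau):
    """Singular value shrinkage used for the low-rank (background) term."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def pcp(M, lam=None, mu=None, n_iter=200, tol=1e-7):
    """Split M into low-rank L and sparse S (minimize ||L||_* + lam*||S||_1)."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 0.25 * m * n / np.abs(M).sum()
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    for _ in range(n_iter):
        L = svd_threshold(M - S + Y / mu, 1.0 / mu)   # background update
        S = soft_threshold(M - L + Y / mu, lam / mu)  # foreground update
        R = M - L - S
        Y += mu * R                                   # dual update
        if np.linalg.norm(R) <= tol * np.linalg.norm(M):
            break
    return L, S

# Toy "video": each column is a vectorized frame = static background + sparse foreground.
rng = np.random.default_rng(0)
background = np.outer(rng.standard_normal(400), np.ones(50))   # rank-1 background
foreground = np.zeros((400, 50))
foreground[rng.integers(0, 400, 100), rng.integers(0, 50, 100)] = 5.0
L, S = pcp(background + foreground)
print(np.linalg.matrix_rank(L, tol=1e-3), np.count_nonzero(np.abs(S) > 1e-3))
```

This batch decomposition is exactly what ReProCS aims to do recursively, frame by frame, while staying robust to large, correlated foreground support.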

Finally, Martin Jaggi let me know his thesis is out:

hi igor: Just wanted to let you know that finally my thesis came online, just in case:
or alternatively here is the page (also linking to the content you already mentioned on the Matrix Factorization Jungle page).

Here is the thesis: Sparse Convex Optimization Methods for Machine Learning by Martin Jaggi. The abstract reads:

Convex optimization is at the core of many of today's analysis tools for large datasets, and in particular machine learning methods. In this thesis we will study the general setting of optimizing (minimizing) a convex function over a compact convex domain.

In the first part of this thesis, we study a simple iterative approximation algorithm for that class of optimization problems, based on the classical method by Frank & Wolfe. The algorithm only relies on supporting hyperplanes to the function that we need to optimize. In each iteration, we move slightly towards a point which (approximately) minimizes the linear function given by the supporting hyperplane at the current point, where the minimum is taken over the original optimization domain. In contrast to gradient-descent-type methods, this algorithm does not need any projection steps in order to stay inside the optimization domain. Our framework generalizes the sparse greedy algorithm of Frank & Wolfe and its recent primal-dual analysis by Clarkson (and the low-rank SDP approach by Hazan) to arbitrary compact convex domains. Analogously, we give a convergence proof guaranteeing ε-small error (which in our context is the duality gap) after O(1/ε) iterations.

This method allows us to understand the sparsity of approximate solutions for any ℓ1-regularized convex optimization problem (and for optimization over the simplex), expressed as a function of the approximation quality. Here we obtain matching upper and lower bounds of 1/ε for the sparsity. The same bounds apply to low-rank semidefinite optimization with bounded trace, showing that rank O(1/ε) is best possible here as well.

For some classes of geometric optimization problems, our algorithm has a simple geometric interpretation, which is also known as the coreset concept. Here we will study linear classifiers such as support vector machines (SVM) and perceptrons, as well as general distance computations between convex hulls (or polytopes). Here the framework will allow us to understand the sparsity of SVM solutions, here being the number of support vectors, in terms of the required approximation quality.
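As a rough illustration of the Frank-Wolfe step the abstract describes (and of why the iterates stay sparse), here is a small Python sketch over the unit simplex; this is my own toy code, not Martin's implementation, and the quadratic objective is an illustrative assumption:

```python
# A toy Frank-Wolfe / sparse greedy iteration for minimizing a smooth convex f
# over the probability simplex. Not Jaggi's code; just the idea from the abstract.
import numpy as np

def frank_wolfe_simplex(grad_f, dim, n_iter=200):
    """Each step moves towards a single vertex of the simplex, so after k steps
    the iterate has at most k+1 nonzero coordinates; no projection is needed."""
    x = np.zeros(dim)
    x[0] = 1.0                                  # start at a vertex
    for k in range(1, n_iter + 1):
        g = grad_f(x)
        s = np.zeros(dim)
        s[np.argmin(g)] = 1.0                   # linear minimizer over the simplex is a vertex
        gap = (x - s) @ g                       # duality gap surrogate certifying the error
        gamma = 2.0 / (k + 2)                   # classical step size
        x = (1 - gamma) * x + gamma * s         # move slightly towards s, staying in the domain
    return x, gap

# Example: minimize ||A x - b||^2 over the simplex.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((30, 100)), rng.standard_normal(30)
grad = lambda x: 2 * A.T @ (A @ x - b)
x, gap = frank_wolfe_simplex(grad, dim=100)
print("nonzeros:", np.count_nonzero(x > 1e-9), "duality gap:", gap)
```

The number of nonzeros printed at the end is bounded by the number of iterations, which is the sparsity-versus-accuracy trade-off the thesis quantifies.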



If you recall, Martin made his implementation available on his page. Thanks, Martin, and congratulations, Dr. Jaggi!

Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.

1 comment:

  1. I finally got around to reading Jaggi's thesis and his paper on Frank-Wolfe. Wow, it is really eye opening how Martin managed to integrate many areas and techniques under one unifying concept of FW and a simple duality gap surrogate.

    I highly recommend it.

    Cheers.
    Manny Ko
