Friday, July 05, 2013

Fast greedy algorithm for subspace clustering from corrupted and incomplete data and Optimization Online this month:

Alexander Petukhov just sent me the following

Dear Igor,

I want to attract the attention of Nuit Blanche readers to a new paper submitted a month ago to arXiv.
I think I'll be able to prepare the Matlab code at the beginning of the Fall.

Best regards,
Alex
Thanks Alex, I look forward to the implementation. Here is the paper: Fast greedy algorithm for subspace clustering from corrupted and incomplete data by Alexander Petukhov and Inna Kozlov

We describe the Fast Greedy Sparse Subspace Clustering (FGSSC) algorithm, an efficient method for clustering data belonging to a few low-dimensional linear or affine subspaces. The main difference between our algorithm and its predecessors is its ability to work with noisy data having a high rate of erasures (missed entries with known coordinates) and errors (corrupted entries with unknown coordinates). We discuss how to implement, with maximum efficiency, the fast version of the greedy algorithm, whose greedy strategy is incorporated into the iterations of the basic algorithm.
We provide numerical evidence that, in subspace clustering capability, the fast greedy algorithm outperforms not only the existing state-of-the-art SSC algorithm, taken by the authors as the basic algorithm, but also the recent GSSC algorithm. At the same time, its computational cost is only slightly higher than that of SSC.
Numerical evidence of the algorithm's significant advantage is presented for a few synthetic models as well as for the Extended Yale B dataset of facial images. In particular, the face recognition misclassification rate turned out to be 6-20 times lower than for the SSC algorithm. We also provide numerical evidence that the FGSSC algorithm is able to cluster corrupted data efficiently even when the sum of the subspace dimensions significantly exceeds the dimension of the ambient space.
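While we wait for the FGSSC code, here is a minimal sketch of the plain SSC idea the paper builds on: write each data point as a sparse combination of the other points, then run spectral clustering on the resulting affinity matrix. This is not the authors' algorithm, just background; the helper name `ssc_cluster` and all parameter choices are mine.

```python
# Minimal SSC-style sketch (not the authors' FGSSC): sparse self-representation
# of each point, then spectral clustering on the symmetrized coefficient matrix.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cluster import SpectralClustering

def ssc_cluster(X, n_clusters, lam=0.01):
    """Hypothetical helper: X is a d x n data matrix, one point per column."""
    d, n = X.shape
    C = np.zeros((n, n))
    for i in range(n):
        idx = [j for j in range(n) if j != i]
        # sparse representation of point i in terms of the other points
        lasso = Lasso(alpha=lam, fit_intercept=False, max_iter=5000)
        lasso.fit(X[:, idx], X[:, i])
        C[idx, i] = lasso.coef_
    W = np.abs(C) + np.abs(C).T                     # symmetric affinity matrix
    return SpectralClustering(n_clusters=n_clusters,
                              affinity='precomputed').fit_predict(W)

# toy example: two 2-dimensional subspaces in R^20, 30 points each
rng = np.random.default_rng(0)
X = np.hstack([rng.standard_normal((20, 2)) @ rng.standard_normal((2, 30)),
               rng.standard_normal((20, 2)) @ rng.standard_normal((2, 30))])
print(ssc_cluster(X, n_clusters=2))
```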


From Optimization Online this month, here are a few papers of interest:

Hui Zhang(hhuuii.zhang***at***gmail.com)
Ming Yan(yanm***at***ucla.edu)
Wotao Yin(wotao.yin***at***rice.edu)

Abstract: The l1-synthesis and l1-analysis models recover structured signals from their undersampled measurements. The solution of the former model is often a sparse sum of dictionary atoms, and that of the latter model often makes sparse correlations with dictionary atoms. This paper addresses the question: when can we trust these models to recover specific signals? We answer the question with a necessary and sufficient condition that guarantees the recovery to be unique and exact and that also guarantees the recovery is robust in the presence of measurement noise. The condition is one-for-all in the sense that it applies to both the l1-synthesis and l1-analysis models, and to both their constrained and unconstrained formulations. Furthermore, a convex infinity-norm program is introduced for numerically verifying the condition. Comparisons with related existing conditions are included.
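For readers who want the two models spelled out, here is the standard constrained form of each program, in notation of my own choosing (y the measurements, A the measurement matrix, D the dictionary); the unconstrained formulations mentioned in the abstract replace the equality constraint with a quadratic penalty:

```latex
% \ell_1-synthesis: the signal x = Dz is synthesized from sparse dictionary coefficients z
\hat{z} \in \arg\min_{z} \; \|z\|_1 \quad \text{s.t.} \quad A D z = y, \qquad \hat{x} = D \hat{z}

% \ell_1-analysis: the analysis coefficients D^{*} x of the signal are sparse
\hat{x} \in \arg\min_{x} \; \|D^{*} x\|_1 \quad \text{s.t.} \quad A x = y
```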

Euhanna Ghadimi (euhanna***at***kth.se)
André Teixeira (andretei***at***kth.se)
Iman Shames (iman.shames***at***unimelb.edu.au)
Mikael Johansson (mikaelj***at***kth.se)

Abstract: The alternating direction method of multipliers (ADMM) has emerged as a powerful technique for large-scale structured optimization. Despite many recent results on the convergence properties of ADMM, a quantitative characterization of the impact of the algorithm parameters on the convergence times of the method is still lacking. In this paper we find the optimal algorithm parameters that minimize the convergence factor of the ADMM iterates in the context of l2-regularized minimization and constrained quadratic programming. Numerical examples show that our parameter selection rules significantly outperform existing alternatives in the literature.
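To see where the parameter in question lives, here is a bare-bones scaled-form ADMM sketch for an l2-regularized least-squares problem; the penalty parameter rho below is the kind of knob whose optimal tuning the paper characterizes. The splitting, parameter values, and function name are my own illustration, not the paper's selection rules.

```python
# Scaled-form ADMM sketch for min_x 0.5*||A x - b||^2 + (delta/2)*||x||^2,
# split as f(x) + g(z) subject to x = z. rho is the ADMM penalty parameter.
import numpy as np

def admm_l2(A, b, delta=1.0, rho=1.0, n_iter=200):
    m, n = A.shape
    x = z = u = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    L_chol = np.linalg.cholesky(AtA + rho * np.eye(n))       # factor once, reuse
    for _ in range(n_iter):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L_chol.T, np.linalg.solve(L_chol, rhs))  # x-update
        z = rho * (x + u) / (delta + rho)                    # z-update (prox of g)
        u = u + x - z                                        # scaled dual update
    return x

A = np.random.default_rng(1).standard_normal((50, 20))
b = A @ np.ones(20)
x_hat = admm_l2(A, b, delta=0.1, rho=2.0)
# sanity check against the closed-form ridge solution
x_star = np.linalg.solve(A.T @ A + 0.1 * np.eye(20), A.T @ b)
print(np.linalg.norm(x_hat - x_star))
```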


Kimon Fountoulakis(K.Fountoulakis***at***sms.ed.ac.uk)
Jacek Gondzio(J.Gondzio***at***ed.ac.uk)

Abstract: In this paper a second-order method for the solution of large-scale strongly convex L1-regularized problems is developed. The proposed method is a Newton-CG (Conjugate Gradients) algorithm with backtracking line-search embedded in a doubly-continuation scheme. Worst-case iteration complexity of the proposed Newton-CG is established. Based on the analysis of Newton-CG, the worst-case iteration complexity of the doubly-continuation scheme is obtained. Numerical results are presented on large-scale problems for the doubly-continuation Newton-CG algorithm, which show that the proposed second-order method competes favourably with state-of-the-art first-order methods.
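As a rough illustration of the continuation-plus-Newton-CG idea (not the authors' algorithm), the sketch below smooths the l1 term with a pseudo-Huber function and warm-starts SciPy's Newton-CG solver across a decreasing smoothing parameter; the objective, schedule, and parameter values are assumptions of mine.

```python
# Continuation + Newton-CG sketch for min_x tau*||x||_1 + 0.5*||A x - b||^2:
# smooth |x_i| with sqrt(x_i^2 + mu^2) - mu, solve with Newton-CG for each mu,
# and warm-start the next, smaller mu. Illustrative only.
import numpy as np
from scipy.optimize import minimize

def smoothed_objective(A, b, tau, mu):
    def f(x):
        r = A @ x - b
        huber = np.sum(np.sqrt(x**2 + mu**2) - mu)           # pseudo-Huber of |x_i|
        return tau * huber + 0.5 * r @ r
    def grad(x):
        return tau * x / np.sqrt(x**2 + mu**2) + A.T @ (A @ x - b)
    return f, grad

def continuation_newton_cg(A, b, tau=0.1, mus=(1.0, 0.1, 0.01, 0.001)):
    x = np.zeros(A.shape[1])
    for mu in mus:                                           # continuation loop
        f, g = smoothed_objective(A, b, tau, mu)
        x = minimize(f, x, jac=g, method='Newton-CG').x      # warm start
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100); x_true[:5] = 1.0
x_hat = continuation_newton_cg(A, A @ x_true, tau=0.05)
print(np.round(x_hat[:8], 3))
```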

Zhaosong Lu(zhaosong***at***sfu.ca)
Lin Xiao(lin.xiao***at***microsoft.com)

Abstract: In this paper we propose a randomized block coordinate non-monotone gradient (RBCNMG) method for minimizing the sum of a smooth (possibly nonconvex) function and a block-separable (possibly nonconvex nonsmooth) function. At each iteration, this method randomly picks a block according to any prescribed probability distribution and typically solves several associated proximal subproblems that usually have a closed-form solution, until a certain sufficient reduction of the objective function is achieved. We show that the sequence of expected objective values generated by this method converges to the expected value of the limit of the objective value sequence produced by a random single run. Moreover, the solution sequence generated by this method is arbitrarily close to an approximate stationary point with high probability. When the problem under consideration is convex, we further establish that the sequence of expected objective values generated by this method converges to the optimal value of the problem. In addition, the objective value sequence is arbitrarily close to the optimal value with high probability. We also conduct some preliminary experiments to test the performance of our RBCNMG method on the $\ell_1$-regularized least-squares problem. The computational results demonstrate that our method substantially outperforms the randomized block coordinate descent method proposed in Richtarik and Takac (Math Programming 2012).
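For context, here is a minimal randomized block coordinate proximal-gradient sketch for the $\ell_1$-regularized least-squares test problem mentioned at the end of the abstract. It picks blocks uniformly at random and omits the non-monotone acceptance test that is central to RBCNMG, so it should be read as background rather than as the paper's method; all names and parameters are mine.

```python
# Randomized block coordinate proximal-gradient sketch for
# min_x 0.5*||A x - b||^2 + lam*||x||_1 (soft-thresholding on a random block).
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def random_block_prox_grad(A, b, lam=0.1, block_size=10, n_iter=5000, seed=0):
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    blocks = [np.arange(s, min(s + block_size, n)) for s in range(0, n, block_size)]
    x = np.zeros(n)
    L = np.linalg.norm(A, 2) ** 2                        # conservative Lipschitz constant
    for _ in range(n_iter):
        blk = blocks[rng.integers(len(blocks))]          # pick a block at random
        g = A[:, blk].T @ (A @ x - b)                    # partial gradient
        x[blk] = soft_threshold(x[blk] - g / L, lam / L) # prox step on that block
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((60, 120))
x_true = np.zeros(120); x_true[::24] = 2.0
print(np.nonzero(random_block_prox_grad(A, A @ x_true, lam=0.05) > 0.5)[0])
```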

Francesco Rinaldi (rinaldi***at***math.unipd.it)
Margherita Porcelli (margherita.porcelli***at***isti.cnr.it)

Abstract: The problem of finding sparse solutions to underdetermined systems of linear equations is very common in many fields, such as signal/image processing and statistics. A standard tool for sparse recovery is the $\ell_1$-regularized least-squares approach, which has recently been attracting the attention of many researchers. In this paper, we describe a new version of the two-block nonlinear constrained Gauss-Seidel algorithm for solving $\ell_1$-regularized least squares that at each step of the iteration process fixes some variables to zero according to a simple rule. We prove the global convergence of the method and report numerical results on some test problems showing the efficiency of the implemented algorithm.
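To give a flavour of the two-block splitting (not the paper's algorithm), the sketch below rewrites $\ell_1$-regularized least squares with the usual positive/negative split x = u - v, u, v >= 0, and updates the two blocks Gauss-Seidel style with projected gradient steps; the paper's variable-fixing rule and exact block minimizations are not reproduced, and all parameters are mine.

```python
# Two-block sketch for min_{u,v >= 0} 0.5*||A(u - v) - b||^2 + lam*sum(u + v),
# updating u then v in each sweep with a projected gradient step.
import numpy as np

def two_block_gauss_seidel(A, b, lam=0.1, n_sweeps=2000):
    n = A.shape[1]
    u = np.zeros(n); v = np.zeros(n)
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    for _ in range(n_sweeps):
        r = A @ (u - v) - b
        u = np.maximum(u - step * (A.T @ r + lam), 0.0)   # update block u, project onto u >= 0
        r = A @ (u - v) - b                               # recompute residual with the new u
        v = np.maximum(v - step * (-A.T @ r + lam), 0.0)  # update block v, project onto v >= 0
    return u - v

rng = np.random.default_rng(4)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100); x_true[[3, 30, 70]] = [1.0, -2.0, 1.5]
print(np.round(two_block_gauss_seidel(A, A @ x_true, lam=0.02)[[3, 30, 70]], 2))
```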

Yair Censor(yair***at***math.haifa.ac.il)
Ioana Pantelimon(ioanapantelimon***at***gmail.com)
Constantin Popa(cpopa1956***at***gmail.com)

Abstract: In constraining iterative processes, the algorithmic operator of the iterative process is pre-multiplied by a constraining operator at each iterative step. This enables the constrained algorithm not only to solve the original problem but also to find a solution that incorporates some prior knowledge about the solution. This approach has been useful in image restoration and other image processing situations when a single constraining operator was used. In the field of image reconstruction from projections, a priori information about the original image, such as smoothness or membership in a certain closed convex set, may be used to improve the reconstruction quality. We study here the constraining of iterative processes by a family of operators rather than by a single operator.
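A minimal single-operator example of the setting described above: a Landweber iteration in which every iterate is passed through a constraining operator, here taken to be the projection onto the nonnegative orthant. The family-of-operators generalization studied in the paper is not shown, and the setup below is my own toy example.

```python
# Constrained Landweber sketch for A x ≈ b: each gradient step is followed by
# a constraining operator, here the projection onto x >= 0.
import numpy as np

def constrained_landweber(A, b, n_iter=500):
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + step * A.T @ (b - A @ x)   # Landweber (gradient) step
        x = np.maximum(x, 0.0)             # constraining operator: nonnegativity projection
    return x

rng = np.random.default_rng(5)
A = rng.standard_normal((30, 40))
x_true = np.abs(rng.standard_normal(40))
print(np.linalg.norm(A @ constrained_landweber(A, A @ x_true) - A @ x_true))
```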

Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.
