
Monday, December 03, 2012

PDRank: Penalty Decomposition Methods for Rank Minimization (and more) - implementation -



Penalty Decomposition Methods for Rank Minimization by Zhaosong Lu and Yong Zhang. The abstract reads:
In this paper we consider general rank minimization problems with rank appearing in either the objective function or the constraint. We first establish that a class of special rank minimization problems has closed-form solutions. Using this result, we then propose penalty decomposition methods for general rank minimization problems in which each subproblem is solved by a block coordinate descent method. Under some suitable assumptions, we show that any accumulation point of the sequence generated by the penalty decomposition methods satisfies the first-order optimality conditions of a nonlinear reformulation of the problems. Finally, we test the performance of our methods by applying them to the matrix completion and nearest low-rank correlation matrix problems. The computational results demonstrate that our methods are generally comparable or superior to the existing methods in terms of solution quality.
The PDRank package can be downloaded here. Also of interest, the PDSparse package is here.
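The "closed-form solutions" mentioned in the abstract rest on a classical fact: projecting a matrix onto the set of matrices of rank at most r amounts to a truncated SVD (Eckart-Young). A minimal NumPy sketch of that projection (function name and example are mine, not from the PDRank package):

```python
import numpy as np

def truncated_svd_projection(A, r):
    """Minimize ||X - A||_F subject to rank(X) <= r.

    By the Eckart-Young theorem, the minimizer keeps only the r largest
    singular values of A and zeroes out the rest.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s[r:] = 0.0  # discard all but the r leading singular values
    return U @ (s[:, None] * Vt)

# Project a random 6x5 matrix onto the rank-2 matrices.
A = np.random.default_rng(0).standard_normal((6, 5))
X = truncated_svd_projection(A, 2)
```

This projection is exactly the kind of subproblem a penalty decomposition scheme can solve in closed form at each block-coordinate step.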

Other papers from the same author include: Iterative Hard Thresholding Methods for l0 Regularized Convex Cone Programming by Zhaosong Lu. The abstract reads:

In this paper we consider l0 regularized convex cone programming problems. In particular, we first propose an iterative hard thresholding (IHT) method and its variant for solving l0 regularized box constrained convex programming. We show that the sequence generated by these methods converges to a local minimizer. Also, we establish the iteration complexity of the IHT method for finding an ε-local-optimal solution. We then propose a method for solving l0 regularized convex cone programming by applying the IHT method to its quadratic penalty relaxation and establish its iteration complexity for finding an ε-approximate local minimizer. Finally, we propose a variant of this method in which the associated penalty parameter is dynamically updated, and show that every accumulation point is a local minimizer of the problem.
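The core of an IHT iteration is the hard-thresholding operator, which is the proximal map of the l0 penalty: entries below a threshold are zeroed, larger entries pass through unchanged. A toy sketch for l0-regularized least squares (this is my illustration of the generic IHT template, not the paper's exact algorithm, and the function names are assumptions):

```python
import numpy as np

def hard_threshold(x, lam, t):
    """Proximal map of t*lam*||x||_0: zero out entries whose magnitude
    is below sqrt(2*lam*t); keep larger entries unchanged."""
    y = x.copy()
    y[np.abs(y) < np.sqrt(2.0 * lam * t)] = 0.0
    return y

def iht_l0_least_squares(A, b, lam, steps=200):
    """Toy IHT loop for min 0.5*||Ax - b||^2 + lam*||x||_0."""
    t = 1.0 / np.linalg.norm(A, 2) ** 2  # step size from the Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        # Gradient step on the smooth part, then hard thresholding.
        x = hard_threshold(x - t * A.T @ (A @ x - b), lam, t)
    return x
```

Each iteration alternates a gradient step on the smooth term with the closed-form l0 proximal step, which is what makes the per-iteration cost so low.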
 

In this paper we study general lp regularized unconstrained minimization problems. In particular, we derive lower bounds for nonzero entries of first- and second-order stationary points, and hence also of local minimizers of the lp minimization problems. We extend some existing iterative reweighted l1 (IRL1) and l2 (IRL2) minimization methods to solve these problems and propose new variants for them in which each subproblem has a closed-form solution. Also, we provide a unified convergence analysis for these methods. In addition, we propose a novel Lipschitz continuous ε-approximation to ||x||_p^p. Using this result, we develop new IRL1 methods for the lp minimization problems and show that any accumulation point of the sequence generated by these methods is a first-order stationary point, provided that the approximation parameter ε is below a computable threshold value. This is a remarkable result since all existing iterative reweighted minimization methods require that ε be dynamically updated and approach zero. Our computational results demonstrate that the new IRL1 method is generally more stable than the existing IRL1 methods [21, 18] in terms of objective function value and CPU time.
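The IRL1 idea is to replace the nonconvex |x_i|^p term with a weighted l1 term whose weights are recomputed from the current iterate, smoothed by a parameter ε so the weights stay bounded. A small sketch for the separable denoising case, where each weighted-l1 subproblem reduces to soft thresholding (the smoothed weight formula and function names are my assumptions, not taken from the paper):

```python
import numpy as np

def soft_threshold(z, w):
    """Closed-form minimizer of 0.5*(x - z)^2 + w*|x|, elementwise."""
    return np.sign(z) * np.maximum(np.abs(z) - w, 0.0)

def irl1_denoise(z, p=0.5, lam=0.1, eps=0.05, iters=20):
    """Sketch of IRL1 for min 0.5*||x - z||^2 + lam * sum_i |x_i|^p.

    Each pass linearizes |x_i|^p at the current iterate with the smoothed
    weight w_i = lam * p / (|x_i| + eps)^(1 - p), then solves the resulting
    weighted-l1 subproblem in closed form by soft thresholding.
    """
    x = z.copy()
    for _ in range(iters):
        w = lam * p / (np.abs(x) + eps) ** (1.0 - p)
        x = soft_threshold(z, w)
    return x
```

Because ε > 0 keeps the weights finite, the iteration is well defined even when entries of x hit zero, which is the point of the Lipschitz-continuous approximation discussed in the abstract.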






Join our Reddit Experiment, Join the CompressiveSensing subreddit and post there !
Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.
