This is a little late, but as I was updating the Big Picture, I realized that I had missed listing an implementation of a compressive sensing solver based on the l_0 norm, along with a similar rank minimization solver. This is another sign that sparse signal recovery and matrix factorization share many insights. So without further ado, here are the papers and attendant implementations.
Penalty Decomposition Methods for $L0$-Norm Minimization by Zhaosong Lu and Yong Zhang. The abstract reads:
In this paper we consider general l0-norm minimization problems, that is, the problems with l0-norm appearing in either objective function or constraint. In particular, we first reformulate the l0-norm constrained problem as an equivalent rank minimization problem and then apply the penalty decomposition (PD) method proposed in  to solve the latter problem. By utilizing the special structures, we then transform all matrix operations of this method to vector operations and obtain a PD method that only involves vector operations. Under some suitable assumptions, we establish that any accumulation point of the sequence generated by the PD method satisfies a first-order optimality condition that is generally stronger than one natural optimality condition. We further extend the PD method to solve the problem with the l0-norm appearing in objective function. Finally, we test the performance of our PD methods by applying them to compressed sensing, sparse logistic regression and sparse inverse covariance selection. The computational results demonstrate that our methods generally outperform the existing methods in terms of solution quality and/or speed.
The attendant MATLAB code is here.
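To give a feel for the penalty decomposition idea in the compressed sensing setting, here is a minimal NumPy sketch, not the authors' code: the l_0-constrained problem is split with a copy variable, the least-squares subproblem has a closed form, the projection onto k-sparse vectors is hard thresholding, and the penalty parameter grows between outer rounds. The function name, the quadratic data term, and the rho schedule are my own illustrative choices.

```python
import numpy as np

def pd_l0_least_squares(A, b, k, rho=1.0, rho_growth=2.0,
                        inner_iters=50, outer_iters=20):
    """Penalty decomposition sketch for: min ||Ax - b||^2  s.t.  ||x||_0 <= k.

    Introduce a copy y of x and penalize rho * ||x - y||^2, then alternate:
      x-step: unconstrained regularized least squares (closed form),
      y-step: projection onto k-sparse vectors by hard thresholding (closed form),
    increasing rho between outer iterations so that x and y are driven together.
    """
    m, n = A.shape
    y = np.zeros(n)
    AtA = A.T @ A
    Atb = A.T @ b
    for _ in range(outer_iters):
        for _ in range(inner_iters):
            # x-step: minimize ||Ax - b||^2 + rho * ||x - y||^2
            x = np.linalg.solve(AtA + rho * np.eye(n), Atb + rho * y)
            # y-step: keep only the k largest-magnitude entries of x
            y = np.zeros(n)
            idx = np.argsort(np.abs(x))[-k:]
            y[idx] = x[idx]
        rho *= rho_growth
    return y
```

On a typical noiseless Gaussian instance (e.g. a 50 x 100 sensing matrix and a 4-sparse signal), this sketch returns a k-sparse vector whose residual is small when recovery succeeds; it is meant only to illustrate the alternating closed-form structure the abstract describes.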
Penalty Decomposition Methods for Rank Minimization by Zhaosong Lu and Yong Zhang. The abstract reads:
In this paper we consider general rank minimization problems with rank appearing in either objective function or constraint. We first show that a class of matrix optimization problems can be solved as lower dimensional vector optimization problems. As a consequence, we establish that a class of rank minimization problems have closed form solutions. Using this result, we then propose penalty decomposition methods for general rank minimization problems in which each subproblem is solved by a block coordinate descent method. Under some suitable assumptions, we show that any accumulation point of the sequence generated by our method when applied to the rank constrained minimization problem is a stationary point of a nonlinear reformulation of the problem. Finally, we test the performance of our methods by applying them to matrix completion and nearest low-rank correlation matrix problems. The computational results demonstrate that our methods generally outperform the existing methods in terms of solution quality and/or speed.
The attendant MATLAB code is here.
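The rank-constrained case has the same alternating structure, with the sparse projection replaced by a closed-form rank projection (truncated SVD, by Eckart-Young). Below is an illustrative NumPy sketch for matrix completion, again not the authors' implementation; the function name, the masked least-squares data term, and the rho schedule are my own assumptions.

```python
import numpy as np

def pd_rank_completion(M_obs, mask, r, rho=1.0, rho_growth=2.0,
                       inner_iters=50, outer_iters=15):
    """Penalty decomposition sketch for matrix completion:
        min ||mask * (X - M)||_F^2  s.t.  rank(X) <= r.

    Split X with a copy Y, penalize rho * ||X - Y||_F^2, and alternate:
      X-step: entrywise closed form of the penalized data-fit term,
      Y-step: best rank-r approximation of X via truncated SVD (closed form),
    increasing rho between outer iterations.
    """
    Y = np.zeros_like(M_obs)
    for _ in range(outer_iters):
        for _ in range(inner_iters):
            # X-step: minimize mask*(X - M)^2 + rho*(X - Y)^2 entry by entry
            X = np.where(mask, (M_obs + rho * Y) / (1.0 + rho), Y)
            # Y-step: truncated SVD keeps the r largest singular values
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            Y = (U[:, :r] * s[:r]) @ Vt[:r]
        rho *= rho_growth
    return Y
```

On a small synthetic test (a rank-2 30 x 30 matrix with about 70% of entries observed), this sketch fills in the missing entries to within a few percent relative error, which is all it is meant to show: both papers' methods hinge on subproblems with closed-form solutions, hard thresholding in the vector case and truncated SVD in the matrix case.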
Liked this entry? Subscribe to the Nuit Blanche feed, there's more where that came from.