## Monday, March 16, 2015

### DECOPT - DEcomposition Convex OPTimization - implementation -

Just noticed the following recent solver. From the main page:

## DECOPT - DEcomposition Convex OPTimization

### Overview

DECOPT is a MATLAB software package for solving the following generic constrained convex optimization problem:

$$\min_{x\in\mathbb{R}^p,\ r\in\mathbb{R}^n}\left\{ f(x) + g(r) \;:\; Ax - r = b,\ l \le x \le u \right\}, \qquad \text{(CP)}$$

where $f$ and $g$ are two proper, closed and convex functions, $A\in\mathbb{R}^{n\times p}$, $b\in\mathbb{R}^n$, and $l, u\in\mathbb{R}^p$ are the lower and upper bounds of $x$.
Here, we assume that $f$ and $g$ are proximally tractable. By proximal tractability, we mean that the proximal operator $\mathrm{prox}_{\varphi}$ of a given proper, closed and convex function $\varphi$:

$$\mathrm{prox}_{\varphi}(x) := \arg\min_{y\in\mathbb{R}^p}\left\{ \varphi(y) + \tfrac{1}{2}\|y - x\|_2^2 \right\}$$

is efficient to compute (e.g., by a closed-form solution or by polynomial-time algorithms).
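For example, the proximal operator of $\varphi(y) = \lambda\|y\|_1$ has a closed form: componentwise soft-thresholding. A minimal NumPy sketch (the function name `prox_l1` is my own, not part of the DECOPT package):

```python
import numpy as np

def prox_l1(x, lam):
    """Proximal operator of lam * ||.||_1: componentwise soft-thresholding.

    prox_{lam*||.||_1}(x) = sign(x) * max(|x| - lam, 0)
    """
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# Entries smaller than lam in magnitude are shrunk exactly to zero.
print(prox_l1(np.array([2.0, -0.3, 0.7]), 0.5))
```

This closed form is what makes $\ell_1$-regularized problems like the LASSO variants below cheap to handle per iteration.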
DECOPT is implemented by Quoc Tran-Dinh at the Laboratory for Information and Inference Systems (LIONS), EPFL, Lausanne, Switzerland. This is a joint work with Volkan Cevher at LIONS, EPFL.
DECOPT aims at solving (CP) for any convex functions $f$ and $g$ whose proximal operators are provided. The following special cases have been customized in DECOPT:
• Basis pursuit:

$$\min_x \left\{ \|x\|_1 \;:\; Ax = b,\ l \le x \le u \right\}.$$
• $\ell_1/\ell_2$-unconstrained LASSO problem:

$$\min_x \tfrac{1}{2}\|Ax - b\|_2^2 + \lambda \|x\|_1.$$
• $\ell_1/\ell_1$-convex problem:

$$\min_x \|Ax - b\|_1 + \lambda \|x\|_1.$$
• Square-root $\ell_1/\ell_2$ LASSO problem:

$$\min_x \|Ax - b\|_2 + \lambda \|x\|_1.$$
• $\ell_1/\ell_2$ - the basis pursuit denoising (BPDN) problem:

$$\min_x \left\{ \|x\|_1 \;:\; \|Ax - b\|_2 \le \delta \right\}.$$
• $\ell_2/\ell_1$ - the $\ell_1$-constrained LASSO problem:

$$\min_x \left\{ \tfrac{1}{2}\|Ax - b\|_2^2 \;:\; \|x\|_1 \le \delta \right\}.$$
Here, $\lambda > 0$ is a penalty parameter and $\delta > 0$ is a given level parameter.
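As a rough illustration of why proximal tractability matters, the $\ell_1/\ell_2$-unconstrained LASSO above can be minimized by a plain proximal-gradient (ISTA) loop that alternates a gradient step on the smooth term with the $\ell_1$ prox. This is only a baseline sketch under the same prox assumption, not DECOPT's primal-dual excessive-gap algorithm:

```python
import numpy as np

def ista_lasso(A, b, lam, n_iter=500):
    """Minimize (1/2)*||Ax - b||_2^2 + lam*||x||_1 via proximal gradient (ISTA)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - b) / L      # gradient step on the smooth term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # l1 prox step
    return x

# Toy example: with A = I the minimizer is the soft-thresholding of b.
print(ista_lasso(np.eye(3), np.array([3.0, 0.1, -2.0]), 1.0))
```

The loop only ever touches $f$ through its gradient and the $\ell_1$ term through its prox, which is exactly the interface (CP) assumes.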

And the references:

### References

The full algorithms and theory of DECOPT can be found in the following references:
• Q. Tran-Dinh and V. Cevher, "Constrained convex minimization via model-based excessive gap". Proceedings of the annual conference on Neural Information Processing Systems Foundation (NIPS), Montreal, Canada, (2014).
Available at: http://infoscience.epfl.ch/record/201842
Abstract: We introduce a model-based excessive gap technique to analyze first-order primal-dual methods for constrained convex minimization. As a result, we construct first-order primal-dual methods with optimal convergence rates on the primal objective residual and the primal feasibility gap of their iterates separately. Through a dual smoothing and prox-center selection strategy, our framework subsumes the augmented Lagrangian, alternating direction, and dual fast-gradient methods as special cases, where our rates apply.
• Q. Tran-Dinh and V. Cevher, "A Primal-Dual Algorithmic Framework for Constrained Convex Minimization", LIONS Tech. Report EPFL-REPORT-199844, (2014).
Available at: http://infoscience.epfl.ch/record/199844
Abstract: We present a primal-dual algorithmic framework to obtain approximate solutions to a prototypical constrained convex optimization problem, and rigorously characterize how common structural assumptions affect the numerical efficiency. Our main analysis technique provides a fresh perspective on Nesterov's excessive gap technique in a structured fashion and unifies it with smoothing and primal-dual methods. For instance, through the choices of a dual smoothing strategy and a center point, our framework subsumes decomposition algorithms, augmented Lagrangian as well as the alternating direction method-of-multipliers methods as its special cases, and provides optimal convergence rates on the primal objective residual as well as the primal feasibility gap of the iterates for all.

Join the CompressiveSensing subreddit or the Google+ Community and post there!
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.