Just noticed the following recent solver. From the main page:

## DECOPT - DEcomposition Convex OPTimization

## Overview

DECOPT is a MATLAB software package for solving the following generic constrained convex optimization problem:

$$\min_{x\in\mathbb{R}^p,\, r\in\mathbb{R}^n}\left\{ f(x) + g(r) \;:\; Ax - r = b,\ l \le x \le u \right\}, \qquad \text{(CP)}$$

where $f$ and $g$ are two proper, closed and convex functions, $A\in\mathbb{R}^{n\times p}$, $b\in\mathbb{R}^n$, and $l, u \in\mathbb{R}^p$ are the lower and upper bounds of $x$. Here, we assume that $f$ and $g$ are proximally tractable. By proximal tractability, we mean that the proximal operator $\mathrm{prox}_{\varphi}$ of a given proper, closed and convex function $\varphi$,

$$\mathrm{prox}_{\varphi}(x) := \arg\min_{y\in\mathbb{R}^p}\left\{ \varphi(y) + \tfrac{1}{2}\|y - x\|_2^2 \right\},$$

is efficient to compute (e.g., by a closed-form solution or by polynomial-time algorithms). DECOPT is implemented by Quoc Tran-Dinh at the Laboratory for Information and Inference Systems (LIONS), EPFL, Lausanne, Switzerland. This is a joint work with Volkan Cevher at LIONS, EPFL. DECOPT aims at solving (CP) for any convex functions $f$ and $g$ whose proximal operators are provided. The following special cases have been customized in DECOPT:
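As a concrete illustration of proximal tractability (this is not DECOPT code), the prox of $\varphi(y) = \lambda\|y\|_1$ has the well-known closed-form "soft-thresholding" solution, which is what makes the $\ell_1$-based special cases below cheap to handle. A minimal pure-Python sketch:

```python
def soft_threshold(x, lam):
    """Closed-form prox of phi(y) = lam*||y||_1 (soft-thresholding):
    each coordinate is shrunk toward zero by lam, then clipped at zero."""
    return [(abs(v) - lam) * (1.0 if v > 0 else -1.0) if abs(v) > lam else 0.0
            for v in x]

# Coordinates with magnitude below lam are set exactly to zero.
print([round(v, 4) for v in soft_threshold([3.0, -0.5, 1.2], 1.0)])
# [2.0, 0.0, 0.2]
```

No iterative subproblem is needed, which is exactly the sense of "efficient to compute" above.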

- Basis pursuit: $\min_x \{\|x\|_1 : Ax = b,\ l \le x \le u\}$.
- $\ell_1/\ell_2$ unconstrained LASSO problem: $\min_x \tfrac{1}{2}\|Ax - b\|_2^2 + \lambda\|x\|_1$.
- $\ell_1/\ell_1$ convex problem: $\min_x \|Ax - b\|_1 + \lambda\|x\|_1$.
- Square-root $\ell_1/\ell_2$ LASSO problem: $\min_x \|Ax - b\|_2 + \lambda\|x\|_1$.
- $\ell_1/\ell_2$ basis pursuit denoising (BPDN) problem: $\min_x \{\|x\|_1 : \|Ax - b\|_2 \le \delta\}$.
- $\ell_2/\ell_1$ $\ell_1$-constrained LASSO problem: $\min_x \{(1/2)\|Ax - b\|_2^2 : \|x\|_1 \le \delta\}$.

Here, $\lambda > 0$ is a penalty parameter and $\delta > 0$ is a given level parameter.
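To make the mapping onto the template concrete, here is a hedged sketch (illustrative names, not DECOPT's API) of how the unconstrained $\ell_1/\ell_2$ LASSO above fits (CP): take $f(x) = \lambda\|x\|_1$, $g(r) = \tfrac{1}{2}\|r\|_2^2$, and the constraint $Ax - r = b$, so that at feasibility $r = Ax - b$ and $f(x) + g(r)$ recovers the LASSO objective:

```python
def matvec(A, x):
    """Dense matrix-vector product over plain Python lists."""
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

def lasso_obj(A, b, x, lam):
    """Direct LASSO objective: (1/2)||Ax - b||_2^2 + lam*||x||_1."""
    r = [ai - bi for ai, bi in zip(matvec(A, x), b)]
    return 0.5 * sum(v * v for v in r) + lam * sum(abs(v) for v in x)

def split_obj(A, b, x, lam):
    """Template (CP) objective f(x) + g(r) at the feasible r = Ax - b."""
    r = [ai - bi for ai, bi in zip(matvec(A, x), b)]  # Ax - r = b
    f = lam * sum(abs(v) for v in x)                  # f(x) = lam*||x||_1
    g = 0.5 * sum(v * v for v in r)                   # g(r) = (1/2)||r||_2^2
    return f + g

A = [[1.0, 2.0], [0.0, 1.0]]
b = [1.0, -1.0]
x = [0.5, -0.25]
assert abs(lasso_obj(A, b, x, 0.1) - split_obj(A, b, x, 0.1)) < 1e-12
```

Both $f$ and $g$ are proximally tractable here, so this instance meets the assumptions of (CP); the other special cases decompose along the same lines with different choices of $f$ and $g$.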

And the references:

## References

The full algorithms and theory of DECOPT can be found in the following references:

- Q. Tran-Dinh and V. Cevher, "Constrained convex minimization via model-based excessive gap", Proceedings of the Annual Conference on Neural Information Processing Systems (NIPS), Montreal, Canada, 2014. Available at: http://infoscience.epfl.ch/record/201842

Abstract: We introduce a model-based excessive gap technique to analyze first-order primal-dual methods for constrained convex minimization. As a result, we construct first-order primal-dual methods with optimal convergence rates on the primal objective residual and the primal feasibility gap of their iterates separately. Through a dual smoothing and prox-center selection strategy, our framework subsumes the augmented Lagrangian, alternating direction, and dual fast-gradient methods as special cases, where our rates apply.

- Q. Tran-Dinh and V. Cevher, "A Primal-Dual Algorithmic Framework for Constrained Convex Minimization", LIONS Tech. Report EPFL-REPORT-199844, 2014. Available at: http://infoscience.epfl.ch/record/199844

Abstract: We present a primal-dual algorithmic framework to obtain approximate solutions to a prototypical constrained convex optimization problem, and rigorously characterize how common structural assumptions affect the numerical efficiency. Our main analysis technique provides a fresh perspective on Nesterov's excessive gap technique in a structured fashion and unifies it with smoothing and primal-dual methods. For instance, through the choices of a dual smoothing strategy and a center point, our framework subsumes decomposition algorithms, augmented Lagrangian as well as the alternating direction method-of-multipliers methods as its special cases, and provides optimal convergence rates on the primal objective residual as well as the primal feasibility gap of the iterates for all.

