Abstract: This short article analyzes an interesting property of the Bregman iterative procedure, which is equivalent to the augmented Lagrangian method, for minimizing a convex piece-wise linear function $J(x)$ subject to linear constraints $Ax = b$. The procedure obtains its solution by solving a sequence of unconstrained subproblems of minimizing $J(x) + \frac{1}{2}\|Ax - b^k\|_2^2$, where $b^k$ is iteratively updated. In practice, the subproblem at each iteration is solved at a relatively low accuracy. Let $w^k$ denote the error introduced by early stopping a subproblem solver at iteration $k$. We show that if all $w^k$ are sufficiently small so that Bregman iteration enters the optimal face, then while on the optimal face, Bregman iteration enjoys an interesting error-forgetting property: the distance between the current point $x^k$ and the optimal solution set $X^*$ is bounded by $\|w^{k+1} - w^k\|$, independent of the previous errors $w^{k-1}, w^{k-2}, \ldots, w^1$. This property partially explains why the Bregman iterative procedure works well for sparse optimization and, in particular, for $\ell_1$-minimization. The error-forgetting property is unique to $J(x)$ that is a piece-wise linear function (also known as a polyhedral function), and the results of this article appear to be new to the literature of the augmented Lagrangian method.
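For concreteness, the procedure the abstract refers to is the standard Bregman iteration for this problem class: each outer step solves the subproblem and then adds the constraint residual back into the right-hand side,

\[
  x^{k+1} \in \arg\min_x \; J(x) + \tfrac{1}{2}\|Ax - b^k\|_2^2,
  \qquad
  b^{k+1} = b^k + \big(b - Ax^{k+1}\big),
\]

and the error-forgetting property says that once the (inexact) iterates reach the optimal face,

\[
  \operatorname{dist}\big(x^k, X^*\big) \;\le\; \|w^{k+1} - w^k\|,
\]

with no dependence on the earlier errors $w^{k-1}, \ldots, w^1$.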
The demo code is here.
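For readers who want a quick feel for the procedure before opening the demo, here is a minimal Python sketch (not the authors' demo code) of Bregman iteration for basis pursuit, i.e. minimizing $\|x\|_1$ subject to $Ax = b$. The inner subproblem is deliberately solved inexactly with a fixed number of ISTA (soft-thresholding) steps, so each outer iteration carries an early-stopping error in the role of $w^k$; all parameter names and values (outer_iters, inner_iters, the test sizes) are illustrative.

```python
# A minimal sketch (not the authors' demo) of Bregman iteration for
# basis pursuit:  minimize ||x||_1  subject to  Ax = b.
# The subproblem  min_x ||x||_1 + (1/2)||Ax - b_k||_2^2  is solved
# inexactly by a fixed number of ISTA steps; the resulting early-stopping
# error plays the role of w^k in the abstract.
import numpy as np

def shrink(v, t):
    """Soft-thresholding: the proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def bregman_basis_pursuit(A, b, outer_iters=50, inner_iters=20):
    n = A.shape[1]
    x = np.zeros(n)
    b_k = b.astype(float)                    # iteratively updated right-hand side
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L for the quadratic term
    for _ in range(outer_iters):
        # Inexact subproblem solve: ISTA, stopped early after inner_iters steps.
        for _ in range(inner_iters):
            grad = A.T @ (A @ x - b_k)
            x = shrink(x - step * grad, step)
        # Bregman update: add the constraint residual back to the right-hand side.
        b_k += b - A @ x
    return x

# Tiny synthetic example: recover a 5-sparse vector from 40 measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[rng.choice(100, size=5, replace=False)] = rng.standard_normal(5)
b = A @ x_true
x_hat = bregman_basis_pursuit(A, b)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```

The add-the-residual-back update `b_k += b - A @ x` is what lets earlier errors drop out; as the abstract notes, the procedure is equivalent to the augmented Lagrangian method, where this step is the multiplier update.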