
Monday, May 07, 2012

Error Forgetting of Bregman Iteration (demo)

The following paper and its attendant demo/code "explain why solving Bregman subproblems at low accuracies (1e-6) gives a Bregman solution at near the machine precision (1e-15)."


Error Forgetting of Bregman Iteration by Wotao Yin, Stanley Osher. The abstract reads:
This short article analyzes an interesting property of the Bregman iterative procedure, which is equivalent to the augmented Lagrangian method, for minimizing a convex piecewise linear function $J(x)$ subject to linear constraints $Ax = b$. The procedure obtains its solution by solving a sequence of unconstrained subproblems of minimizing $J(x) + \frac{1}{2}\|Ax - b^k\|_2^2$, where $b^k$ is iteratively updated. In practice, the subproblem at each iteration is solved at a relatively low accuracy. Let $w^k$ denote the error introduced by early stopping a subproblem solver at iteration $k$. We show that if all $w^k$ are sufficiently small so that Bregman iteration enters the optimal face, then while on the optimal face, Bregman iteration enjoys an interesting error-forgetting property: the distance between the current point $x^k$ and the optimal solution set $X^*$ is bounded by $\|w^{k+1} - w^k\|$, independent of the previous errors $w^{k-1}, w^{k-2}, \ldots, w^1$. This property partially explains why the Bregman iterative procedure works well for sparse optimization and, in particular, for $\ell_1$-minimization. The error-forgetting property is unique to $J(x)$ that is a piecewise linear function (also known as a polyhedral function), and the results of this article appear to be new to the literature of the augmented Lagrangian method.

The demo code is here.
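
For readers who want a quick feel for the procedure without downloading the demo, here is a minimal Python sketch of Bregman iteration for basis pursuit, min ||x||_1 s.t. Ax = b, where each subproblem is solved only loosely with ISTA. This is not the authors' demo code; the solver choice, the penalty parameter mu, and the tolerance sub_tol are my own illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, mu, x0, tol=1e-6, max_iter=5000):
    # Approximately solve the subproblem  min_x ||x||_1 + (mu/2)*||Ax - b||^2
    # with proximal gradient (ISTA), stopped at a relatively loose tolerance.
    L = mu * np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the smooth part
    x = x0.copy()
    for _ in range(max_iter):
        grad = mu * A.T @ (A @ x - b)
        x_new = soft_threshold(x - grad / L, 1.0 / L)
        if np.linalg.norm(x_new - x) <= tol * max(np.linalg.norm(x), 1.0):
            return x_new
        x = x_new
    return x

def bregman_l1(A, b, mu=10.0, outer_iters=30, sub_tol=1e-6):
    # Bregman iteration for  min ||x||_1  s.t.  Ax = b:
    # add the constraint residual back into the right-hand side,
    # then re-solve the (inexactly solved) subproblem.
    m, n = A.shape
    x = np.zeros(n)
    bk = np.zeros(m)
    for _ in range(outer_iters):
        bk = bk + (b - A @ x)                 # b^{k+1} = b^k + (b - A x^k)
        x = ista(A, bk, mu, x, tol=sub_tol)   # inexact subproblem solve
    return x

if __name__ == "__main__":
    # Small synthetic sparse-recovery instance.
    rng = np.random.default_rng(0)
    m, n, s = 60, 200, 8
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
    b = A @ x_true
    x_hat = bregman_l1(A, b)
    print("constraint residual:", np.linalg.norm(A @ x_hat - b))
    print("recovery error:", np.linalg.norm(x_hat - x_true))
```

The sketch is only meant to let one poke at the paper's claim: loosen or tighten sub_tol and compare it with the final constraint residual, which, per the error-forgetting property, can end up far smaller than the accuracy at which each individual subproblem was solved.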
