
Friday, October 03, 2008

CS: Convex Iteration Reconstruction Technique, Gradient based method for cone programming with application to large-scale compressed sensing

Today, we have two new reconstruction techniques with their attendant Matlab codes. The first algorithm appears to be fast and to require fewer CS measurements than other methods to reconstruct a known benchmark, while the second appears to be very fast compared with interior-point algorithms.

Jon Dattorro forwarded me an excerpt of his book on Convex Optimization [1] where he describes another reconstruction algorithm he calls Convex Iteration. The part of interest is in Chapter 4, pages 306 to 314. Jon also mentioned to me:

The most salient results of the paper comprise the Convex Iteration method for minimizing cardinality, and a new lower bound of 2% on image-gradient cardinality for the 256x256 Shepp-Logan phantom from the Matlab Image Processing Toolbox. Previously believed to be 3%, this new lower bound is a consequence of an adaptive strategy for computing the image gradient; specifically, the algorithm assumes that the number of pixels (between 1 and 4) involved in an image-gradient estimation is variable.
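To make the notion of image-gradient cardinality concrete, here is a quick back-of-the-envelope Matlab check using plain one-pixel forward differences, i.e. the conventional (non-adaptive) gradient estimate. The percentage it reports depends on the difference stencil and the threshold used, so it only illustrates the quantity being bounded, not the 2% result itself.

% Rough check of image-gradient sparsity for the Shepp-Logan phantom.
% Uses simple forward differences, not the adaptive strategy described above.
I  = phantom('Modified Shepp-Logan', 256);   % Image Processing Toolbox
Gx = diff(I, 1, 2);                          % horizontal differences
Gy = diff(I, 1, 1);                          % vertical differences
G  = hypot(Gx(1:end-1, :), Gy(:, 1:end-1));  % gradient magnitude on the common grid
fprintf('nonzero gradient pixels: %.1f%%\n', 100 * nnz(G > 1e-12) / numel(I));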


The Matlab code that realizes the algorithm (available here) is exceedingly fast by comparison with other contemporary methods. It runs on a laptop in a Matlab minute; its speed is due, in large part, to Josh Trzasko at Mayo Clinic.
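For readers who want a feel for the approach without opening the book, here is a minimal Matlab sketch of the convex-iteration idea on a toy cardinality-minimization problem (find a sparse nonnegative x with A*x = b). It is only an illustration under assumed problem sizes, not the released image-gradient code: each pass solves a linear program (Optimization Toolbox's linprog) and then updates the direction vector to put zero weight on the k largest entries.

% Minimal sketch of Convex Iteration for cardinality minimization,
% loosely following the idea in Chapter 4 of [1]. Toy sizes are assumptions.
m = 30; n = 100; k = 5;
A = randn(m, n);
xtrue = zeros(n, 1); xtrue(randperm(n, k)) = rand(k, 1) + 0.5;
b = A * xtrue;

y = ones(n, 1);                               % initial direction vector
for iter = 1:20
    % Convex step: minimize y'*x subject to A*x = b, x >= 0 (a linear program).
    x = linprog(y, [], [], A, b, zeros(n, 1), []);
    % Direction update: zero weight on the k largest entries, unit weight elsewhere,
    % so the next LP pushes the remaining entries toward zero.
    [~, idx] = sort(x, 'descend');
    y = ones(n, 1); y(idx(1:k)) = 0;
end
fprintf('recovered cardinality: %d (true: %d)\n', nnz(x > 1e-6), k);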

I very much like Jon's basic explanations in his book. He gave a similarly clear explanation earlier in "Trace: What is it good for? - How Compressed Sensing relates to Dimensionality Reduction".


The second technique comes from the paper Gradient based method for cone programming with application to large-scale compressed sensing, whose abstract reads:

In this paper, we study a gradient-based method for general cone programming (CP) problems. In particular, we first consider four natural primal-dual convex smooth minimization reformulations for them, and then discuss a variant of Nesterov's smooth (VNS) method recently proposed by Tseng [30] for solving these reformulations. The associated worst-case major arithmetic operation costs of the VNS method are estimated and compared for each reformulation. We show that for a class of CP problems, the VNS method based on the last reformulation generally outperforms the VNS method applied to the others. Finally, we discuss the application of the VNS method [30] to some large-scale CP problems arising in compressed sensing, which are highly challenging to simplex and/or interior point (IP) methods. The performance of this method is compared with the IP method [4] that is specially implemented for these problems. Our computational results demonstrate that for these CP problems, the VNS method [30] applied to the most suitable reformulation mentioned above substantially outperforms the IP method [4].


The attendant Matlab source codes for this paper are here.
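To give a flavor of what a smooth first-order (Nesterov-type) method looks like on a compressed sensing problem, here is a small Matlab sketch running an accelerated gradient iteration on a Huber-smoothed l1 penalty. It is emphatically not the VNS method of the paper, nor any of its four reformulations; the problem sizes, the penalty weight lambda, and the smoothing parameter mu are ad hoc assumptions.

% Sketch of an accelerated (Nesterov-type) gradient method on a smoothed
% l1 compressed-sensing problem: minimize 0.5*||A*x - b||^2 + lambda*huber_mu(x).
m = 60; n = 200; s = 8;
A = randn(m, n) / sqrt(m);
x0 = zeros(n, 1); x0(randperm(n, s)) = randn(s, 1);
b = A * x0;

lambda = 1e-2;  mu = 1e-3;                     % penalty and smoothing parameters
L = norm(A)^2 + lambda / mu;                   % Lipschitz constant of the gradient
huber_grad = @(x) max(min(x / mu, 1), -1);     % gradient of the Huber smoothing of |x|

x = zeros(n, 1); xprev = x;
for k = 1:2000
    yk = x + (k - 1) / (k + 2) * (x - xprev);  % momentum (extrapolation) step
    g  = A' * (A * yk - b) + lambda * huber_grad(yk);
    xprev = x;
    x = yk - g / L;                            % gradient step with 1/L stepsize
end
% Reported accuracy depends on lambda, mu and the iteration count.
fprintf('relative error: %.2e\n', norm(x - x0) / norm(x0));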


These algorithms as well as those mentioned yesterday are listed in the reconstruction section of the Big Picture.


Reference: [1] Jon Dattorro, Convex Optimization & Euclidean Distance Geometry.
