Thursday, May 29, 2014

On the Convergence of Approximate Message Passing with Arbitrary Matrices

This discussion on LinkedIn reminded me how late I am in featuring certain things here on Nuit Blanche. In short, we know that AMP algorithms are pretty good both in terms of speed and in terms of pushing the sharp phase transition into new territory.


What we don't really understand is why they work well in some cases and fail completely in others. For instance, I am aware of one instance in Uncertainty Quantification where AMP really is bad compared to a "simpler" convex L1 minimization approach.


Anyway, here is a new stab at the problem that might help in designing the right parameters within AMP to fit specific measurement matrices.



Approximate message passing (AMP) methods and their variants have attracted considerable recent attention for the problem of estimating a random vector x observed through a linear transform A. In the case of large i.i.d. A, the methods exhibit fast convergence with precise analytic characterizations of the algorithm behavior. However, the convergence of AMP under general transforms is not fully understood. In this paper, we provide sufficient conditions for the convergence of a damped version of the generalized AMP (GAMP) algorithm in the case of Gaussian distributions. It is shown that, with sufficient damping, the algorithm can be guaranteed to converge, but the amount of damping grows with the peak-to-average ratio of the squared singular values of A. This condition explains the good performance of AMP methods on i.i.d. matrices, but also their difficulties with other classes of transforms. A related sufficient condition is then derived for the local stability of the damped GAMP method under more general (possibly non-Gaussian) distributions, assuming certain strict convexity conditions.
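The key quantity in the result is the peak-to-average ratio of the squared singular values of A. Here is a minimal numpy sketch (mine, not from the paper) that computes this ratio and compares an i.i.d. Gaussian matrix with an ill-conditioned one; the matrix sizes and the decaying spectrum are arbitrary choices for illustration.

```python
import numpy as np

def peak_to_average_ratio(A):
    """Peak-to-average ratio of the squared singular values of A."""
    s2 = np.linalg.svd(A, compute_uv=False) ** 2
    return s2.max() / s2.mean()

rng = np.random.default_rng(0)
m, n = 200, 400

# i.i.d. Gaussian matrix: the squared singular values are tightly spread,
# so the ratio stays small and little damping should be needed.
A_iid = rng.standard_normal((m, n)) / np.sqrt(m)
print("i.i.d. Gaussian :", peak_to_average_ratio(A_iid))

# Matrix with a rapidly decaying spectrum: the ratio is much larger,
# the regime where, per the abstract, a lot more damping is required.
U, _ = np.linalg.qr(rng.standard_normal((m, m)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.logspace(0, -3, m)                     # condition number 1e3
A_ill = U @ np.diag(s) @ V[:, :m].T
print("ill-conditioned :", peak_to_average_ratio(A_ill))
```

On a typical draw, the i.i.d. Gaussian matrix gives a ratio of a few units while the decaying-spectrum matrix gives a ratio an order of magnitude larger, which is consistent with the abstract's explanation of why AMP behaves so differently on the two classes of matrices.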
I note from the conclusion:

 The required amount of damping is related to the peak-to-average ratio of the squared singular values of the transform matrix. However, much remains unanswered: Most importantly, we have yet to derive a condition for global convergence even in the case of strictly convex functions. Secondly, our analysis assumes the use of fixed stepsizes. Third, short of computing the peak-to-average singular-value ratio, we proposed no method to compute the damping constants. Hence, an adaptive method may be useful in practice.
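Since the authors note that an adaptive way of setting the damping constants may be useful in practice, here is a toy sketch of what damping with a crude adaptive rule looks like. To be clear, this is my own illustration, not the GAMP recursion from the paper: F is a made-up linear map and the halving rule is the simplest heuristic I could think of.

```python
import numpy as np

def damped_iteration(F, x0, theta=1.0, max_iter=1000, tol=1e-8):
    """Toy damped fixed-point iteration x <- (1 - theta) * x + theta * F(x),
    with a crude adaptive rule that halves theta whenever the update grows.
    This is NOT the GAMP recursion, only an illustration of damping."""
    x = np.asarray(x0, dtype=float).copy()
    prev_step = np.inf
    for _ in range(max_iter):
        x_next = (1.0 - theta) * x + theta * F(x)
        step = np.linalg.norm(x_next - x)
        if step > prev_step:      # the update grew: damp more aggressively
            theta *= 0.5
        prev_step, x = step, x_next
        if step < tol:
            break
    return x, theta

# A linear map whose undamped iteration diverges (spectral radius > 1)
# but whose damped version converges once theta is small enough.
rng = np.random.default_rng(1)
A = rng.standard_normal((60, 30)) / 4.0
b = rng.standard_normal(30)
F = lambda x: x + b - A.T @ (A @ x)    # fixed point solves A^T A x = b

x_hat, theta_final = damped_iteration(F, np.zeros(30))
print("final damping factor:", theta_final)
print("residual norm       :", np.linalg.norm(A.T @ (A @ x_hat) - b))
```

The undamped iteration blows up, the rule keeps halving theta until the updates shrink, and the iteration then settles on the fixed point: the same trade-off the paper describes, where more damping buys stability at the price of speed.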



Image Credit: NASA/JPL/Space Science Institute, N00224160.jpg was taken on May 22, 2014 and received on Earth May 24, 2014. The camera was pointing toward RHEA at approximately 1,201,511 miles (1,933,644 kilometers) away, and the image was taken using the CL1 and CL2 filters. 


Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

