Friday, November 02, 2012

Compressive Phase Retrieval via Generalized Approximate Message Passing - implementation -

Following yesterday's entry about CPRL, Phil Schniter sent me the following:

Hi Igor,

Since you seem to be blogging about phase-retrieval lately, your readers may be interested in our GAMP approach from Allerton 2012:

Note that Tables III-IV suggest that PR-GAMP has a much better phase transition than CPRL. We used the CVX implementation of CPRL that was available at the time we wrote the paper. Because it was slow, our experiments with CPRL were very limited. I plan to try the new ADMM implementation soon.

As for the theory of sparse phase-retrieval, your readers may be interested in the recent work


Thanks Phil  

In this paper, we propose a novel approach to compressive phase retrieval based on loopy belief propagation and, in particular, on the generalized approximate message passing (GAMP) algorithm. Numerical results show that the proposed PR-GAMP algorithm has excellent phase-transition behavior, noise robustness, and runtime. In particular, for successful recovery of synthetic Bernoulli-circular-Gaussian signals, PR-GAMP requires ≈ 4 times the number of measurements as a phase-oracle version of GAMP and, at moderate to large SNR, the NMSE of PR-GAMP is only ≈ 3 dB worse than that of phase-oracle GAMP. A comparison to the recently proposed convex-relaxation approach known as “CPRL” reveals PR-GAMP’s superior phase transition and orders-of-magnitude faster runtimes, especially as the problem dimensions increase. When applied to the recovery of a 65k-pixel grayscale image from 32k randomly masked magnitude measurements, numerical results show a median PR-GAMP runtime of only 13.4 seconds.
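For readers who want to experiment, here is a minimal sketch of the measurement model the abstract describes: a Bernoulli-circular-Gaussian sparse signal observed through magnitude-only measurements. It assumes NumPy; the dimensions and the i.i.d. complex-Gaussian measurement matrix are illustrative stand-ins, not the paper's exact masked-Fourier setup.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, k = 256, 128, 8  # signal length, measurements, sparsity (illustrative)

# Bernoulli-circular-Gaussian signal: k nonzero circular complex-Gaussian entries
x = np.zeros(n, dtype=complex)
support = rng.choice(n, size=k, replace=False)
x[support] = (rng.standard_normal(k) + 1j * rng.standard_normal(k)) / np.sqrt(2)

# Complex Gaussian measurement matrix A (a stand-in for the randomly masked
# measurements used in the paper)
A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)

# Phase retrieval observes only the magnitudes |Ax|; all phase is lost
y = np.abs(A @ x)
```

Recovering the sparse `x` from `y` alone (up to a global phase) is exactly the task PR-GAMP and CPRL address.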

Phil also tells me that PR-GAMP is part of the GAMP Matlab package.

In this paper we consider a system of quadratic equations |⟨z_j, x⟩|^2 = b_j, j = 1, ..., m, where x ∈ R^n is unknown while the normal random vectors z_j ∈ R^n and quadratic measurements b_j ∈ R are known. The system is assumed to be underdetermined, i.e., m < n. We prove that if there exists a sparse solution x, i.e., at most k components of x are non-zero, then by solving a convex optimization program we can solve for x up to a multiplicative constant with high probability, provided that k ≤ O(√(m / log n)). On the other hand, we prove that k ≤ O(√m · log n) is necessary for a class of naive convex relaxations to be exact.
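The phrase "up to a multiplicative constant" is intrinsic to quadratic measurements: in the real case, x and −x produce identical data, so no method can distinguish them. A small NumPy sketch (dimensions and sparsity chosen for illustration) makes this concrete:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 64, 32  # underdetermined system: m < n (illustrative sizes)

# A sparse real signal x with 4 nonzero entries
x = np.zeros(n)
x[rng.choice(n, size=4, replace=False)] = rng.standard_normal(4)

Z = rng.standard_normal((m, n))  # rows are the normal random vectors z_j
b = (Z @ x) ** 2                 # quadratic measurements b_j = <z_j, x>^2

# The sign of x is unobservable: -x yields exactly the same measurements
b_neg = (Z @ (-x)) ** 2
print(np.allclose(b, b_neg))  # True
```

In the complex case the ambiguity is a unimodular constant e^{iθ} rather than just a sign, which is why the recovery guarantees are stated modulo such a constant.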
