
Thursday, October 10, 2013

Phase Retrieval: Covariance Estimation, Sparse Signals

Here are some recent phase retrieval papers:



Statistical inference and information processing of high-dimensional data streams and random processes often require efficient and accurate estimation of their second-order statistics. With rapidly changing data and limited storage, it is desirable to extract the covariance structure from a single pass over the data stream. In this paper, we explore a quadratic random sampling model which imposes minimal memory requirements and low computational complexity during the sampling process, and is shown to be optimal in preserving low-dimensional covariance structures. Specifically, two popular structural assumptions on covariance matrices, namely sparsity and low rank, are investigated. We show that a covariance matrix with either structure can be perfectly recovered from a minimal number of sub-Gaussian quadratic measurements via an efficient convex relaxation tailored to the respective structure.
The proposed convex optimization algorithm has a variety of potential applications in large-scale data stream processing, high-frequency wireless communication, phase space tomography in optics, non-coherent subspace detection, etc. By introducing a novel notion of mixed-norm restricted isometry property (RIP-$\ell_{2}/\ell_{1}$), we show that our method admits accurate and universal recovery in the absence of noise as soon as the number of measurements exceeds the theoretical sampling limit. We also show that this approach is robust to noise and imperfect structural assumptions, i.e., it admits high-accuracy recovery even when the covariance matrix is only approximately low-rank or sparse. Our methods are inspired by recent breakthroughs in phase retrieval, and the analysis framework herein recovers and improves upon the best-known phase retrieval guarantees with simpler proofs.
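
As a rough illustration of the quadratic sampling model in this abstract, here is a small Python sketch (my own illustration, not the authors' code; the dimensions and the use of cvxpy are assumptions) that forms measurements $y_i = a_i^T \Sigma a_i$ of a low-rank covariance matrix and recovers it by trace minimization over the PSD cone, in the spirit of the convex relaxation described above:

```python
# Minimal sketch, not the authors' implementation: recover a low-rank
# covariance matrix Sigma from quadratic measurements y_i = a_i^T Sigma a_i
# via trace minimization over the PSD cone. Dimensions and solver choices
# are illustrative assumptions.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, r, m = 20, 2, 300                        # dimension, rank, number of measurements

U = rng.standard_normal((n, r))
Sigma = U @ U.T                             # ground-truth rank-r covariance

A = rng.standard_normal((m, n))             # sub-Gaussian (here Gaussian) sampling vectors
y = np.einsum('ij,jk,ik->i', A, Sigma, A)   # y_i = a_i^T Sigma a_i

S = cp.Variable((n, n), PSD=True)                        # PSD optimization variable
quad = cp.sum(cp.multiply(A @ S, A), axis=1)             # a_i^T S a_i for every i
prob = cp.Problem(cp.Minimize(cp.trace(S)), [quad == y])
prob.solve()

print("relative error:",
      np.linalg.norm(S.value - Sigma) / np.linalg.norm(Sigma))
```

For the sparse-covariance structure one would swap the trace objective for an entrywise $\ell_1$ norm; either way this is only a sketch of the kind of convex program the abstract refers to, not the exact formulation of the paper.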

We consider the problem of sparse phase retrieval, where a $k$-sparse signal ${\bf x} \in {\mathbb R}^n$ (or ${\mathbb C}^n$) is measured as ${\bf y} = |{\bf Ax}|$, where ${\bf A} \in {\mathbb R}^{m \times n}$ (or ${\mathbb C}^{m \times n}$, respectively) is a measurement matrix and $|\cdot|$ is the element-wise absolute value. For a real signal and a real measurement matrix ${\bf A}$, we show that $m = 2k$ measurements are necessary and sufficient to recover ${\bf x}$ uniquely. For a complex signal ${\bf x} \in {\mathbb C}^n$ and ${\bf A} \in {\mathbb C}^{m \times n}$, we show that $m = 4k-1$ phaseless measurements are sufficient to recover ${\bf x}$. It is known that the multiplicative constant 4 in $m = 4k-1$ cannot be improved.
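
To make the real-case counting statement concrete, the following toy sketch (my own illustration, not from the paper; the dimensions are arbitrary) generates $m = 2k$ phaseless measurements of a $k$-sparse real signal and brute-forces all sign patterns and supports to confirm that, for this random instance, $\pm{\bf x}$ are the only consistent $k$-sparse solutions:

```python
# Toy brute-force check, not the paper's algorithm: for a k-sparse real
# signal and a generic real A with m = 2k rows, enumerate every sign
# pattern of y = |Ax| and every size-k support, and verify that the only
# consistent k-sparse solutions are +x and -x.
import itertools
import numpy as np

rng = np.random.default_rng(1)
n, k = 6, 2
m = 2 * k

x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n))
y = np.abs(A @ x)

solutions = []
for signs in itertools.product([-1.0, 1.0], repeat=m):
    b = np.array(signs) * y
    for supp in itertools.combinations(range(n), k):
        cols = list(supp)
        z_s, *_ = np.linalg.lstsq(A[:, cols], b, rcond=None)
        if np.allclose(A[:, cols] @ z_s, b, atol=1e-8):
            z = np.zeros(n)
            z[cols] = z_s
            solutions.append(z)

print(all(np.allclose(z, x) or np.allclose(z, -x) for z in solutions))
```

This only checks a single random instance; the paper's claim is a uniform statement over all $k$-sparse signals, and the complex $m = 4k-1$ bound is not amenable to this kind of enumeration.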

The aim of this paper is to build a theoretical framework for the recovery of sparse signals from the magnitudes of the measurements. We first investigate the minimal number of measurements needed for the successful recovery of sparse signals without phase information. We completely settle the minimality question for the real case and give a lower bound for the complex case. We then study the recovery performance of $\ell_1$ minimization. In particular, we present a null space property which, to our knowledge, is the first sufficient and necessary condition for the success of $\ell_1$ minimization in $k$-sparse phase retrieval.
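
In the real case the $\ell_1$ program $\min \|{\bf z}\|_1$ subject to $|{\bf Az}| = {\bf y}$ is non-convex, but a tiny instance can be solved exactly by enumerating sign patterns. The sketch below (my own illustration, with arbitrary dimensions and cvxpy as the solver, not the paper's method) does exactly that and reports whether the $\ell_1$ minimizer matches $\pm{\bf x}$; whether this succeeds for a given ${\bf A}$ is the kind of question the paper's null space property addresses.

```python
# Illustrative brute force, not the paper's method: solve the non-convex
# program  min ||z||_1  subject to  |A z| = y  in the real case by
# enumerating all 2^m sign patterns and solving one convex l1 problem
# per pattern.
import itertools
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
n, k, m = 8, 2, 6
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n))
y = np.abs(A @ x)

best_val, best_z = np.inf, None
for signs in itertools.product([-1.0, 1.0], repeat=m):
    z = cp.Variable(n)
    prob = cp.Problem(cp.Minimize(cp.norm1(z)),
                      [A @ z == np.array(signs) * y])
    val = prob.solve()
    if prob.status == 'optimal' and val < best_val:
        best_val, best_z = val, z.value

print("l1 minimizer matches +/- x:",
      np.allclose(best_z, x, atol=1e-6) or np.allclose(best_z, -x, atol=1e-6))
```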



Join the CompressiveSensing subreddit or the Google+ Community and post there!
Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.
