
Tuesday, October 25, 2011

Newer Versions: Verifiable and computable performance analysis of sparsity recovery, Subspace Methods for Joint Sparse Recovery

Today, I am featuring several papers that have gone through several versions and may not look "new," but in fact they are. I am very happy to be able to do this because arXiv provides no way to be notified when a new version of a preprint appears, so it is difficult for me to figure out when something new has occurred. Thanks to the authors for letting me know about these new and better versions.

Gongguo Tang let me know of two papers that evaluate the performance of sensing systems (i.e., measurement matrices) for sparsity and block-sparsity recovery. Gongguo also mentions on his website that you should feel free to email him if you'd like to try the codes featured in the papers:

In this paper, we employ fixed point theory and semidefinite programming to compute the performance bounds on convex block-sparsity recovery algorithms. As a prerequisite for optimal sensing matrix design, a computable performance bound would open doors for wide applications in sensor arrays, radar, DNA microarrays, and many other areas where block-sparsity arises naturally. We define a family of goodness measures for arbitrary sensing matrices as the optimal values of certain optimization problems. The reconstruction errors of convex recovery algorithms are bounded in terms of these goodness measures. We demonstrate that as long as the number of measurements is relatively large, these goodness measures are bounded away from zero for a large class of random sensing matrices, a result parallel to the probabilistic analysis of the block restricted isometry property. As the primary contribution of this work, we associate the goodness measures with the fixed points of functions defined by a series of semidefinite programs. This relation with fixed point theory yields efficient algorithms with global convergence guarantees to compute the goodness measures.
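
If you are wondering what "goodness measures as fixed points of functions defined by semidefinite programs" looks like in practice, here is a toy sketch of the computational pattern only: a scalar map evaluated by solving a small SDP (with cvxpy) and iterated to its fixed point, with global convergence because the map is a contraction. The matrix Q and the map g below are invented for illustration; they are not the goodness-measure programs of the paper.

```python
import numpy as np
import cvxpy as cp

# Toy illustration of the pattern: a scalar map g whose every evaluation
# requires solving a small SDP, iterated to its fixed point. Q and g are
# invented for this sketch; they are NOT the paper's goodness measures.
rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
Q = B @ B.T
Q = (Q + Q.T) / 2  # enforce exact symmetry for the PSD constraint

def g(eta):
    # SDP evaluation: max t subject to Q + eta*I - t*I being PSD,
    # so t* = lambda_min(Q) + eta; then contract by 1/2.
    t = cp.Variable()
    prob = cp.Problem(cp.Maximize(t),
                      [Q + eta * np.eye(5) - t * np.eye(5) >> 0])
    prob.solve(solver=cp.SCS)
    return t.value / 2.0

# g is a 1/2-contraction, so the iteration converges globally; its
# fixed point is lambda_min(Q).
eta = 0.0
for _ in range(60):
    eta_new = g(eta)
    if abs(eta_new - eta) < 1e-5:
        break
    eta = eta_new

print(f"fixed point: {eta:.5f}  lambda_min(Q): {np.linalg.eigvalsh(Q).min():.5f}")
```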


In this paper, we develop a verifiable and computable performance analysis of sparsity recovery. We define a family of goodness measures for arbitrary sensing matrices as the optimal values of a set of optimization problems, and design algorithms with a theoretical global convergence guarantee to compute these goodness measures. The proposed algorithms solve a series of second-order cone programs, or linear programs. As a by-product, we implement an efficient algorithm to verify a sufficient condition for exact sparsity recovery in the noise-free case. We derive performance bounds on the recovery errors in terms of these goodness measures. We also analytically demonstrate that the developed goodness measures are non-degenerate for a large class of random sensing matrices, as long as the number of measurements is relatively large. Numerical experiments show that, compared with the restricted isometry based performance bounds, our error bounds apply to a wider range of problems and are tighter when the sparsity levels of the signals are relatively low.
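
As a point of comparison for what a "verifiable sufficient condition" means, here is the classical mutual coherence check of Donoho and Elad, the kind of crude-but-computable bound that the paper's LP/SOCP-based measures are designed to improve upon. The Gaussian matrix below is purely illustrative.

```python
import numpy as np

# Classical verifiable sufficient condition (Donoho & Elad) for exact
# noise-free recovery by l1 minimization. Far cruder than the LP/SOCP
# goodness measures discussed above, but computable in a few lines.
rng = np.random.default_rng(2)
m, n = 64, 256
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)         # unit-norm columns

G = np.abs(A.T @ A)                    # Gram matrix magnitudes
np.fill_diagonal(G, 0.0)
mu = G.max()                           # mutual coherence

# l1 minimization recovers every s-sparse signal whenever
# s < (1 + 1/mu) / 2 (strict inequality; generically safe here).
s_max = int(np.floor((1 + 1 / mu) / 2))
print(f"mutual coherence = {mu:.3f}; exact recovery certified up to s = {s_max}")
```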

Kiryung Lee and Yoram Bresler sent me the following:

Dear Igor,
We would like to share our preprint on joint sparse recovery on Nuit Blanche.
http://arxiv.org/abs/1004.3071
The main idea is to improve on the 1997 work by Ping Feng and Yoram Bresler,
the first work that showed the link between sensor array processing and compressed sensing.
We proposed the main ideas on arXiv about a year and a half ago; this version presents improvements in both the algorithms and the analysis. We would appreciate your comments, suggestions, or questions.
Thanks so much.
Best regards,
Kiryung Lee and Yoram Bresler

We propose robust and efficient algorithms for the joint sparse recovery problem in compressed sensing, which simultaneously recover the supports of jointly sparse signals from their multiple measurement vectors obtained through a common sensing matrix. In a favorable situation, the unknown matrix, which consists of the jointly sparse signals, has linearly independent nonzero rows. In this case, the MUSIC (MUltiple SIgnal Classification) algorithm, originally proposed by Schmidt for the direction-of-arrival problem in sensor array processing and later proposed and analyzed for joint sparse recovery by Feng and Bresler, provides a guarantee with the minimum number of measurements. We focus instead on the unfavorable but practically significant case of rank deficiency or ill-conditioning. This situation arises with a limited number of measurement vectors, or with highly correlated signal components. In this case MUSIC fails, and in practice none of the existing methods can consistently approach the fundamental limit. We propose subspace-augmented MUSIC (SA-MUSIC), which improves on MUSIC so that the support is reliably recovered under such unfavorable conditions. Combined with the subspace-based greedy algorithms also proposed and analyzed in this paper, SA-MUSIC provides a computationally efficient algorithm with a performance guarantee. The performance guarantees are given in terms of a version of the restricted isometry property. In particular, we also present a non-asymptotic perturbation analysis of the signal subspace estimation that has been missing in previous studies of MUSIC.
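
To make the MUSIC step concrete, here is a minimal numpy sketch of the classical Feng-Bresler criterion in the favorable full-row-rank case described in the abstract: estimate the signal subspace from the measurements, then keep the columns of the sensing matrix best aligned with it. The dimensions and data are invented, and SA-MUSIC's greedy partial-support and subspace-augmentation steps are not shown.

```python
import numpy as np

# Minimal sketch of the basic Feng-Bresler MUSIC criterion for joint sparse
# recovery in the favorable case where rank(Y) equals the sparsity s.
rng = np.random.default_rng(1)
m, n, s, L = 32, 128, 5, 10            # measurements, dimension, sparsity, snapshots

A = rng.standard_normal((m, n)) / np.sqrt(m)
support = np.sort(rng.choice(n, size=s, replace=False))
X = np.zeros((n, L))
X[support] = rng.standard_normal((s, L))  # linearly independent nonzero rows
Y = A @ X                                 # multiple measurement vectors

# Signal subspace: top-r left singular vectors of Y (r = s here).
r = np.linalg.matrix_rank(Y)
U = np.linalg.svd(Y)[0][:, :r]

# MUSIC criterion: a column a_i lies in the signal subspace exactly when
# i is in the support; rank columns by their projection onto span(U).
score = np.linalg.norm(U.T @ A, axis=0) / np.linalg.norm(A, axis=0)
est_support = np.sort(np.argsort(score)[-s:])

print("true support:     ", support)
print("estimated support:", est_support)
```

When the nonzero rows are rank deficient, the estimated subspace no longer contains all the support columns, which is exactly the failure mode that SA-MUSIC's subspace augmentation is designed to address.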
Kiryung let me know that an implementation of SA-MUSIC should be available shortly. Stay tuned.


Image Credit: NASA/JPL/Space Science Institute
N00177486.jpg was taken on October 21, 2011 and received on Earth October 21, 2011. The camera was pointing toward LOGE, and the image was taken using the CL1 and CL2 filters. This image has not been validated or calibrated.

Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.
