Thursday, April 03, 2008

Compressed Sensing: On the reconstruction of block-sparse signals with an optimal number of measurements

Here is a new paper from arXiv.org in which Mihailo Stojnic, Farzad Parvaresh and Babak Hassibi evaluate the reconstruction of block-sparse signals with an optimal number of measurements, using a mixed L_2 minimization instead of an L_1-based one. The abstract reads:

Let A be an M by N matrix (M < N) which is an instance of a real random Gaussian ensemble. In compressed sensing we are interested in finding the sparsest solution to the system of equations A x = y for a given y. In general, whenever the sparsity of x is smaller than half the dimension of y, then with overwhelming probability over A the sparsest solution is unique and can be found by an exhaustive search over x, with exponential time complexity, for any y. The recent work of Candes, Donoho, and Tao shows that minimization of the L_1 norm of x subject to A x = y yields the sparsest solution provided the sparsity of x, say K, is smaller than a certain threshold for a given number of measurements. Specifically, if the dimension of y approaches the dimension of x, the sparsity of x should satisfy K < 0.239 N. Here, we consider the case where x is d-block sparse, i.e., x consists of n = N / d blocks of length d, where each block is either a zero vector or a nonzero vector. Instead of L_1-norm relaxation, we consider the relaxation

min_x ||X_1||_2 + ||X_2||_2 + ... + ||X_n||_2, subject to A x = y,

where X_i = (x_{(i-1)d+1}, x_{(i-1)d+2}, ..., x_{id})^T for i = 1, 2, ..., n. Our main result is that as n grows, this minimization finds the sparsest solution to A x = y, with overwhelming probability in A, for any x whose block sparsity is a constant fraction of n, provided the number of measurements and the block length d are large enough relative to that fraction. The relaxation can be solved in polynomial time using semi-definite programming.
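The relaxation in the abstract is a group-sparsity program (a noiseless group Lasso). As a rough illustration only, here is a minimal numpy sketch that recovers a d-block-sparse signal by running proximal gradient descent (block soft-thresholding) on the unconstrained Lagrangian form min_x 0.5 ||A x - y||^2 + lam * sum_i ||X_i||_2; the dimensions, the value of lam, and the solver choice are all illustrative assumptions on my part, not the authors' method — the paper solves the constrained program via semi-definite programming.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, d = 40, 80, 4            # measurements, ambient dimension, block length (illustrative)
n = N // d                     # number of blocks
k = 3                          # number of nonzero blocks (block sparsity)

# Measurement matrix from a real random Gaussian ensemble, as in the abstract.
A = rng.standard_normal((M, N)) / np.sqrt(M)

# Ground-truth d-block-sparse signal: k blocks are nonzero, the rest are zero.
x_true = np.zeros(N)
for i in rng.choice(n, size=k, replace=False):
    x_true[i*d:(i+1)*d] = rng.standard_normal(d)
y = A @ x_true

# Proximal gradient (ISTA with block soft-thresholding) on the Lagrangian form
#   min_x 0.5*||A x - y||^2 + lam * sum_i ||X_i||_2
# (lam is an assumption; a small lam approximates the equality-constrained
# relaxation in the abstract on noiseless data).
lam = 1e-2
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part
x = np.zeros(N)
for _ in range(5000):
    z = x - A.T @ (A @ x - y) / L      # gradient step on the data-fit term
    for i in range(n):                 # prox step: shrink each block's l_2 norm
        b = z[i*d:(i+1)*d]
        nb = np.linalg.norm(b)
        if nb > 0:
            z[i*d:(i+1)*d] = max(0.0, 1.0 - (lam / L) / nb) * b
    x = z

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative recovery error: {rel_err:.3f}")
```

The block soft-thresholding step is exactly where this differs from ordinary L_1 recovery: entire length-d blocks are shrunk toward zero together, so the iterate is driven toward block-sparse solutions rather than entrywise-sparse ones.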