Monday, May 19, 2008

CS: The KGG explanation of the l0/l1 correspondence without RIP, and l_q minimization without reference to RIP.

While the L1 conference was under way, Cassini was taking this picture and I was made aware of two new and interesting papers. The first is Extracting Salient Features From Less Data via l1-Minimization by Wotao Yin and Yin Zhang. The abstract reads:

MRI (magnetic resonance imaging) is a widely used medical imaging modality that creates an image from scanned data that are essentially the Fourier coefficients of this image. A typical abdominal scan may take around 90 minutes. Can we reduce this time to 30 minutes by using one third of the Fourier coefficients, while maintaining image quality? In this article, we hope to convince the reader that such reductions are achievable through a new and promising approach called compressive sensing (or compressed sensing). The main computational engine that drives compressive sensing is l1-related minimization algorithms.
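The "main computational engine" mentioned in the abstract can be illustrated with a toy example. Below is a minimal sketch (not from the paper) of l1 recovery of a sparse signal from a few random linear measurements, by recasting basis pursuit (min ||x||_1 subject to Ax = b) as a linear program via the standard split x = u - v with u, v >= 0. The matrix sizes, seed, and use of scipy's `linprog` are my choices for illustration:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 64, 28, 3                     # signal length, # measurements, sparsity

# A k-sparse signal and a Gaussian sensing matrix (illustrative choices)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
b = A @ x_true                          # the "few" linear measurements

# Basis pursuit as an LP: min sum(u+v) s.t. A(u-v) = b, u, v >= 0
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b,
              bounds=[(0, None)] * (2 * n), method="highs")
x_hat = res.x[:n] - res.x[n:]

# When l1 recovery succeeds, the error is at solver precision
print("recovery error:", np.linalg.norm(x_hat - x_true))
```

With m well above the handful of measurements information theory would demand for k nonzeros, the l1 solution typically coincides exactly with the sparsest one, which is the l0/l1 equivalence the paper is about.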

The important part of this paper is Section 2, "When are the l0- and l1-problems equivalent?", where the authors present a short and "new" explanation of the correspondence between l0 and l1 using the results of Kashin, Garnaev and Gluskin (KGG), one that does not rely on the RIP.

The results of Chartrand showed that by reducing p below 1, one can obtain sparser representations in CS reconstructions. Here is an additional view on the issue: Sparsest solutions of underdetermined linear systems via l_q-minimization for 0 &lt; q &lt;= 1 by Simon Foucart and Ming-Jun Lai. The abstract reads:

We present a condition on the matrix of an underdetermined linear system which guarantees that the solution of the system with minimal l_q-quasinorm is also the sparsest one. This generalizes, and slightly improves, a similar result for the l_1-norm. We then introduce a simple numerical scheme to compute solutions with minimal l_q-quasinorm. Finally, we display the results of some experiments which indicate that this l_q-method performs better than all other available methods.
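To give a feel for the kind of "simple numerical scheme" used for nonconvex l_q minimization, here is a minimal iteratively reweighted least-squares sketch. This is not Foucart and Lai's exact scheme (they use an iteratively reweighted l1 approach); it is the related epsilon-smoothed IRLS idea, with all parameters below (q, iteration count, epsilon schedule, problem sizes) chosen arbitrarily for illustration:

```python
import numpy as np

def irls_lq(A, b, q=0.5, iters=50, eps=1.0):
    """Approximate min ||x||_q s.t. Ax = b by smoothed, reweighted least squares."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]    # start from the min-l2 solution
    for _ in range(iters):
        # weights ~ |x_i|^(2-q), smoothed by eps so zeros don't get stuck
        w = (x**2 + eps) ** (1 - q / 2)
        Aw = A * w                               # A @ diag(w)
        # closed-form weighted least-squares step: x = W A^T (A W A^T)^{-1} b
        x = w * (A.T @ np.linalg.solve(Aw @ A.T, b))
        eps = max(eps / 10, 1e-12)               # gradually sharpen the quasinorm
    return x

rng = np.random.default_rng(1)
n, m, k = 64, 20, 3
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n))
b = A @ x_true

x_hat = irls_lq(A, b)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```

The annealed eps is the standard trick for navigating the nonconvexity of the l_q quasinorm; in easy regimes like this one the iterates typically land on the sparsest solution with fewer measurements than l1 would need.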

This is an interesting paper because it avoids the Restricted Isometry Constants by rightly noting
Our main theorem is similar in flavor to many previous ones - in fact, its proof is inspired by theirs - except that we avoid Restricted Isometry Constants, as we felt that the non-homogeneity of the Restricted Isometry Property (1) was in conflict with the equivalence of all the linear systems (cA)z = cy, c ∈ R.
A fact that hints at the adequacy of the Restricted Isometry Property, but we can come back to this later.

Image Credit: NASA/JPL/Space Science Institute, Dione as seen by Cassini on May 17 while the L1 conference was taking place.


Anonymous said...

I would like to mention related work about lp optimization with 0<=p<=1. The following papers cover

-equivalence of all l_p minimization problems when the data is sufficiently sparse to guarantee that l_1 will be successful

-provably superior performance of l_p minimization over l_1, for moderately sparse data and sufficiently small p

-stability to noise (see paper [3])

Best regards,


[1] R. Gribonval and M. Nielsen, On the strong uniqueness of highly sparse expansions from redundant dictionaries, Proc. Int Conf. Independent Component Analysis (ICA'04), September 2004.

[2] R. Gribonval and M. Nielsen, Highly sparse representations from dictionaries are unique and independent of the sparseness measure, Applied and Computational Harmonic Analysis, Vol. 22, No. 3, pp 335--355, May 2007.
[Technical report from October 2003].

[3] R. Figueras, P. Vandergheynst and R. Gribonval, A simple test to check the optimality of a sparse signal approximation, EURASIP Signal Processing, special issue on Sparse Approximations in Signal and Image Processing, Vol. 86, No. 3, pp 496--510, March 2006.

Igor said...


Thanks for pointing those out. I did not see them in the Rice pages; I'll mention them in an entry.