Thursday, May 29, 2008

CS: Q&A on the Restricted Isometry Property for Compressed Sensing Recovery.

Since the Restricted Isometry Property of measurement matrices in Compressed Sensing is such a big deal (even though it is just a sufficient condition), and since the Structurally Random Matrices introduced earlier include a wide variety of previously studied measurement matrices, I went ahead and asked Thong Do the following question:

Are there any results stating that your structurally random matrices have the Restricted Isometry Property in general?

Thong Do kindly responded with the following:


... The issue of the RIP of a random subset of rows of an orthonormal matrix (a generalization of Partial Fourier) was studied by M. Rudelson and R. Vershynin in the paper "On sparse reconstruction from Fourier and Gaussian measurements" [Note from Igor: Comment on that paper is here, also useful are the related slide presentations: Lecture 3: Random Fourier matrices and Commentary and Bibliography]. There is an RIP for this family of matrices, but it is not as good as the RIP of Gaussian matrices. In other words, if we use this family of matrices for sensing, it requires O(K log^4(n)) rather than O(K log(n)) measurements for so-called uniform exact recovery (in the sense guaranteed by the RIP). If we only want non-uniform exact recovery, then Candes showed in his paper "Sparsity and Incoherence in Compressive Sampling" that O(K log(n)) measurements suffice. In other words, O(K log(n)) Gaussian measurements guarantee uniform exact recovery, while the same number of measurements from a random subset of rows of an orthonormal matrix only guarantees non-uniform exact recovery.

So what is this uniform versus non-uniform business? Roughly speaking, a Gaussian matrix works for all signals (with high probability), while a Partial Fourier matrix only works for a GIVEN signal (with high probability).

I am sure that SRM has an RIP at least as good as that of a random subset of rows of an orthonormal matrix, but I am not yet sure whether SRM's is better, and by how much...
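In display form, the uniform/non-uniform distinction Thong Do describes reads as follows (a paraphrase in standard notation, with $K$ the sparsity, $\Phi$ the random measurement matrix, and $\varepsilon$ a small failure probability):

\[
\text{uniform:}\qquad \Pr_{\Phi}\big[\,\forall\, K\text{-sparse } x:\ \text{recovery of } x \text{ from } \Phi x \text{ succeeds}\,\big] \;\ge\; 1 - \varepsilon,
\]
\[
\text{non-uniform:}\qquad \forall\, K\text{-sparse } x:\ \Pr_{\Phi}\big[\,\text{recovery of } x \text{ from } \Phi x \text{ succeeds}\,\big] \;\ge\; 1 - \varepsilon.
\]

In the first case, one draw of $\Phi$ works for all sparse signals at once; in the second, the probability is over $\Phi$ for each fixed signal separately.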
I am also receiving other questions that are generally not answered straightforwardly in the papers I have read. I wish they were addressed at some basic level. Here are some:

1) If y = phi x, where x is the sparse signal and phi is the sensing matrix of size k x N with k < N, do the columns of phi need to be normalized to 1 for the RIP to imply perfect recovery?

2) In some papers the formulation is y = U x = phi psi x (with x sparse). In this case the RIP applies to U, right? Not just to phi?

3) So is it phi itself that needs to satisfy the RIP, as in the first question, or U?

I think these questions can be clarified by reading Emmanuel Candes and Justin Romberg's paper entitled Sparsity and Incoherence in Compressive Sampling.

However, the short answer to the first question is yes: some normalization is in order, otherwise the bounds in some of the concentration-of-measure inequalities will be off. The short answer to the second question is yes. The short answer to the third question is: the reason you think you need phi to have the RIP is that, in that case, psi is the identity. Eventually only U counts; phi then only needs to be incoherent with psi.
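On the first question, here is a minimal MATLAB sketch of the kind of column normalization meant here (the Gaussian phi and the sizes are just for illustration):

% Normalize the columns of a k x N sensing matrix to unit l2 norm.
k = 4; N = 10;
phi = randn(k, N);                      % an example sensing matrix
colnorms = sqrt(sum(abs(phi).^2, 1));   % l2 norm of each column
phi = phi ./ repmat(colnorms, k, 1);    % every column now has unit norm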

With regard to checking the RIP of a specific matrix, here is another question:
If I want to check the Restricted Isometry Property for a measurement matrix F, it should satisfy, for all S-sparse vectors x, the following relationship:

$(1 - \delta_S)\|x\|_2^2 \le \|Fx\|_2^2 \le (1 + \delta_S)\|x\|_2^2$

In Candes' paper Decoding by Linear Programming, it is written that if the above condition is satisfied, all eigenvalues of F*F, where F* is the Hermitian transpose of F, should lie between $1 - \delta_S$ and $1 + \delta_S$.

I have a question regarding the eigenvalues of F*F. I have checked that there may exist an F which satisfies the above Restricted Isometry Property for all S-sparse vectors, but for which the eigenvalues of F*F still do not lie between $1 - \delta_S$ and $1 + \delta_S$?
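For reference, the precise statement in Decoding by Linear Programming concerns column submatrices rather than the full Gram matrix: the S-restricted isometry constant $\delta_S$ is the smallest number such that, for every index set $T$ with $|T| \le S$ and every coefficient vector $c$,

$(1 - \delta_S)\|c\|_2^2 \le \|F_T c\|_2^2 \le (1 + \delta_S)\|c\|_2^2$

where $F_T$ is the submatrix of F made of the columns indexed by $T$. Equivalently, all eigenvalues of $F_T^* F_T$ lie in $[1 - \delta_S, 1 + \delta_S]$ for every such $T$.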


Then an example is given:
A simple example: length of sparse vector N = 5,
number of measured samples M = 4,
sparsity S = 2.

Creating a Partial Fourier matrix (without randomizing the rows) and normalizing its columns:

F = dftmtx(N);
F = normalize(F(1:M,:));  % normalization method unspecified; presumably unit l2 columns

I have a sparse vector x:

x = [0 0 0 0.6820 0.5253];

The RIP condition

$(1 - \delta)\|x\|_2^2 \le \|Fx\|_2^2 \le (1 + \delta)\|x\|_2^2$

is satisfied for delta = 0.11.

But if we look at the eigenvalues of the matrix F*F, they are [0.75, 1.25], which do not lie between $1 - \delta = 0.89$ and $1 + \delta = 1.11$. Hence, even if the RIP condition is satisfied for some particular sparse vector x, this does not imply the bounds on the eigenvalues.


The short answer is: the delta you computed is for one sparse vector, not for all sparse vectors. In turn, the eigenvalues of the matrix still give a lower bound for the RIP constant.
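To make this concrete, here is a minimal MATLAB sketch of the two computations (assuming the unspecified normalize above means unit-l2 columns):

N = 5; M = 4;
F = dftmtx(N);
F = F(1:M,:);
F = F ./ repmat(sqrt(sum(abs(F).^2, 1)), M, 1);  % unit l2 columns

x = [0 0 0 0.6820 0.5253]';          % the example's sparse vector
ratio = norm(F*x)^2 / norm(x)^2;
delta_one = abs(1 - ratio)           % from ONE vector: only a lower bound on delta_S

T = find(x);                         % support of x
eig(F(:,T)' * F(:,T))                % eigenvalues of the Gram submatrix on that support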

But... how do you find the RIP constant? Well, you need to try all the sparse supports, which is why this is a combinatorial search. This is why the possibility of finding this RIP constant using Sparse PCA raises some interest. This is also why, if that computation is indeed NP-hard, it needs to be understood that the RIP is only a sufficient condition: there are other constructions (Combining Geometry And Combinatorics: A Unified Approach to Sparse Signal Recovery by Radu Berinde, Anna Gilbert, Piotr Indyk, Howard Karloff and Martin Strauss) that do not satisfy RIP-2 but do satisfy RIP-1, and yet allow for full recovery. In my opinion, the work of Jared Tanner (Phase Transition Phenomenon in Sparse Approximation) also points to other potential constructions that work.
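For small sizes the brute-force search can be written down directly. A sketch (the cost grows like nchoosek(N,S), so this is only feasible for tiny problems):

% Brute-force computation of the RIP constant delta_S of F
% (columns assumed already normalized to unit l2 norm).
N = 5; M = 4; S = 2;
F = dftmtx(N);
F = F(1:M,:);
F = F ./ repmat(sqrt(sum(abs(F).^2, 1)), M, 1);

delta_S = 0;
supports = nchoosek(1:N, S);             % all size-S column subsets
for i = 1:size(supports, 1)
    T = supports(i,:);
    lambda = eig(F(:,T)' * F(:,T));      % spectrum of the Gram submatrix
    delta_S = max([delta_S; abs(1 - lambda)]);
end
delta_S

On the example above this returns delta_S = 0.25, consistent with the [0.75, 1.25] eigenvalues quoted in the question.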

If any of these answers are wrong, please go ahead and comment.

Credit: NASA/JPL/University of Arizona, Phoenix Lander as seen by the HiRISE camera on board the orbiting Mars Reconnaissance Orbiter after landing.

7 comments:

Anonymous said...

I am confused by the notation used in some papers for the sparsifying transform and its inverse.
Suppose a signal d is sparse in the psi domain, such that

x = psi d

Hence x is the sparse vector.

With the above notation,

y = phi inv(psi) x

In this case, the RIP hypothesis should be tested for

phi*inv(psi) instead of phi*psi.

Is this correct?

Unknown said...

I am interested in the eigenvalue metric eig(F*F) for determining the bounds in the RIP condition, as described in Candes' paper.

How is F normalized in this case? It seems that the eigenvalues will change depending on how F is normalized.

Unknown said...

I ask about the normalization because someone gave the code:
N = 10;
M = 5;
F=dftmtx(N);
Fnorm=normalize(F(1:M,:))
eig(Fnorm'*Fnorm)

but he did not specify how the normalization was done. When I executed this code, I normalized F so that the Euclidean norm of each column was 1. Then I saw eigenvalues ranging from 0 to 2, instead of 0.75 to 1.25 as the writer claimed. What are we doing differently?

The reason I ask is that I want a handy way to check the RIP criterion for an arbitrary matrix.

This is a really insightful site, by the way.
thanks

Unknown said...

I think I see now -- is it that we are only interested in the eigenvalues of certain subsets of the columns of F, and not of the whole matrix F?
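[Note: that is correct. The RIP bounds apply to the Gram matrices of the column submatrices F_T with |T| <= S, not to the full F*F; the full N x N Gram matrix of an M x N matrix with M < N is rank-deficient and necessarily has zero eigenvalues. A quick MATLAB illustration of the distinction for the code above, assuming unit-l2 column normalization:

N = 10; M = 5;
F = dftmtx(N);
F = F(1:M,:);
F = F ./ repmat(sqrt(sum(abs(F).^2, 1)), M, 1);  % unit l2 columns

eig(F' * F)              % full Gram matrix: rank M, so N-M zeros (the 0-to-2 spread)
T = [1 2];               % one size-2 column subset
eig(F(:,T)' * F(:,T))    % these are the eigenvalues the RIP actually constrains]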

Anonymous said...

Does RIP only hold/apply for L1 minimization?
(i.e. does it apply to TV-minimization as well?)

Anonymous said...

Hi, can you please post the procedure for finding out whether a given random matrix satisfies the Restricted Isometry Property?

Anonymous said...

Hello community,

I have read some papers introducing compressed sensing, and I quite often find the following statement, which I do not yet understand.

Suppose A is a matrix of size M x N that satisfies the RIP of order K; then A satisfies the RIP of order L ≤ K.

Could someone help explain why this is true, or give me a short proof of it? Also, I am curious whether the reverse of this statement is true, i.e., if I know that a matrix satisfies the RIP of order L, can I conclude that this matrix satisfies the RIP of order K > L?

Thank you.
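[Note: a short argument for the forward direction. Any vector x with at most L ≤ K nonzero entries is in particular K-sparse, so the order-K inequality

$(1 - \delta_K)\|x\|_2^2 \le \|Ax\|_2^2 \le (1 + \delta_K)\|x\|_2^2$

already holds for it; hence $\delta_L \le \delta_K$, and A satisfies the RIP of order L. The reverse is false in general: an RIP of order L says nothing about supports larger than L. For instance, once K > M, any K columns of an M x N matrix are linearly dependent, so no such matrix can satisfy a nontrivial RIP of order K, however good its order-L constant.]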
