Thursday, January 08, 2009

CS: Construction of a Large Class of Deterministic Sensing Matrices that Satisfy a Statistical Isometry Property

Here is a new addition at the Rice repository by Robert Calderbank, Stephen Howard and Sina Jafarpour entitled Construction of a Large Class of Deterministic Sensing Matrices that Satisfy a Statistical Isometry Property. The abstract reads:
Compressed Sensing aims to capture attributes of a signal using very few measurements. The Restricted Isometry Property is the condition that the sensing matrix acts as a near isometry on all k-sparse signals. Candès and Tao showed that this condition is sufficient for sparse reconstruction and that random matrices, where the entries are generated by an iid Gaussian or Bernoulli process, satisfy the RIP with high probability. This approach treats all k-sparse signals as equally likely, in contrast to mainstream signal processing where the filtering is deterministic, and the signal is described probabilistically. In the mainstream framework the sensing matrix is deterministic and it is required to act as a near-isometry on k-sparse vectors with high probability. This paper provides weak conditions that are sufficient to show that a deterministic sensing matrix satisfies this Statistical Restricted Isometry Property (STRIP). The proof is elementary and avoids intricate combinatorial arguments involving coherence of orthonormal bases. The new framework encompasses many families of deterministic sensing matrices, including those formed from discrete chirps, Delsarte-Goethals codes, and Extended BCH codes. It is resilient to noise, and generalizes to k-compressible signals, where only k entries are significant, and the magnitude of all remaining entries is close to zero.

As some of you have guessed, I am interested in the engineering perspective, where one is given a measurement matrix by the physics and the challenge is to find out whether that measurement system can fit into an underdetermined system satisfying some property where Compressive Sensing applies. After reading this paper, I went ahead and asked dumb questions of Sina Jafarpour, one of the authors:
Question:
....My question concerns the verification that a particular matrix satisfies the STRIP sufficient condition as featured in your Th 2.4:

"1) The columns of Φ form a group UC under pointwise multiplication.
2) the rows of Φ are orthogonal, and all row sums are equal to zero."

Before doing any type of reconstruction or acquiring any signal with it,

* the verification of item 2 means O(N^3) or more operations, as it could be evaluated through an SVD.

* for item 1, since a group is a set that is closed under a binary associative operation, contains an identity element, and has an inverse for every element, I am guessing it also takes O(C^2), maybe even O(C^3), operations, but I may be mistaken.
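To make these checks concrete, here is a minimal numerical sketch of my own (not from the paper, and `check_strip_conditions` is a hypothetical helper name): it verifies item 2 directly from the Gram matrix of the rows, and the closure part of item 1 by testing that every pointwise product of two columns is again a column.

```python
import numpy as np

def check_strip_conditions(Phi, tol=1e-8):
    """Numerically check the two sufficient conditions of Th 2.4 for Phi.

    Item 2: the rows are mutually orthogonal and every row sum is zero.
    Item 1 (closure only): the set of columns is closed under pointwise
    multiplication -- a necessary part of the columns forming a group.
    """
    M, C = Phi.shape

    # Item 2: off-diagonal entries of Phi Phi^* must vanish (orthogonal
    # rows), and each row must sum to zero.
    gram = Phi @ Phi.conj().T
    rows_orthogonal = np.allclose(gram - np.diag(np.diag(gram)), 0, atol=tol)
    row_sums_zero = np.allclose(Phi.sum(axis=1), 0, atol=tol)

    # Item 1 (closure): every pointwise product of two columns must again
    # be a column.  O(C^2) products, each matched against C columns.
    cols = Phi.T
    def is_column(v):
        return bool(np.any(np.all(np.isclose(cols, v, atol=tol), axis=1)))
    closed = all(is_column(cols[i] * cols[j])
                 for i in range(C) for j in range(C))

    return rows_orthogonal, row_sums_zero, closed
```

As a sanity check, a few non-DC rows of a DFT matrix pass all three tests: the columns are characters of Z_N (closed under pointwise multiplication), the rows are orthogonal, and each non-DC row sums to zero.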

Sina kindly responded with:
...That's generally true: if I give you a (possibly new) matrix, you need this amount of calculation to figure out whether the conditions for Th 2.4 hold. But if the matrix (or, better to say, family of matrices) comes with a well-defined structure (like chirps, 2nd order Reed-Mullers, etc.) there is no need to check; the structure guarantees that the matrix satisfies the conditions of Th 2.4, and I believe the main purpose of StRIP is to finish the proof of recovery for those matrices.

Igor here. I realize there is a need for known matrices like chirps or 2nd order Reed-Mullers to fit a sufficient condition; however, I'd like to think it could be used for other, not-yet-found families of measurement matrices. This new condition is very attractive compared to the NP-hard verification of the RIP. I also wonder how the set of families that are StRIP fares against the families satisfying the JN condition. Thanks Sina!
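For such a not-yet-classified matrix, one can at least gather statistical evidence in the spirit of StRIP before attempting reconstruction. A sketch of my own (the function name is hypothetical): sample random k-sparse vectors and record the worst observed deviation of ||Φx||²/||x||² from 1. Note this can only falsify a restricted isometry claim, never certify it.

```python
import numpy as np

def empirical_rip_constant(Phi, k, trials=2000, rng=None):
    """Monte Carlo estimate of how far Phi deviates from an isometry on
    random k-sparse vectors.

    Returns the largest observed value of | ||Phi x||^2 / ||x||^2 - 1 |
    over `trials` random k-sparse x with Gaussian nonzero entries.  A
    large value falsifies a restricted isometry claim at this sparsity;
    a small value is only statistical evidence, not a certificate.
    """
    rng = np.random.default_rng(rng)
    M, N = Phi.shape
    worst = 0.0
    for _ in range(trials):
        # Draw a random support of size k, then Gaussian amplitudes on it.
        support = rng.choice(N, size=k, replace=False)
        x = np.zeros(N)
        x[support] = rng.standard_normal(k)
        ratio = np.linalg.norm(Phi @ x) ** 2 / np.linalg.norm(x) ** 2
        worst = max(worst, abs(ratio - 1.0))
    return worst
```

With unit-norm columns, a matrix that is well behaved at sparsity k should keep this empirical constant well below 1; a Gaussian matrix scaled by 1/sqrt(M) is a good baseline to compare against.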

4 comments:

Anonymous said...

the link to the paper is dead :-(

Igor said...

It's fixed now. Rice recently changed their web address system for the paper.

Thanks for the heads-up.

Igor.

Manjish said...

Hello Igor,
Thanks for this very detailed analysis of the paper on Strip.
I will explain in detail my problem and I would appreciate it quite a lot if you could help me.

I have a sensing matrix which is deterministic, and its columns contain certain functions like sin(x(t)), x(t)^n, y(t)^n, etc. The rows contain the time series data for these, like x(t1), x(t2), etc.

I have the time series data for this, and hence my sensing matrix is deterministic. But it obviously doesn't follow any known construction like Reed-Muller or so. Also, it's not random.

I normalize the columns of this sensing matrix in the hope that it will satisfy the RIP.

But still, I am unable to reproduce the original sparse signal from this sensing matrix. I am trying to understand why this is so. Hence I want to try to test the StRIP or RIP for this matrix. I have no clue on how to do it numerically.

Any help is much appreciated.
Thanks

