Tuesday, April 29, 2008

CS: Structurally Random Matrices, Combining Geometry And Combinatorics, Alternating l1 Relaxation, Fast Random Projections using Lean Walsh Transforms

Today is a good day for understanding the different means of taking measurements in Compressed Sensing. We have one paper showing some improvement over current ad-hoc measurement ensembles like partial Fourier transforms, and another paper that builds a bridge between geometric and combinatorial measurement matrices. I will update the measurement section of the big picture accordingly.

Trac Tran released his ICASSP presentation and associated software. It is here (thanks to the Rice site for the pointer). In the compressed file, one can find the paper entitled Fast Compressive Sampling With Structurally Random Matrices by Thong T. Do, Trac Tran and Lu Gan. The abstract reads:

This paper presents a novel framework of fast and efficient compressive sampling based on the new concept of structurally random matrices. The proposed framework provides four important features. (i) It is universal with a variety of sparse signals. (ii) The number of measurements required for exact reconstruction is nearly optimal. (iii) It has very low complexity and fast computation based on block processing and linear filtering. (iv) It is developed on the provable mathematical model from which we are able to quantify trade-offs among streaming capability, computation/memory requirement and quality of reconstruction. All currently existing methods only have at most three out of these four highly desired features. Simulation results with several interesting structurally random matrices under various practical settings are also presented to verify the validity of the theory as well as to illustrate the promising potential of the proposed framework.

As pointed out in the illuminating presentation slides, the issue is that currently existing measurement ensembles provide at most three of these four desired features.

Hence, they seem to use an approach that is a little reminiscent of the three-stage PHD approach of Nir Ailon, Bernard Chazelle, and Edo Liberty when implementing a Fast Johnson-Lindenstrauss Transform (Monday Morning Algorithm Part 6: The Fast Johnson-Lindenstrauss Transform). This approach also lends itself ideally to the operator framework implemented in Sparco.
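
For those who want to play with the idea, here is a minimal numpy sketch of that three-stage PHD construction (sparse projection P, Hadamard transform H, random sign diagonal D). The sparsity level q and the scaling below are illustrative choices of mine, not the tuned parameters of the Ailon-Chazelle analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

def fwht(a):
    """Orthonormal fast Walsh-Hadamard transform (len(a) must be a power of 2)."""
    a = a.astype(float)
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            x, y = a[i:i + h].copy(), a[i + h:i + 2 * h].copy()
            a[i:i + h], a[i + h:i + 2 * h] = x + y, x - y
        h *= 2
    return a / np.sqrt(len(a))

def fjlt(x, k, q=0.05):
    """One draw of a PHD-style fast JL projection of x down to k dimensions."""
    d = x.size
    x = x * rng.choice([-1.0, 1.0], size=d)   # D: random sign flip (preconditioning)
    x = fwht(x)                               # H: spreads the signal's energy
    # P: sparse Gaussian matrix; nonzeros ~ N(0, 1/q) kept with probability q,
    # so that E||Px||^2 / k = ||x||^2
    P = rng.normal(0.0, 1.0 / np.sqrt(q), size=(k, d))
    P *= rng.random((k, d)) < q
    return (P @ x) / np.sqrt(k)
```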


The graph above shows a typical structurally random matrix, but the slides have other combinations leading to a whole family of sparse structurally random matrices. I also note that the authors look forward to deterministic compressive sampling, obtained by replacing the random downsampler with a deterministic downsampler or with a deterministic lattice of measurements such as the one by Yves Meyer and Basarab Matei.
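
Concretely, the measurement operator is a cascade of a randomizer R (a global permutation or local sign flips), a fast transform F, and a random downsampler D. Here is a small numpy/scipy sketch of that cascade; my use of the DCT for F is just one illustrative pick among the combinations listed in the slides:

```python
import numpy as np
from scipy.fft import dct  # an orthonormal fast transform for the F stage

rng = np.random.default_rng(1)

def srm_measure(x, m, local=True):
    """Sketch of a structurally random measurement y = sqrt(d/m) * D F R x."""
    d = x.size
    if local:
        x = x * rng.choice([-1.0, 1.0], size=d)  # R: local randomizer (sign flips)
    else:
        x = x[rng.permutation(d)]                # R: global randomizer (permutation)
    x = dct(x, norm='ortho')                     # F: fast transform (DCT here)
    rows = rng.choice(d, size=m, replace=False)  # D: random downsampler, m of d rows
    return np.sqrt(d / m) * x[rows]
```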


Do you recall this previous entry: Compressed Sensing: There is no theory behind it right now, but does it work? Well, it looks like there really is a theory behind it. Radu Berinde, Anna Gilbert, Piotr Indyk, Howard Karloff and Martin Strauss just released the important Combining Geometry And Combinatorics: A Unified Approach to Sparse Signal Recovery (but also here). The abstract reads:
There are two main algorithmic approaches to sparse signal recovery: geometric and combinatorial. The geometric approach starts with a geometric constraint on the measurement matrix Φ and then uses linear programming to decode information about x from Φx. The combinatorial approach constructs Φ and a combinatorial decoding algorithm to match. We present a unified approach to these two classes of sparse signal recovery algorithms. The unifying elements are the adjacency matrices of high-quality unbalanced expanders. We generalize the notion of Restricted Isometry Property (RIP), crucial to compressed sensing results for signal recovery, from the Euclidean norm to the ℓp norm for p close to 1, and then show that unbalanced expanders are essentially equivalent to RIP-p matrices. From known deterministic constructions for such matrices, we obtain new deterministic measurement matrix constructions and algorithms for signal recovery which, compared to previous deterministic algorithms, are superior in either the number of measurements or in noise tolerance.
It's a dense paper; I had to read it several times, but it has some far-reaching consequences. From the conclusion of the paper (emphasis mine):
We show in this paper that the geometric and the combinatorial approaches to sparse signal recovery are different manifestations of a common underlying phenomenon. Thus, we are able to show a unified perspective on both approaches—the key unifying elements are the adjacency matrices of unbalanced expanders.
In most of the recent applications of compressed sensing, a physical device instantiates the measurement of x and, as such, these applications need measurement matrices which are conducive to physical measurement processes. This paper shows that there is another, quite different, large, natural class of measurement matrices, combined with the same (or similar) recovery algorithms for sparse signal approximation. These measurement matrices may or may not be conducive to physical measurement processes but they are quite amenable to computational or digital signal measurement. Our work suggests a number of applications in digital or computational “sensing” such as efficient numerical linear algebra and network coding.
The preliminary experimental analysis exhibits interesting high-dimensional geometric phenomena as well. Our results suggest that the projection of polytopes under Gaussian random matrices is similar to that of projection by sparse random matrices, despite the fact that Gaussian random matrices are quite different from sparse ones.
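
To get a feel for the combinatorial side, here is a minimal sketch of the kind of measurement matrix the paper studies: the adjacency matrix of a random bipartite graph with small left-degree, which is with high probability a high-quality unbalanced expander. The degree and problem sizes below are illustrative values of mine, not the parameter settings from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

def expander_matrix(m, n, deg=8):
    """Adjacency matrix of a random bipartite graph with left-degree deg:
    each of the n signal coordinates is wired to deg distinct rows out of m.
    Random graphs of this type are, with high probability, the unbalanced
    expanders whose adjacency matrices the paper shows satisfy RIP-1."""
    A = np.zeros((m, n))
    for j in range(n):
        A[rng.choice(m, size=deg, replace=False), j] = 1.0
    return A

# sparse binary measurements of a 10-sparse signal; recovery would then
# proceed by linear programming or by the paper's combinatorial decoding
n, m = 1000, 200
x = np.zeros(n)
x[rng.choice(n, size=10, replace=False)] = rng.normal(size=10)
y = expander_matrix(m, n) @ x
```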


One now wonders how these two papers are going to overlap: could we use some of the matrices designed in the second paper as global/local randomizers in the first paper?

Following up on a previous entry, Stephane Chretien now has a preprint of An Alternating l_1 Relaxation for compressed sensing.

Finally, in line with the first two papers, here is one by Edo Liberty, Nir Ailon, and Amit Singer entitled Fast Random Projections using Lean Walsh Transforms. The abstract reads:

We present a k × d random projection matrix that is applicable to vectors x ∈ R^d in O(d) operations if d ≥ k^(2+δ'). Here, k is the minimal Johnson-Lindenstrauss dimension and δ' is arbitrarily small. The projection succeeds, with probability 1 - 1/n, in preserving vector lengths, up to distortion ε, for all vectors such that ||x||_∞ ≤ ||x||_2 k^(-1/2) d^(-δ) (for arbitrarily small δ). Sampling-based approaches are either not applicable in linear time or require a bound on ||x||_∞ that is strongly dependent on d. Our method overcomes these shortcomings by rapidly applying dense tensor power matrices to incoming vectors.

Of particular interest is the following comment on dealing with dimensions higher than three:

These matrices are constructed using tensor products and can be applied to any vector in R^d in linear time, i.e., in O(d).
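
Here is a quick recursive numpy sketch of applying such a tensor-power matrix. The 3 × 4 seed (a 4 × 4 Hadamard matrix with its first row removed) is my illustrative choice, not necessarily the paper's; the recursion just uses the standard Kronecker identity (A ⊗ B) vec(X) = vec(B X Aᵀ):

```python
import numpy as np

# Hypothetical 3 x 4 seed: a 4 x 4 Hadamard matrix with its first row
# removed, rescaled; the paper allows a family of such rectangular seeds.
H4 = np.array([[1,  1,  1,  1],
               [1, -1,  1, -1],
               [1,  1, -1, -1],
               [1, -1, -1,  1]], dtype=float)
SEED = H4[1:, :] / np.sqrt(3)              # 3 x 4

def lean_walsh(x):
    """Apply the l-fold tensor power of SEED to x, with len(x) = 4**l,
    via the identity (A kron B) vec(X) = vec(B X A^T)."""
    d = x.size
    if d == 1:
        return x
    r, c = SEED.shape                       # r = 3, c = 4
    X = x.reshape(d // c, c, order='F')     # column-major unstacking
    Y = X @ SEED.T                          # apply the seed (the "A" factor)
    # recurse on each of the r columns (the "B" = SEED^{kron (l-1)} factor)
    return np.concatenate([lean_walsh(Y[:, j]) for j in range(r)])

# maps a length 4**5 = 1024 vector to length 3**5 = 243
x = np.random.default_rng(3).normal(size=4 ** 5)
y = lean_walsh(x)
```

Since the seed has fewer rows (r = 3) than columns (c = 4), the work per recursion level shrinks geometrically, which is where the overall O(d) operation count comes from.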


Credit: NASA/JPL/Space Science Institute. The Sun shines behind Saturn, a view we can never see from Earth. Released October 11, 2006.
