Friday, May 23, 2008

CS: New CS Measurement Matrices, Some blog entries, Failing Einstein and Coded Apertures

Following up on the meeting at Texas A&M on Nonlinear Approximation Techniques Using L1, I noted two new compressive measurement schemes. In Justin Romberg's presentation, entitled Architectures for Compressive Sampling, one could note a new scheme called Random Convolution.
That scheme is efficiently implemented in a new camera system he is building, in which elements placed between the lenses change the phase of the incoming light.
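The idea behind Random Convolution is that convolving a signal with a random pulse whose spectrum has unit magnitude and random phase, then randomly subsampling, can be done entirely with FFTs. A minimal sketch (illustrative sizes; the signal, the random seed, and the subsampling step are my own assumptions, not Romberg's code) might look like:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 256, 64  # signal length and number of measurements (illustrative)

x = rng.standard_normal(n)  # stand-in signal

# Random Convolution: multiply the spectrum of x by random unit-modulus
# phases (circular convolution with a random pulse, O(n log n) via the FFT).
sigma = np.exp(1j * 2 * np.pi * rng.random(n))  # random unit-magnitude spectrum
sigma[0] = 1.0                                  # keep the DC term real
y_full = np.fft.ifft(np.fft.fft(x) * sigma)

# Then keep a random subset of the convolved samples as the measurements.
keep = rng.choice(n, size=m, replace=False)
y = y_full[keep]
```

The whole measurement operator therefore never needs to be stored: a length-n random phase vector and the index set are enough.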
In Enhanced Compressive Sensing and More, Yin Zhang wants to avoid the storage problem of large random matrices and devises a scheme that dramatically reduces the storage requirement by using a Kronecker/tensor product of two smaller random matrices.
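To see where the savings come from: if the measurement matrix is kron(B, C), only B and C need to be stored, and the product with a vector can be computed by reshaping, without ever forming the full matrix. A small sketch (sizes are illustrative assumptions, not Zhang's):

```python
import numpy as np

rng = np.random.default_rng(1)
m1, n1, m2, n2 = 8, 16, 8, 16  # small illustrative sizes

B = rng.standard_normal((m1, n1))
C = rng.standard_normal((m2, n2))

# Storage: m1*n1 + m2*n2 entries instead of (m1*m2) x (n1*n2) for kron(B, C).
def kron_apply(B, C, x):
    """Compute kron(B, C) @ x without forming the Kronecker product."""
    X = x.reshape(B.shape[1], C.shape[1])  # n1 x n2 (row-major vec)
    return (B @ X @ C.T).reshape(-1)

x = rng.standard_normal(n1 * n2)
y_fast = kron_apply(B, C, x)
y_full = np.kron(B, C) @ x  # explicit product, for checking only
```

Here 64 x 256 = 16384 stored entries collapse to 2 x 128 = 256, and the matrix-vector product becomes two small matrix multiplies.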

I'll be adding those to the list of measurement matrices in the big picture page.

On a different note, James Lee has a new entry of relevance to Compressed Sensing entitled The pseudorandom subspace problem

Mark Iwen will defend his thesis on July 28th. Good luck Mark.

Here is a heartbreaker: after forty-nine years in the making, NASA is probably ready to pull the plug on Gravity Probe B after the bad grade it just received. After four years of flight, it could not do better than the experiment using the retro-reflectors left on the Moon during the Apollo 11, 14 and 15 missions. Which brings me to another subject that came up during the L1 meeting: somebody asked me about coded aperture cameras and their use in astronomy. They are used because photons outside the visible wavelengths (roughly) do not bend when going through matter the way visible light bends through a lens, so they cannot be focused. For this reason, people have been using coded aperture cameras to figure out where specific events in the sky were originating. For more, one can read these entries [1] and [2]. The NASA review that gave Gravity Probe B its very bad grade covers a number of missions that carry coded aperture cameras:

1. Swift, launched in 2004, has the Burst Alert Telescope (BAT); see the mask on the side.

6. WMAP (Wilkinson Microwave Anisotropy Probe), launched in 2001, does not have a coded aperture per se but rather a "cruder system," as shown here.

8. Integral (INTErnational Gamma-Ray Astrophysics Laboratory), launched in 2002, has three coded aperture cameras:

The spectrometer (SPI), which has a hexagonal coded aperture mask.

The IBIS Imager and the Joint European X-Ray Monitor (JEM-X) (mask is here)

Laurent Duval mentioned to me a call for papers on Wavelet Applications in Industrial Processing VI for the SPIE meeting in January in San Diego. The important dates are:

Abstract (500 words) Due Date: 16 June 2008
Final Summary (200 words) Due Date: 17 November 2008
Manuscript Due Date: 22 December 2008

[1] Compressed Sensing: Thoughts on Coded Aperture and Compressed Sensing
[2] Compressed Sensing: Compressive Coded Aperture Superresolution Image Reconstruction, TOMBO, CASSI


Anonymous said...

It's very interesting that you mention two new CS measurement matrices. Actually, these two matrices, Random Convolution and Random Kronecker Products, are very similar to our recent work on Structurally Random Matrices (SRM). As far as I know, SRM is a generalization of both Scrambled FFT and Random Convolution. In particular, Scrambled FFT is a special case of SRM with a global randomizer, and Random Convolution is a special case of SRM with a local randomizer. SRM implies that we can replace the FFT with any fast (dense) transform without negatively influencing the quality of reconstruction. We pay some theoretical price if we use a sparse transform such as a block diagonal transform, although in simulation the degradation is very small when a global randomizer is used. In addition, when compressively sampling smooth signals like images, we can take advantage of the existence of a sparsifying matrix (even without knowing it at the encoder side) to eliminate the F operator (FFT) of Random Convolution. It means that we only need to apply the operator F*D to the images and then randomly downsample the coefficients. Algorithmically, this is equivalent to doing a random sign reversal of the image pixels, applying some fast transform (e.g., the FFT), and then randomly downsampling the coefficients. If you look into the code of SRM (available at, you can see that we also used a Random Kronecker product approach to speed up the software. It is also possible to show in theory that this trick causes some degradation in performance, i.e., it requires a larger number (by a log(n) factor) of measurements than a completely random (full) matrix (e.g., a full Gaussian) does.
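[The three-step recipe in the comment above — random sign reversal, fast transform, random downsampling — can be sketched in a few lines. This is an illustrative sketch only, not the SRM authors' code; the sizes, the choice of the FFT as the fast transform, and the normalization are assumptions.]

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 256, 64  # signal length and number of measurements (illustrative)

x = rng.standard_normal(n)  # stand-in image, vectorized

# 1) Random sign reversal of the samples (the diagonal randomizer D).
d = rng.choice([-1.0, 1.0], size=n)

# 2) Apply a fast dense transform F (here the FFT, normalized).
coeffs = np.fft.fft(d * x) / np.sqrt(n)

# 3) Randomly downsample the transform coefficients (the operator R).
keep = rng.choice(n, size=m, replace=False)
y = coeffs[keep]
```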

I'm really excited to see these techniques also used at some other groups. Thank you for your blog. It's very informative and useful for me.

Thong Do (

Anonymous said...

Just to mention, Yin Zhang's very interesting discussion of random Kronecker product matrices also applies to the deterministic Kronecker product matrices that we use in our "Practical recipes for large-scale quantum simulation" (arxiv: 0805.1844).

As our review summarizes, there is a vast quantum physics literature on these Kronecker product objects ... once we appreciate that the quantum physicists insist on calling them by funny names like "matrix product states" and "Hartree-Fock states" and that geometers insist on calling them equally funny names like "Kahlerian algebraic varieties" and "Grassmannians". Everyone is talking about the same mathematical objects, however.

Quantum physicists embrace Kronecker-type objects for the same reason that the CS community embraces them: compact storage, fast calculations, and high-fidelity representations.

The "aha" required to link the CS and quantum physics is quite simple ... quantum states are compressible objects.

In fact, modern quantum information theory even provides a reasonably satisfactory informatic explanation of why quantum states are compressible objects (it's because quantum dynamical systems that are in contact with thermal reservoirs evolve into compressible objects ... this can be regarded as the definition of a thermal reservoir).

Also, just to mention, the matrix product state (MPS) community plays tricks with outer products that I don't think have yet been tried in classical signal processing ... there is a finite possibility that these tricks might be broadly useful.

It appears that a golden era of sampling matrices is opening up ... :)

Igor said...

Thong and John, thanks for the comments. I need to put them on the front page. Thong, you may have noticed I put your construction at the top of the measurement section of the big picture ( ).