Tuesday, August 12, 2008

CS: A Short Discussion with Gerry Skinner, a Specialist in Coded Aperture Imaging.

Gerry Skinner is a specialist in Coded Aperture Imaging as it relates to astronomical observations of X-ray events in the sky. Because of the peculiar title of his insightful paper entitled Coded Mask Imagers: when to use them, and when not?, I decided to contact him and ask him whether he knew about Compressive Sensing. It is very rare to have a specialist tell you that the technology he has worked on for a while is not the technology you should use! Here is the small back-and-forth discussion we had. Gerry gave me permission to publish it on the blog. As an aside, I did not know about the group testing fertilizer problem. As I mentioned before, it might be wise to view this conversation as a way for the Compressive Sensing community to clearly articulate how and when coded aperture should be used as opposed to direct systems. After having introduced the subject to him by pointing to the paper of Roummel Marcia and Rebecca Willett [1], here is the discussion that ensued:

Gerry responded to my initial query with:
I was aware of compressed sensing only in very general terms and, interested by your mail, I have now done a little browsing around on the subject. There are clearly links between coded mask imaging and compressive sensing, though really because both are members of a wider class of techniques (though I suppose you could apply the term 'compressive sensing' to that whole class). I have long been very aware of the links between coded mask imaging and other indirect measurement techniques, for example, radio and optical interferometric imaging, Fourier transform spectroscopy and synthetic aperture radar (also radars that transmit a coded string of pulses). Further afield I have noted analogies with 'experiment design' problems - for example, if you want to examine the effects of different fertilizers and crop treatments, instead of dedicating one part of your field to each, you can apply carefully chosen combinations.
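[To illustrate Gerry's experiment design analogy, here is a toy sketch of mine, not his: a Hadamard "weighing design" in which every measurement mixes all the unknowns. Under additive noise of fixed variance, the kind of signal-independent noise discussed below, measuring combinations beats measuring one quantity at a time. All the numbers are invented.]

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8                        # number of unknown quantities (e.g. treatment effects)
x = rng.uniform(1, 10, n)    # true values
sigma = 1.0                  # per-measurement noise, independent of the signal
trials = 2000

# Sylvester construction of an n x n Hadamard matrix: each row is a
# +/-1 combination of all the unknowns measured in a single experiment.
H = np.array([[1.0]])
while H.shape[0] < n:
    H = np.block([[H, H], [H, -H]])

direct_mse = np.empty(trials)
design_mse = np.empty(trials)
for t in range(trials):
    # One-at-a-time: n separate measurements, one per unknown.
    y_direct = x + sigma * rng.standard_normal(n)
    direct_mse[t] = np.mean((y_direct - x) ** 2)
    # Combined measurements, then invert the design (H.T @ H = n * I).
    y_mixed = H @ x + sigma * rng.standard_normal(n)
    design_mse[t] = np.mean((H.T @ y_mixed / n - x) ** 2)

print(f"one-at-a-time MSE:   {direct_mse.mean():.3f}")  # ~ sigma^2 = 1.0
print(f"Hadamard design MSE: {design_mse.mean():.3f}")  # ~ sigma^2 / n = 0.125
```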

Sometimes you are obliged to measure something other than what you would really like to get at. Other times there is an advantage in doing so.
Coded mask imaging is used for a combination of these reasons. In many cases in high energy astronomy you can't build a focusing imaging system that would directly record the image you would like to get (at present that is still the case for gamma-rays). There is an alternative - a pinhole camera. In high energy astronomy, a coded mask system usually wins compared with that, but not compared with a hypothetical focusing system.

Even if you are forced to use non-focusing optics, it is important to think about when and why coded mask imaging wins. Apart from some very specific cases with few photons, it only does so if there is a detector background noise that doesn't increase as you let more and more photons in from the source. In space this is often the case because of high energy particles interacting in the detector. On the ground, attempts to use coded masks for medical imaging tend to find that there is no advantage to be obtained. A parallel situation occurs with Fourier Transform Spectroscopy. In the far infrared, FTS offers an advantage over a scanning IR spectrometer because the thermal noise in the detector remains the same even if you are making measurements at many wavelengths at the same time. It doesn't help in the visible because there the detector noise can be mainly photon noise (shot noise on the signal).
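[Gerry's point about background noise can be put in numbers with a back-of-the-envelope model. The sketch below is my toy calculation, not his: a mask with n_open of n_elem elements open, Poisson statistics, and the standard approximation that every reconstructed pixel carries shot noise from all detected photons. All fluxes are made up.]

```python
import numpy as np

def snr_pinhole(s, bkg):
    """Toy SNR of one sky pixel imaged through a single pinhole:
    s source counts, bkg signal-independent background counts."""
    return s / np.sqrt(s + bkg)

def snr_coded_mask(s, s_total, bkg, n_elem, n_open):
    """Toy SNR of the same pixel behind a coded mask: the open area
    multiplies the signal, but every reconstructed pixel sees shot
    noise from ALL detected photons plus background in every element."""
    return n_open * s / np.sqrt(n_open * s_total + n_elem * bkg)

n_elem, n_open = 128, 64     # half-open mask
s = 10.0                     # counts from the (point) source of interest

# Space-like case: an isolated source plus strong detector background.
print(snr_pinhole(s, bkg=1e4))                                    # ~0.10
print(snr_coded_mask(s, s_total=s, bkg=1e4,
                     n_elem=n_elem, n_open=n_open))               # ~0.57: mask wins

# Ground-like case: no detector background, bright extended emission.
print(snr_pinhole(s, bkg=0.0))                                    # ~3.16
print(snr_coded_mask(s, s_total=s + 1e4, bkg=0.0,
                     n_elem=n_elem, n_open=n_open))               # ~0.80: mask loses
```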

That is why, despite having worked on them for 30 years, I spend a lot of time trying to persuade people not to use coded mask telescopes, or at least not to assume they can do magic in all circumstances (it is one of the reasons that I would like to develop gamma-ray lenses). Similarly, with any indirect measurement technique you have to examine very carefully what you are gaining.

As regards the specific questions in your mail:

The data from two major space missions that are currently in orbit and that use coded masks, along with software to allow their analysis, are public. I refer to NASA's Swift mission (http://heasarc.gsfc.nasa.gov/docs/swift/archive/) and ESA's Integral (http://isdc.unige.ch/index.cgi?Data+info).
Getting into the analysis of complex data such as these missions produce is not trivial, though. The data analysis systems of the two missions are both based on the FITS file format, but they are not the same.

Because the instrument design is such that cross-correlation algorithms are close to the best one can do, most of the standard software depends on this. Non-linear techniques have been used (particularly ones based on likelihood and on maximum entropy), and for Integral some software is available to do this. Personally, I have never seen any advantages that outweigh the problems in quantitative interpretation of the results that arise from the implicit non-linearity.
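[For readers who want to see the cross-correlation idea concretely, here is a minimal 1D sketch of mine; it is not the missions' software. It uses a quadratic-residue pattern, a 1D cousin of the URA masks flown on real instruments, for which balanced cross-correlation decoding is essentially exact.]

```python
import numpy as np

rng = np.random.default_rng(0)
p = 127                                    # prime with p % 4 == 3
qr = {(k * k) % p for k in range(1, p)}    # quadratic residues mod p
mask = np.array([1.0 if i in qr else 0.0 for i in range(p)])  # 63 open elements

# Cyclic camera: detector element i sees sky pixel j through mask[(i - j) % p].
A = np.array([[mask[(i - j) % p] for j in range(p)] for i in range(p)])

sky = np.zeros(p)
sky[5], sky[40] = 200.0, 80.0              # two point sources
detector = rng.poisson(A @ sky + 10.0)     # Poisson counts over a flat background

# Balanced cross-correlation decoding: correlate the counts with the
# +/-1 version of the mask; off-peak response and flat background cancel.
g = 2 * mask - 1
recon = np.array([np.roll(g, j) @ detector for j in range(p)])

print(np.argsort(recon)[-2:])              # the two brightest pixels: 40 and 5
```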


Taken aback by his observations, I then asked:

I have seen you reference Roberto Accorsi in a previous paper [also mentioned on this blog here]; is this statement (about the situation on the ground) coming out of discussions with him?


Gerry kindly responded with:

I have had discussions with him recently, but also with quite a number of other people in medical and industrial fields in several countries over a good many years. Even where there has been initial enthusiasm, there has been an eventual realization that coded aperture techniques offer them no real advantage. In addition to the all-important background noise issue that I mentioned, there is also the question of how the noise is distributed over the image (or equivalent space) after the reconstruction process. Still talking in imaging terminology, with a one-to-one correspondence between measurements and final image pixels there is less effect of a bright region on low brightness ones than with, for example, a coded mask imager, which spreads noise more evenly over the image. Unless you are dominated by signal-independent detector noise, the image noise may be lower on average, but it tends to be higher for bright regions. This is often the opposite of what physicians want.
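[This noise-redistribution effect is easy to reproduce. Extending the 1D mask sketch from earlier, the toy simulation below, again mine and not Gerry's, puts one very bright region in an otherwise faint scene and compares the per-pixel noise of a one-to-one imager with that of the coded mask, both with purely photon (signal-dependent) noise.]

```python
import numpy as np

rng = np.random.default_rng(1)
p = 127
qr = {(k * k) % p for k in range(1, p)}
mask = np.array([1.0 if i in qr else 0.0 for i in range(p)])
A = np.array([[mask[(i - j) % p] for j in range(p)] for i in range(p)])
G = np.array([np.roll(2 * mask - 1, j) for j in range(p)])   # decoding matrix

sky = np.full(p, 5.0)
sky[60] = 5000.0                   # one bright region in a faint scene

trials = 2000
direct = np.empty((trials, p))     # one-to-one imaging (e.g. focusing optics)
coded = np.empty((trials, p))
for t in range(trials):
    direct[t] = rng.poisson(sky)
    coded[t] = G @ rng.poisson(A @ sky) / 63.0   # decode, convert to flux units

print("faint-pixel noise, direct:", direct[:, 10].std())  # ~sqrt(5) ~ 2.2
print("faint-pixel noise, coded :", coded[:, 10].std())   # ~9.5: the bright
                                                          # region raises noise
                                                          # everywhere
```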


After I acknowledged the following:

A parallel situation occurs with Fourier Transform Spectroscopy. In the far infrared, FTS offers an advantage over a scanning IR spectrometer because the thermal noise in the detector remains the same even if you are making measurements at many wavelengths at the same time. It doesn't help in the visible because there the detector noise can be mainly photon noise (shot noise on the signal).

Gerry further made the point:
This issue is crucial. You have to be able to explain where and why a proposed indirect technique offers an advantage.

A long time ago I developed something very similar to the one-pixel camera. We placed a rotating disk in front of a non-imaging Indium Antimonide IR detector (the only sort available at that time). It was arranged to place a series of near-orthogonal coded-mask-like patterns in front of the detector. Although it worked in general terms and we demonstrated it on a telescope in the Canary Isles, before it was fully debugged, pixellated IR detectors developed for the military started becoming available and the incentive for the work disappeared - they were bound to do better.
Have you found and read Martin Harwit's book "Hadamard Transform Optics", written at about the same time (1979) as the above work and containing the same ideas (and more)?...
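[Gerry's rotating-disk instrument and Harwit's book both revolve around the same trick, which can be sketched in a few lines. This is my toy version under idealized, noiseless assumptions, not a description of his hardware: a single non-imaging detector looks at the scene through a sequence of orthogonal open/closed Hadamard patterns, and the scene is recovered by inverting the transform.]

```python
import numpy as np

n = 64
# Sylvester Hadamard matrix: its rows define the sequence of patterns.
H = np.array([[1.0]])
while H.shape[0] < n:
    H = np.block([[H, H], [H, -H]])

rng = np.random.default_rng(2)
scene = rng.uniform(0, 1, n)        # unknown scene, flattened to n pixels

# A physical mask can only pass or block light, so display (1 + H) / 2
# (entries 0 or 1) and record one total-flux reading per pattern.
readings = (1 + H) / 2 @ scene

# Row 0 of H is all ones, so readings[0] is the total flux; use it to
# convert the 0/1 readings into the ideal +/-1 Hadamard measurements.
y = 2 * readings - readings[0]
recovered = H.T @ y / n             # H is orthogonal: H @ H.T = n * I

print(np.allclose(recovered, scene))   # True in this noiseless toy model
```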


.... One of the incentives for us in using these techniques (likelihood and maximum entropy [see refs. [3, 4, 5]]) is that they can properly handle Poisson statistics where least-squares techniques do not. As a by-product of this they normally implicitly introduce a positivity constraint, which some people like on the basis that sky intensities cannot be negative (though I find the associated non-linearity problematic). And they can be configured to use whatever prior information you have about the scene (provided you can formulate what it is that you think you know).
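[The positivity point is easy to see in the classic expectation-maximization iteration for Poisson data, the MLEM / Richardson-Lucy scheme, a likelihood method in the spirit of, though much simpler than, those in refs. [3, 4, 5]: the update is multiplicative, so a positive starting image stays nonnegative. Below is a minimal sketch of mine reusing the 1D coded mask model from earlier.]

```python
import numpy as np

rng = np.random.default_rng(3)
p = 127
qr = {(k * k) % p for k in range(1, p)}
mask = np.array([1.0 if i in qr else 0.0 for i in range(p)])
A = np.array([[mask[(i - j) % p] for j in range(p)] for i in range(p)])

sky = np.zeros(p)
sky[5], sky[40] = 200.0, 80.0
bkg = 10.0
counts = rng.poisson(A @ sky + bkg)

# MLEM / Richardson-Lucy: multiplicative updates that climb the Poisson
# likelihood; the estimate can never go negative.
x = np.ones(p)                     # flat, strictly positive starting image
sens = A.sum(axis=0)               # sensitivity of each sky pixel (here 63)
for _ in range(200):
    pred = A @ x + bkg             # expected counts given the current image
    x *= (A.T @ (counts / pred)) / sens

print(np.argsort(x)[-2:])          # brightest reconstructed pixels: 40 and 5
print(x.min() >= 0)                # positivity is preserved: True
```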
Let us note that Christopher Brown made a similar comment at the end of his thesis, entitled "Multiplex Imaging with Random Arrays", back in 1972 [2, p. 132]:

Multiplexing can only be recommended with care, since the advantages it provides with signal-independent noise can be canceled or overcome by signal dependent noise. Detailed analysis and comparison with a competitive non multiplexing system revealed that multiplexing would often be advantageous, even discounting nonlinear effect in the film. The analysis also showed the amount of nonlinear advantage which would overcome disadvantages produced by multiplexing.

There are many themes here that have already been picked up in CS, and I am sure most of them can be made clearer across different engineering fields. The arrival of wavelets in the numerical analysis world provided some common ground and eventually led to advances in very different techniques and engineering fields. I can clearly see CS helping us understand indirect systems better and lay out better theoretical bounds, while providing additional data and means of discovery. Right now, one can only hope that the different CS hardware implementations will be analyzed the way it is currently done for coded mask telescopes [6].

Gerry eventually added:
The only thing that I would add is that in experiment design the problem is often assumed to be linear, or described by a variance/covariance matrix, and the example I gave [the fertilizer problem] may not be well chosen. A couple of references from the vast literature on this subject are:

R. A. Bailey, Association Schemes: Designed Experiments, Algebra and Combinatorics (contributor: B. Bollobas), Cambridge University Press, 2004. ISBN 052182446X, 9780521824460.

R. Mead, "The non-orthogonal design of experiments," Journal of the Royal Statistical Society, Series A, 153 (1990), 151-201.

I suspect that you will find that there is much in common with what you term compressive sensing.

Gerry's list of publications can be found here.

[1] Roummel Marcia and Rebecca Willett, "Compressive Coded Aperture Superresolution Image Reconstruction" (the slides are here).
[2] Christopher M. Brown, "Multiplex Imaging with Random Arrays," Ph.D. Thesis, Institute for Computer Research, University of Chicago, 1972.
[3] G. K. Skinner and M. R. Nottingham, "Analysis of data from coded-mask telescopes by maximum likelihood," Nuclear Instruments and Methods in Physics Research A 333(2-3), 540-547.
[4] D. Ustundag, N. M. Queen, G. K. Skinner and J. E. Bowcock, "Two new methods for retrieving an image from noisy, incomplete data and comparison with the Cambridge MaxEnt package," International Workshop on Maximum Entropy and Bayesian Methods (MaxEnt 90), 295-301.
[5] Andrew Hammersley, Trevor Ponman and Gerry Skinner, "Reconstruction of images from a coded-aperture box camera," Nuclear Instruments and Methods in Physics Research A 311(3), 585-594.
[6] Gerald K. Skinner, "Sensitivity of coded mask telescopes," Applied Optics 47(15), 2739-2749.


Credit photo: NASA/JPL/Space Science Institute, the closest photo of Enceladus ever. The image was taken with the Cassini spacecraft narrow-angle camera on Aug. 11, 2008, at a distance of approximately 1,288 kilometers (800 miles) above the surface of Enceladus. Image scale is approximately 10 meters (33 feet) per pixel.
