
Monday, July 25, 2011

Comparing a Single Pixel Camera, a Traditional Coded Aperture and a Compressive Coded Aperture Image of Saturn

Laurent Jacques reminded me of this excellent comparison study between different compressive sensing architectures and traditional coded apertures, featured in Compressive Optical Imaging: Architectures and Algorithms by Roummel F. Marcia, Rebecca M. Willett, and Zachary T. Harmany.



I like the challenge section at the very end:

....First, directly implementing CS theory by collecting a series of independent pseudorandom projections of a scene requires either (a) a very large physical system or (b) observations collected sequentially over time. This latter approach is successfully used, for instance, in the Rice Single Pixel Camera. Alternative snapshot architectures (which capture all observations simultaneously) with a compact form factor include coded aperture techniques. These approaches impose structure upon the pseudorandom projections, most notably by limiting their independence. As a result, the number of measurements required to accurately recover an image is higher with snapshot coded aperture systems.
A second key challenge relates to the nonnegativity of image intensities and measurements which can be collected by linear optical systems. Much of the theoretical literature on CS allows for negative measurements and does not consider nonnegativity during the reconstruction process. In this chapter we have shown that (a) explicitly incorporating nonnegativity constraints can improve reconstruction accuracy and (b) pre-processing observations to account for nonnegative sensing matrices improves reconstruction performance because of central assumptions underlying some fast CS algorithms. However, one important open question is whether novel approaches based on nonlinear optics can successfully circumvent these positivity constraints to improve performance.
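The two paragraphs above can be sketched numerically. Here is a minimal, hypothetical Python simulation (the sizes, names, and parameters are my own, not from the chapter): a single-pixel-camera-style acquisition with a nonnegative 0/1 sensing matrix, the mean-subtraction preprocessing that makes the system look zero-mean to standard CS solvers, and a nonnegativity-constrained ISTA reconstruction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scene: 8 bright "stars" on a dark background, sparse in the pixel basis.
n, m, k = 256, 128, 8
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.uniform(1.0, 2.0, size=k)

# A linear optical system can only apply nonnegative weights, so model the
# single-pixel camera's DMD as a sequence of 0/1 pseudorandom patterns,
# collected one scalar measurement at a time.
A = rng.integers(0, 2, size=(m, n)).astype(float)
y = A @ x

# Preprocessing for nonnegative sensing matrices: one extra all-open pattern
# measures the total intensity; subtracting half of it from each measurement
# turns the 0/1 system into an equivalent zero-mean (+/- 0.5) system, which is
# what many fast CS algorithms implicitly assume.
total = np.ones(n) @ x
B = A - 0.5
y_B = y - 0.5 * total

# Nonnegativity-aware reconstruction: ISTA for the lasso, with the
# soft-threshold step clipped at zero so the estimate stays a valid image.
L = np.linalg.norm(B, 2) ** 2        # Lipschitz constant of the gradient
lam = 0.1                            # sparsity penalty (arbitrary choice)
xhat = np.zeros(n)
for _ in range(1000):
    grad = B.T @ (B @ xhat - y_B)
    xhat = np.maximum(xhat - grad / L - lam / L, 0.0)
```

This is only a sketch of the two ideas in the quote; the chapter's own algorithms (and the noise models they handle) are more sophisticated.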

I added the links to the text for a rapid connection to the examples given. But the item that I found most interesting, because it does not seem to be a very active area of research, is the following statement on the fill factor of the coded aperture:

The coded aperture work discussed earlier is focused on developing a coded aperture system with a fill factor of 50%; i.e., 50% of the positions in the coded mask were opaque, and the remainder allowed light through to the detector. In low-light settings, this high fill factor is desirable because it allows a significant proportion of the light through to the detector and “wastes” very few photons. This approach is particularly effective when the scene is sparse in the canonical or pixel basis (e.g., faint stars against a dark sky). However, when the scene is sparse in some other basis, such as a wavelet basis, and not sparse in the pixel basis, a large fill factor can cause significant noise challenges. These challenges are described from a theoretical perspective in [28]. Intuitively, consider that when the fill factor is close to 50%, most of the coded aperture measurements will have the same average intensity plus a small fluctuation, and the photon noise level will scale with this average intensity. As a result, in low light settings the noise will overwhelm the small fluctuations which are critical to accurate reconstruction unless the scene is sparse in the pixel basis and the average intensity per pixel is low. These challenges can be mitigated somewhat by using smaller fill factors, but in general limit the utility of any linear optical CS architecture for very low-intensity images which are not sparse in the canonical basis [28].
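To get some intuition for that argument, here is a rough numerical sketch (my own construction, not from [28]): for a dim, smooth scene that is not sparse in the pixel basis, compare the size of the informative fluctuations in the coded-aperture measurements to the Poisson noise level, for several fill factors.

```python
import numpy as np

rng = np.random.default_rng(1)

# A dim, smooth scene: every pixel is lit, so it is not sparse in the pixel
# basis (think of it as sparse in a wavelet basis instead). Values are mean
# photon counts, deliberately small to model a low-light setting.
n = 1024
x = 0.5 + 0.1 * np.sin(np.linspace(0.0, 4.0 * np.pi, n))

def fluctuation_vs_noise(fill, m=2000):
    """Informative fluctuation of 0/1-mask measurements vs Poisson noise."""
    A = (rng.random((m, n)) < fill).astype(float)  # random mask, given fill
    y = A @ x                                      # noiseless measurements
    signal = y.std()             # small fluctuations around the mean level
    noise = np.sqrt(y.mean())    # photon noise scales as sqrt(mean count)
    return signal / noise

ratios = {fill: fluctuation_vs_noise(fill) for fill in (0.5, 0.25, 0.1)}
for fill, r in ratios.items():
    print(f"fill factor {fill:4.2f}: fluctuation/noise = {r:.2f}")
```

In this toy setting the fluctuations at a 50% fill factor sit below the photon-noise level, and shrinking the fill factor improves the ratio only modestly, consistent with the chapter's "mitigated somewhat."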

Is this calling for a third way of mixing light rays? Does this call for sparse measurement matrices? There are many important questions to be answered thanks to Roummel Marcia, Rebecca Willett, and Zachary Harmany.

David San Segundo let me know of a similar but different paper entitled Compressive Sensing for Practical Optical Imaging Systems: A Tutorial.

It talks about different technology implementations but also makes a good reference to the Donoho-Tanner phase transition, which, in a tutorial dedicated to hardware, is good news! Thanks David!
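For readers unfamiliar with it, the Donoho-Tanner phase transition says that l1 recovery of a k-sparse signal from m Gaussian measurements of an n-dimensional signal succeeds with high probability when the sparsity ratio rho = k/m is below a threshold that depends on the undersampling ratio delta = m/n, and fails above it. A small, hypothetical sketch (sizes and seeds are my own; basis pursuit is solved exactly as a linear program via scipy's linprog):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

def l1_recover(A, y):
    """Basis pursuit min ||x||_1 s.t. Ax = y, as an LP with x = u - v, u,v >= 0."""
    m, n = A.shape
    res = linprog(c=np.ones(2 * n),
                  A_eq=np.hstack([A, -A]), b_eq=y,
                  method="highs")                 # default bounds are (0, None)
    return res.x[:n] - res.x[n:]

n, m = 100, 50                       # undersampling delta = m/n = 0.5
A = rng.standard_normal((m, n)) / np.sqrt(m)

errs = {}
for k in (5, 35):                    # well below vs well above the transition
    x = np.zeros(n)
    support = rng.choice(n, size=k, replace=False)
    x[support] = rng.choice([-1.0, 1.0], size=k)
    xhat = l1_recover(A, A @ x)
    errs[k] = np.linalg.norm(xhat - x) / np.linalg.norm(x)
    print(f"k={k:2d} (rho = {k/m:.2f}): relative error = {errs[k]:.2e}")
```

The sparse instance is recovered essentially exactly, while the dense one fails badly, which is the sharp success/failure behavior the phase transition describes.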
