Thursday, January 31, 2008

Compressed Sensing: Thoughts on Coded Aperture and Compressed Sensing

In previous entries, we saw the large body of work at Duke and the University of Arizona using Coded Aperture to perform hyperspectral measurements, superresolution and many other things.

It was also mentioned that coded aperture has been used in Astronomy for the past forty years and that a recent surge of interest has been developing in X-ray and medical imaging (here and here). In this entry, we also saw the use of coded aperture to perform gamma radiation detection over large fields of view. In another post, one could learn more about computational optical imaging, but the basic question is:

What is Coded Aperture ?

I took the following picture from [1], who took it from [2].


It certainly looks to me like a scheme that resembles that of Compressed Sensing or Compressive Sampling. In other words, the recorded data needs a computer decoding procedure to reconstruct the initial image. But the next question is:
Why would you want to do that ?

Think of every hole in the mask as the equivalent of a pinhole camera. The reason you want several holes is that you want as much light (or as many other particles) as possible in your image in order to reduce the overall noise, or because you want to see the same object from different angles. With several holes, one now has several images superimposed on each other. On the one hand, more light reaches the image (less noise, higher SNR), but on the other hand, one has to put a procedure in place to reassemble all these images.
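As a purely illustrative aside (this toy code and its parameters are my own assumptions, not taken from any of the papers above), the forward model can be sketched as a periodic convolution of a sparse scene with a binary mask: every open hole casts a shifted copy of the scene onto the detector.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 64                                            # detector/object size (n x n), arbitrary
mask = (rng.random((n, n)) < 0.5).astype(float)   # random binary aperture, ~50% open

# A sparse "scene": a few point sources on a dark background
obj = np.zeros((n, n))
obj[10, 12] = 1.0
obj[40, 33] = 0.7
obj[25, 50] = 0.4

def periodic_convolve(x, h):
    """Circular 2-D convolution via FFT (wrap-around geometry, for simplicity)."""
    return np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(h)))

# Recorded pattern R = O * A: every open hole contributes a shifted copy of the object
recorded = periodic_convolve(obj, mask)
recorded += rng.poisson(5.0, size=recorded.shape)  # toy background counts (noise)
```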


But how do you get that image back ?

The answer is well explained in Roberto Accorsi's thesis [3]:

Different point sources are characterized by the pattern shift. In this sense the signal from the source is encoded. The first consequence is that a point source is not counted once, but once for every pinhole of the coded aperture, which is expected to increase the counting statistics and, consequently, the SNR. The second consequence is that each detector point presents information about many points of the source: in this sense information is multiplexed. The third consequence is that the recorded pattern R must be decoded to obtain an image. In intuitive terms, R must be scanned looking for known patterns, which must be replaced with the point sources that cast them. This is the same as separating the overlapped copies and is done by an operation known in signal processing as "matched filtering" ([5]). The technique prescribes taking the correlation of the collected data with the known pattern, in our case A. In a more general case, a pattern may be sought not through the pattern itself, but through an associated decoding pattern (or decoding array) G such that:

A x G = δ

where x indicates periodic correlation. The matched filtering process is:

R x G

The result of this operation is to produce a perfect copy of the object O. In fact, given the linearity of correlation operations and eq. (1.1), one can write (using appendix A.3):

O_bar = R x G = O * (A x G)

where O_bar is by definition the estimate of the object or reconstructed image. This chain of equalities shows that the output of the imaging system is not directly the object but, as in all linear systems, a convolution of the object with a kernel, in this case A x G.
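Continuing the toy sketch above (again my own illustrative code, not Accorsi's), matched-filter decoding amounts to periodically correlating the recorded pattern R with a decoding array G. Taking G as the mean-subtracted mask makes A x G approximately a delta for a random mask (exactly so for URAs), and the correlation R x G then peaks at the source locations:

```python
def periodic_correlate(x, h):
    """Circular 2-D cross-correlation via FFT."""
    return np.real(np.fft.ifft2(np.fft.fft2(x) * np.conj(np.fft.fft2(h))))

# Decoding array: the mask with its mean removed, so that A x G ~ delta
G = mask - mask.mean()

o_bar = periodic_correlate(recorded, G)            # O_bar = R x G = O * (A x G)
peak = np.unravel_index(np.argmax(o_bar), o_bar.shape)
print("brightest reconstructed pixel:", peak)      # expected to land on the strongest point source
```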

And so most of the effort in the literature has been dedicated to finding the right pair of mask A and decoding mask G that fulfills the periodic correlation result. I think this is where Compressed Sensing can bring a new approach to the subject. Whereas in traditional Coded Aperture work the emphasis is on finding the right mask whose combination with G yields a clean autocorrelation function for all signals, what about a mask that works for sparse signals, using an L0-L1 reconstruction technique instead of a decoding mask G? The setting lends itself to this to a certain extent, in that there is an expectation that one is looking at determining the location of a few points of light (or radiation) in a sea of background noise (especially in Astronomy or Medical Imaging).
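To make that idea concrete, here is a hedged 1-D sketch (toy sizes and a generic solver of my own choosing, not any published pipeline): each row of a random 0/1 "mask" is treated as one multiplexed measurement, and the sparse nonnegative scene is recovered by L1 minimization via a linear program, with no decoding array G involved.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)

n, m, k = 128, 48, 3                              # scene size, measurements, sparsity (toy values)
Phi = (rng.random((m, n)) < 0.5).astype(float)    # rows of a random 0/1 mask: open/closed cells

x_true = np.zeros(n)                              # a few point sources in a dark scene
x_true[rng.choice(n, size=k, replace=False)] = rng.uniform(1.0, 2.0, size=k)

y = Phi @ x_true                                  # multiplexed, noiseless measurements

# Nonnegative basis pursuit: minimize sum(x) subject to Phi x = y, x >= 0
res = linprog(c=np.ones(n), A_eq=Phi, b_eq=y, bounds=(0, None))
print("max reconstruction error:", np.abs(res.x - x_true).max())
```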

One should note that random masks also exist in that literature.

Clearly the method seems to have problems with images that contain items other than point sources (either light or radiation), particularly in the context of using coded aperture for radiation systems, and especially in medical imaging where one is starving for photons/particles. Roberto Accorsi [3] mentions:
In conclusion, coded aperture imaging can produce a perfect copy of the object. It is a two step process: the first is physical, the projection of the source through the aperture; the second is computational decoding. The motivation to go through this complication is the potential of achieving a higher SNR. Chapter 4 is dedicated to quantifying this potential and understanding the hypothesis under which it is actually present.

If the object is extended, the advantage is not as large. Several point sources project overlapping mask patterns, so each detector records information from more than one source and measurements are not independent. Overlaps, i.e. multiplexing, cause some costs in terms of SNR, depending on the extent of superposition. We shall see that if an object takes the whole field, the SNR advantage may be lower than 1, i.e. a pinhole image would offer a better SNR, a result well known in the literature ([31],[52]). Two conflicting needs arise: opening more pinholes, to increase the SNR advantage sqrt(N), and opening fewer pinholes, to reduce superposition. This trade-off is the necessary (but, as we shall see, not sufficient) condition for the existence of an optimal mask open fraction.
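The sqrt(N) figure quoted above follows from simple Poisson counting for an isolated point source; here is a back-of-the-envelope restatement (my own, not a quotation from [3]):

```latex
% Poisson-limited counting for a single point source (illustrative sketch)
\[
\mathrm{SNR}_{1\ \mathrm{pinhole}} = \frac{C}{\sqrt{C}} = \sqrt{C},
\qquad
\mathrm{SNR}_{N\ \mathrm{pinholes}} = \frac{NC}{\sqrt{NC}} = \sqrt{NC},
\qquad
\frac{\mathrm{SNR}_{N}}{\mathrm{SNR}_{1}} = \sqrt{N}.
\]
% For extended objects the overlapping copies add background under every source,
% which erodes this advantage and drives the trade-off on the mask open fraction.
```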


Indeed from [4], "Because the reconstructed image of the URA contains virtually uniform noise regardless of the original object's structure, the improvement over the single pinhole camera is much larger for the bright points than it is for the low intensity points."


I note that sometimes the issue is not just SNR, as in the particle detection case, but rather the ability to reconstruct a 3-D view of the scene of interest. I also note the following words from [3, page 192] about reconstructing 3D information:

A thorough investigation shows that the restricted view angle results in severe undersampling of some regions of the Fourier space ([72]). Criteria for successful imaging (concerning the extent of the object in real and Fourier space) have also been given starting from the formulation of a sampling theorem which shows that the finite aperture bandlimits the spatial frequencies present in the object ([73]).

mmmuhh, severe undersampling of some regions of the Fourier space. But it looks like, six years later, the framework makes it possible to think about obtaining that third dimension [5]. In Analytic derivation of the longitudinal component of the three-dimensional point-spread function in coded-aperture laminography by Roberto Accorsi [5], one can read the following abstract:
Near-field coded-aperture data from a single view contain information useful for three-dimensional (3D) reconstruction. A common approach is to reconstruct the 3D image one plane at a time. An analytic expression is derived for the 3D point-spread function of coded-aperture laminography. Comparison with computer simulations and experiments for apertures with different size, pattern, and pattern family shows good agreement in all cases considered. The expression is discussed in the context of the completeness conditions for projection data and is applied to explain an example of nonlinear behavior inherent in 3D laminographic imaging.

Irrespective, it would seem that the current effort in compressed sensing should allow for better reconstructions. Furthermore, I also note the relative similarity between the density of the coded aperture array, which describes how much light passes through the mask, and the Bernoulli matrices and the sparse binary construction of Radu Berinde and Piotr Indyk.
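For comparison, here is a quick sketch (parameters are my own and purely illustrative) of the kind of sparse binary matrix Berinde and Indyk study, where each column has a fixed small number d of ones, next to a dense 0/1 Bernoulli "mask-like" matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, d = 40, 200, 6                 # measurements, signal length, ones per column (toy values)

# Dense Bernoulli(1/2) 0/1 matrix: analogous to a ~50% open coded aperture mask
bernoulli = (rng.random((m, n)) < 0.5).astype(int)

# Berinde-Indyk-style sparse binary matrix: exactly d ones per column
sparse_binary = np.zeros((m, n), dtype=int)
for j in range(n):
    sparse_binary[rng.choice(m, size=d, replace=False), j] = 1

print("density:", bernoulli.mean(), "vs", sparse_binary.mean())   # ~0.5 vs d/m
```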

Since many copies of the same scene are superimposed on each other and the reconstruction amounts to a matched filter approach, could we use the smashed filter approach devised by the group at Rice to find all the copies of the same item in the image plane?

Another intriguing paper is that of Netanel Ratner, Yoav Y. Schechner, and Felix Goldberg on Optimal multiplexed sensing: bounds, conditions and a graph theory link [6].

Particularly interesting are the following words:
"...Each of the measurements is prone to noise. This measurement noise propagates to an error in the estimation of the desired array of variables. This problem is exacerbated by constraints on the system and the object. For example, according to Ref. [3], constrains on system weight, price and power consumption of infrared spectrometers may require the use of uncooled detectors, which are generally noisy. In another example, the radiating object may be wide and diffuse [4], making its coupling into a narrow slit of a spectrometer inefficient, thus yielding a low signal, relative to the noise. For a given acquisition time, however, noise can be efficiently reduced using multiplexing [2–7,9,18–23]. Here, in each sequential measurement, multiple elements of the sought array (e.g., multiple spectral bands) are linearly weighted (multiplexed) and sensed simultaneously at a detector. In sequential measurements, the multiplex weighting changes. The set of weights is termed a multiplexing code. Once the measurements are done, computational demultiplexing derives the sought variable array.......The sequential measurements acquired at a detector are denoted by the vector a = (a 1,a2, . . . ,aN)t , given by a = Wi+υ . (1)
Here υ is the measurement noise. Any bias to this noise is assumed to be compensated for. Let the noise υ be uncorrelated in different measurements, and let its variance be σ 2 a .
Once measurements have been acquired under multiplexed sources, those measurements are de-multiplexed computationally. This derives estimates for the irradiance values of the object under the individual radiation sourcesˆi. The best unbiased linear estimator in the sense of mean square error (MSE) for the irradiance corresponding to the individual-sources is ˆi =W−1a . (2)
The MSE of this estimator [9, 18] is MSEˆi = σ 2 a N traceWtW−1 .......Hadamard codes are special cases of our analysis, hence this work significantly generalizes the known art..."

Strangely enough, the objective function used in the optimization problem devised to reduce the mean square error (MSE) of the reconstructed signal is the trace of the inverse of the Gram matrix of the mask. This is reminiscent of similar techniques used to squash low dimensional manifolds in dimensionality reduction techniques (Trace, what is it good for?).
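As a small numeric illustration (my own toy code; the S-matrix construction is the classical Hadamard-based one, not taken from [6]), one can compare the trace of the inverse Gram matrix for measuring one element at a time versus a Hadamard-derived 0/1 multiplexing code. The multiplexed code gives the smaller value, hence the lower MSE:

```python
import numpy as np
from scipy.linalg import hadamard

def mse_metric(W):
    """trace[(W^t W)^{-1}]: the quantity minimized in [6] (up to the sigma^2/N factor)."""
    return np.trace(np.linalg.inv(W.T @ W))

N = 8                                   # Hadamard order (power of 2); code length is N-1
H = hadamard(N)                         # +/-1 Sylvester-Hadamard matrix
S = (1 - H[1:, 1:]) / 2                 # classic S-matrix: drop first row/col, map -1 -> 1, +1 -> 0

W_single = np.eye(N - 1)                # one element at a time (no multiplexing)
print("identity :", mse_metric(W_single))   # = N-1
print("S-matrix :", mse_metric(S))          # ~ 4(N-1)^2 / N^2, i.e. a lower MSE
```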

In all, I cannot shake the sentiment that this coded aperture business sits at the confluence of several fields (sketching, linear programming, optics, radiation transport...) and is bound to grow under a compressed sensing perspective.

On a different note, the isotopic element used for the coded aperture medical imager comes from this reactor.

[1] Roberto Accorsi and Richard C. Lanza, "High-Sensitivity Dynamic Coded Aperture Imaging," Nuclear Science Symposium Conference Record, 2003 IEEE, 19-25 Oct. 2003, Vol. 3, pp. 1833-1837.

[2] E. E. Fenimore and T. M. Cannon, "Coded aperture imaging with uniformly redundant arrays," Applied Optics, Vol. 17, pp. 337-347, 1978.

[3] Roberto Accorsi, Design of near-field coded aperture cameras for high resolution medical and industrial gamma-ray imaging, thesis, MIT, June 2001.

[4] E. E. Fenimore, "Coded aperture imaging: predicted performance of uniformly redundant arrays," Applied Optics, Vol. 17, No. 22, 17 November 1978.

[5] Roberto Accorsi, "Analytic derivation of the longitudinal component of the three-dimensional point-spread function in coded-aperture laminography," Applied Optics, Vol. 44, Issue 28, pp. 5872-5883.

[6] Netanel Ratner, Yoav Y. Schechner, and Felix Goldberg, "Optimal multiplexed sensing: bounds, conditions and a graph theory link," Optics Express, Vol. 15, Issue 25, pp. 17072-17092.
