Now in a different area. In 1908, Gabriel Lippmann received the Nobel Prize, and on March 20th of that year he published an article on obtaining 3D information from one camera ("Épreuves réversibles donnant la sensation du relief", which can be translated as "Reversible Prints Giving the Sensation of Relief/Depth").

In yet another area, compressive sampling is reaching biosensors as well; an abstract applying similar sparsity ideas to DNA microarrays reads:

Microarrays (DNA, protein, etc.) are massively parallel affinity-based biosensors capable of detecting and quantifying a large number of different genomic particles simultaneously. Among them, DNA microarrays comprising tens of thousands of probe spots are currently being employed to test a multitude of targets in a single experiment. In conventional microarrays, each spot contains a large number of copies of a single probe designed to capture a single target, and hence collects only a single data point. This is a wasteful use of the sensing resources in comparative DNA microarray experiments, where a test sample is measured relative to a reference sample. Typically, only a fraction of the total number of genes represented by the two samples is differentially expressed, and thus a vast number of probe spots may not provide any useful information. To this end we propose an alternative design, the so-called compressed microarrays, wherein each spot contains copies of several different probes and the total number of spots is potentially much smaller than the number of targets being tested. Fewer spots directly translate into significantly lower costs due to cheaper array manufacturing, simpler image acquisition and processing, and the smaller amount of genomic material needed for experiments. To recover signals from compressed microarray measurements, we leverage ideas from compressive sampling. For sparse measurement matrices, we propose an algorithm that has significantly lower computational complexity than the widely used linear-programming-based methods, and can also recover signals with less sparsity.
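To make the compressed microarray measurement model concrete, here is a minimal numpy sketch: each spot pools copies of several probes (a sparse 0/1 measurement matrix), and a greedy decoder recovers the sparse differential-expression vector. The sizes, the pooling density, and the use of orthogonal matching pursuit are my assumptions for illustration, not the authors' actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n_targets, n_spots, k = 128, 48, 4   # hypothetical sizes, k-sparse signal

# Sparse differential-expression vector: most targets unchanged.
x = np.zeros(n_targets)
support = rng.choice(n_targets, size=k, replace=False)
x[support] = rng.choice([-1.0, 1.0], size=k) * rng.uniform(1.0, 2.0, size=k)

# Each spot pools several different probes:
# a sparse 0/1 measurement matrix with ~10% density.
A = (rng.random((n_spots, n_targets)) < 0.1).astype(float)
y = A @ x                            # one intensity reading per spot

# Greedy sparse recovery (orthogonal matching pursuit), a simple
# stand-in for the low-complexity decoder the abstract mentions.
def omp(A, y, k):
    residual, idx = y.copy(), []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        idx.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        residual = y - A[:, idx] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[idx] = coef
    return x_hat

x_hat = omp(A, y, k)
print("recovery error:", np.linalg.norm(x_hat - x))
```

Note that far fewer spots (48) than targets (128) suffice only because the signal is sparse, which is exactly the premise of the abstract.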
One hundred years later, we are still devising new ways to get cameras to provide additional information. Amit Agrawal, for instance, summarizes the outstanding work he has done with colleagues (the presentation is 127 MB but worth it) in Building A Hand-held Light Field Camera.
The video shows how one can put a filter between the lens and the CMOS/CCD imager and obtain additional information in the Fourier domain. But as some of you reading this blog have guessed, a more fundamental way of recording all sorts of information might be through the use of compressed sensing, i.e., making use of the fact that the world is sparse. At ICASSP, a hundred years and ten days after Lippmann's paper was read at the French Academy of Sciences, the Compressive Sensing Georgia Tech Transform Imager was unveiled (I mentioned it here). More information can be found here:
This approach enables programmable image signal processing in a very low-power architecture; one could envision an imager plus the computation of a block 2D matrix transform, such as a 2D DCT, in a megapixel imager consuming between 100 microwatts and 1 mW.
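The block 2D matrix transform mentioned above amounts to two matrix multiplications per block (on the chip this arithmetic is done in analog). A sketch of the math, where the 8×8 block size and the orthonormal DCT-II normalization are my assumptions:

```python
import numpy as np

N = 8                                   # JPEG-style block size (assumed)
n = np.arange(N)

# Orthonormal DCT-II matrix: entry (k, m) = c_k * cos(pi*(2m+1)*k / 2N).
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
C[0] /= np.sqrt(2.0)                    # DC row gets the 1/sqrt(2) factor

X = np.random.default_rng(1).random((N, N))   # one image block

Y = C @ X @ C.T                         # separable 2-D DCT: rows, then columns
X_back = C.T @ Y @ C                    # exact inverse, since C is orthonormal

print(np.allclose(X, X_back))           # → True
```

Because the 2-D transform factors into a row pass and a column pass, the imager only needs a programmable vector-matrix multiply rather than a full N²×N² operator.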
In current megapixel imagers, one consumes over a watt of power for the imager and the analog-to-digital converter. In a similar structure and resulting chip area, we can have single-chip JPEG image compression or single-chip image enhancement computation consuming less power than a typical imager and analog-to-digital converter. One can easily compute additional transforms with minimal computational complexity, such as optical flow or depth-from-stereo computations. These approaches will be essential in low-power portable applications, such as hand-held teleconferencing, low-power tracking devices, and preprocessing for retinal prosthetics. We see this imager technology as an excellent candidate for Cooperative Analog-Digital Signal Processing (CADSP) approaches to image and other two-dimensional signal processing. We discuss elsewhere the comparison of this imager to CMOS imagers and focal-plane imagers [1, 2].
It is in fact a CMOS architecture that allows one to obtain an image directly through any chosen transform. How does it do Compressive Sensing? The ICASSP paper answers that in "Compressive Sensing on a CMOS Separable Transform Image Sensor". Its authors are Ryan Robucci, Leung Kin Chiu, Jordan Gray, Justin Romberg, Paul Hasler, and David V. Anderson. The abstract reads:
This paper discusses the application of a computational image sensor, capable of performing separable 2-D transforms on images in the analog domain, to compressive sensing. Instead of sensing and transmitting raw pixel data, this image sensor first projects the image onto a separable 2-D basis set. The inner products in these projections are computed in the analog domain using a computational focal plane and a computational analog vector-matrix multiplier. Since this operation is performed in the analog domain, components such as the analog-to-digital converters are taxed less when only a subset of the correlations is performed. Compressed sensing theory prescribes the use of a pseudo-random, incomplete basis set, allowing for sampling at less than the Nyquist rate. This can reduce power consumption or increase frame rate.
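A digital caricature of that pipeline: project the image onto a separable pseudo-random basis and digitize only a subset of the resulting inner products. Here random ±1 matrices stand in for the noiselet basis the paper actually uses, and the image size and sampling ratio are made up:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 32                              # hypothetical image side
X = rng.random((N, N))              # the scene

# Pseudo-random +/-1 projections standing in for the noiselet basis
# (the matrices, image size, and sampling ratio are my assumptions).
P = rng.choice([-1.0, 1.0], size=(N, N)) / np.sqrt(N)
Q = rng.choice([-1.0, 1.0], size=(N, N)) / np.sqrt(N)

coeffs = P @ X @ Q.T                # all N*N separable inner products

# Digitize only a quarter of the correlations: fewer ADC conversions,
# which is where the power saving comes from.
m = N * N // 4
keep = rng.choice(N * N, size=m, replace=False)
y = coeffs.ravel()[keep]

print(y.size, "measurements instead of", N * N)  # 256 instead of 1024
```

In the chip the projections happen in analog before the ADC, so the unread correlations never cost a conversion; a compressed sensing decoder would then reconstruct the scene from `y`.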
The paper shows in particular the acquisition of scenes using the noiselet transform. An online paper entitled Low-Power Analog Image Processing using Transform Imagers by Paul Hasler, Abhishek Bandyopadhyay, and David V. Anderson gives a similar description of the technology. In effect, the technology is pretty much transform-agnostic, and the use of a different kind of transform (a noiselet transform as opposed to a discrete cosine transform) is really how the imager becomes compressed sensing hardware. Abhishek Bandyopadhyay's Ph.D. thesis, "Matrix Transform Imager Architecture for On-Chip Low-Power Image Processing", can be found here.