
Friday, September 26, 2008

CS: A Small Discussion with Ramesh Raskar and the Camera Culture Lab at MIT.

[updated since first published 2 hours ago]

I had a small but enlightening e-mail exchange the other day with Ramesh Raskar (one of the authors behind several of the fascinating new imaging hardware designs featured here and on this blog) about the connection between his work and other coded aperture work, such as that of Gerry Skinner (CS: A Short Discussion with Gerry Skinner, a Specialist in Coded Aperture Imaging). I think we agree on some points, and he made a clear case as to why he uses lenses as opposed to a lensless set-up:

...Gerry is so right about being careful about taking linear combinations of images. A lot has been learned in coded apertures.
The excitement about the capture side of CS in imaging unfortunately tends to skip over whether there is a realistic gain with respect to reconstruction noise and problems due to diffraction.
In general, coded aperture doesn't work when the point spread function is extremely large and what you are imaging is an area source. For astronomy, the PSF is as large as the sensor, but one is imaging only point sources.
That is exactly the reason we designed coded apertures using lenses. They limit the PSF to a smaller region, allowing us to maintain a reasonable SNR gain even after accounting for reconstruction noise.
Another use was coded aperture for light field capture, where the masks were close to the sensor (heterodyned light fields), again limiting the PSF to a small number of pixels.
...please also refer to Roberto Accorsi and Prof. Berthold K. P. Horn (MIT), who analyzed the effect of coded apertures (lensless) for point-like versus area scenes:

* Roberto Accorsi, Francesca Gasparini and Richard C. Lanza, "Optimal coded aperture patterns for improved SNR in nuclear medicine imaging", Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, Volume 474, Issue 3, December 2001, Pages 273-284.
* Roberto Accorsi, "Analytic derivation of the Contrast to Noise Ratio in Coded Aperture Imaging", Personal Communication.
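
To see why the size of the PSF and the nature of the scene matter so much, here is a small toy computation (entirely my own sketch with made-up numbers, not code from Ramesh or from the papers above): a 1D lensless coded aperture is simulated as a circular convolution with a random binary mask followed by Poisson photon noise, then decoded with a plain regularized deconvolution, with no sparsity prior.

import numpy as np

# Toy 1D lensless coded-aperture camera (my own illustration, not code from
# the references above). A random binary mask multiplexes the whole scene
# onto every detector pixel; photon (Poisson) noise then scales with the
# total flux reaching each pixel.
rng = np.random.default_rng(0)
n = 256
mask = rng.integers(0, 2, n).astype(float)   # ~50% open coded aperture
H = np.fft.fft(mask)

def measure(scene):
    """Circular convolution of the scene with the mask, then Poisson photon noise."""
    flux = np.real(np.fft.ifft(np.fft.fft(scene) * H))
    return rng.poisson(np.clip(flux, 0, None)).astype(float)

def decode(y):
    """Regularized (Wiener-style) deconvolution -- classic coded-aperture decoding."""
    return np.real(np.fft.ifft(np.fft.fft(y) * np.conj(H) / (np.abs(H) ** 2 + 1.0)))

# Astronomy-style scene: a few point sources on a dark sky.
point_scene = np.zeros(n)
point_scene[[40, 150, 200]] = 100.0
# Photography-style scene: a bright area source with modest contrast (+/- 10 on a level of 50).
area_scene = 50.0 + 10.0 * np.sin(np.linspace(0, 6 * np.pi, n))

for name, scene, feature in [("point", point_scene, 100.0), ("area", area_scene, 10.0)]:
    err = decode(measure(scene)) - scene
    print(f"{name} scene: per-pixel reconstruction error std = {err.std():.2f}, "
          f"feature amplitude = {feature:.0f}, SNR ~ {feature / err.std():.1f}")

Qualitatively, the decoded point sources stand out far more cleanly than the gentle contrast of the area scene, which is the contrast-to-noise effect analyzed in the Accorsi references; limiting the PSF with a lens, as Ramesh does, is precisely a way of keeping that multiplexing noise in check.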

On the reason he is not using any of the solvers currently used in Compressive Sensing, Ramesh said the following:
...coded aperture with a single shot doesn't really conform to CS, and in our case we had an equal number of observations and unknowns...
I note that the phrase "when the point spread function is extremely large and what you are imaging is an area source" touches on the issue of the incoherence of the dictionary with the object being imaged. Also, CS is really about mixing several pieces of information with the attendant knowledge that this information is sparse in some fashion. Right now, coded aperture has indeed never used the sparsity of the target as its main engine of discovery, but this is changing, as witnessed by the extraordinary hardware development at Duke.
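
To illustrate that difference, here is another toy sketch (again mine, purely illustrative, with arbitrary sizes): Ramesh's single-shot coded aperture is a square system with as many observations as unknowns, solved by deconvolution as above, whereas a CS-style acquisition deliberately takes fewer incoherent measurements than unknowns and leans on sparsity to invert the resulting underdetermined system, for instance with a simple iterative soft-thresholding (ISTA) solver.

import numpy as np

# Purely illustrative sketch of the compressive-sensing side of the argument:
# with fewer measurements than unknowns, a sparsity-promoting solver recovers
# the signal where plain least squares cannot. Sizes and parameters are my
# own arbitrary choices.
rng = np.random.default_rng(1)
n, m, k = 256, 80, 8                           # unknowns, measurements (m < n), nonzeros
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random, incoherent measurement matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = 5.0 * rng.standard_normal(k)
y = A @ x_true + 0.01 * rng.standard_normal(m)

# Minimum-norm least squares: uses no sparsity, smears energy over all coefficients.
x_ls = np.linalg.pinv(A) @ y

# ISTA: iterative soft-thresholding for the l1-regularized problem.
lam = 0.05
step = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n)
for _ in range(3000):
    x = x + step * (A.T @ (y - A @ x))                        # gradient step on the data term
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)  # soft threshold (l1 prox)

print("least-squares (no sparsity) relative error:",
      np.linalg.norm(x_ls - x_true) / np.linalg.norm(x_true))
print("sparse (ISTA) relative error:",
      np.linalg.norm(x - x_true) / np.linalg.norm(x_true))

The minimum-norm solution spreads energy everywhere, while the sparsity-promoting iteration finds the few nonzeros; that reliance on sparsity as the prior is what distinguishes CS from classical coded aperture decoding.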

Ramesh now teaches at MIT and you can check his Computational Camera and Photography course presentation here. He also heads the Camera Culture Lab, and he is looking to hire graduate students, MEng students, postdocs and UROPs starting Fall 2008 (I'll add this to the CSJobs section).

The Camera Culture Lab's presentation has the following introductory text:

We focus on creating tools to better capture and share visual information. The goal is to create an entirely new class of imaging platforms that have an understanding of the world that far exceeds human ability and produce meaningful abstractions that are well within human comprehensibility.

The group conducts multi-disciplinary research in modern optics, sensors, illumination, actuators, probes and software processing. This work ranges from creating novel feature-revealing computational cameras and new lightweight medical imaging mechanisms, to facilitating positive social impact via the next billion personalized cameras.

With more than a billion people now using networked, mobile cameras, we are seeing a rapid evolution in activities based on visual exchange. The capture and analysis of visual information plays an important role in photography, art, medical imaging, tele-presence, worker safety, scene understanding and robotics. But current computational approaches analyze images from cameras that have only limited abilities. Our goal is to go beyond post-capture software methods and exploit unusual optics, modern sensors, programmable illumination, and bio-inspired processing to decompose sensed values into perceptually critical elements. A significant enhancement of the next billion cameras to support scene analysis, together with mechanisms for superior metadata tagging for effective sharing, will bring about a revolution in visual communication.

Project topics include (i) computational photography via novel feature revealing cameras; (ii) femtosecond analysis of light transport with sophisticated illumination; (iii) Second Skin, a bio-i/o platform for motion capture via wearable imperceptible fabric; and (iv) universal encoder for sharing and consumption of visual media.

Keywords: Computational imaging, Signal processing, Applied optics, Computer graphics and vision, Hardware electronics, Art, Online photo collections, Visual social computing.

Lots of good stuff, I wish I could be part of that adventure.
