[Update Nov. '08: I have made a list of most Compressed Sensing Hardware here.]
I initially started the listing of hardware implementations in compressed sensing through the presentation of the single pixel camera at Rice, the random lens imager at MIT, and some spectrometer work at Duke; then there was a hyperspectral imager at Yale and an analog imager at Georgia Tech. I have seldom mentioned the obvious hardware implementation behind the whole MRI work spearheaded at Stanford, but that one does not require a hardware modification per se, rather a change in how the hardware is operated. There is also the whole body of work surrounding the design of new A/D hardware named A/I (analog to information) that will be useful for capturing very high frequency signals, either through a nonuniform sampler (NUS) or a random pre-integrator (RPI).
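To make the RPI idea concrete, here is a minimal toy sketch (in Python/NumPy; none of the sizes or names come from the A/I papers) of how random sign modulation followed by windowed integration turns a high-rate, spectrally sparse signal into a small set of compressive measurements:

```python
# Toy sketch of a random pre-integrator (RPI) style acquisition, assuming a
# signal that is sparse in frequency. All sizes here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, m = 512, 64                          # Nyquist-rate samples, low-rate measurements
t = np.arange(n)

# A spectrally sparse test signal: three tones
x = (np.cos(2 * np.pi * 13 * t / n)
     + 0.5 * np.cos(2 * np.pi * 57 * t / n)
     + 0.3 * np.cos(2 * np.pi * 201 * t / n))

# RPI: multiply by a pseudorandom +/-1 chipping sequence, then integrate
# (sum) over m consecutive windows, i.e., sample at rate m instead of n.
chips = rng.choice([-1.0, 1.0], size=n)
y = (chips * x).reshape(m, n // m).sum(axis=1)

# The same measurement written as an explicit m x n matrix H:
H = np.zeros((m, n))
for i in range(m):
    H[i, i * (n // m):(i + 1) * (n // m)] = chips[i * (n // m):(i + 1) * (n // m)]
assert np.allclose(H @ x, y)
```

A sparse solver would then recover the Fourier coefficients of x from y = H x; the point is that m measurements per n Nyquist samples suffice when the spectrum is sparse.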
More work has been performed on some of these concepts by looking at how one can exploit sparsity in both spectral and spatial information. At MIT, I had mentioned the Random Lens Imager (Random Lens Imaging, Rob Fergus, Antonio Torralba, Bill Freeman). In that project, they mentioned that machine learning techniques could provide depth information from single shots. Instead of doing that, they have resorted to putting an additional filter (a patterned occluder) in the lens to provide depth information:
Image and Depth from a Conventional Camera with a Coded Aperture by Anat Levin, Rob Fergus, Fredo Durand, and Bill Freeman
The abstract reads:
A conventional camera captures blurred versions of scene information away from the plane of focus. Camera systems have been proposed that allow for recording all-focus images, or for extracting depth, but to record both simultaneously has required more extensive hardware and reduced spatial resolution. We propose a simple modification to a conventional camera that allows for the simultaneous recovery of both (a) high resolution image information and (b) depth information adequate for semi-automatic extraction of a layered depth representation of the image. Our modification is to insert a patterned occluder within the aperture of the camera lens, creating a coded aperture. We introduce a criterion for depth discriminability which we use to design the preferred aperture pattern. Using a statistical model of images, we can recover both depth information and an all-focus image from single photographs taken with the modified camera. A layered depth map is then extracted, requiring user-drawn strokes to clarify layer assignments in some cases. The resulting sharp image and layered depth map can be combined for various photographic applications, including automatic scene segmentation, post-exposure refocussing, or re-rendering of the scene from an alternate viewpoint.
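The mechanics are easy to prototype: the defocus blur is a scaled copy of the aperture code, so depth can be read off as the kernel scale whose deconvolution best fits a sparse-gradient image prior. Here is a 1-D toy sketch of that hypothesis test; the code pattern, prior exponent, and Wiener deconvolution are my illustrative stand-ins, not the paper's actual pattern or estimator:

```python
# 1-D toy of depth from a coded aperture: deconvolve under several scale
# (= depth) hypotheses, keep the one best fitting a sparse-gradient prior.
import numpy as np

rng = np.random.default_rng(1)
code = np.array([1.0, 0.0, 1.0, 1.0, 0.0, 1.0])   # 1-D stand-in for the 2-D mask
n = 128

def kernel_fft(scale):
    k = np.repeat(code, scale)                     # blur = scaled aperture code
    return np.fft.fft(k / k.sum(), n=n)

# Simulate a piecewise-constant scene blurred at the true (unknown) scale
scene = np.repeat(rng.normal(size=16), 8)
true_scale = 3
obs = np.real(np.fft.ifft(np.fft.fft(scene) * kernel_fft(true_scale)))
obs += 1e-3 * rng.normal(size=n)

def cost(scale):
    """Wiener-deconvolve under this depth hypothesis; score gradient sparsity."""
    K = kernel_fft(scale)
    est = np.real(np.fft.ifft(np.fft.fft(obs) * np.conj(K) / (np.abs(K) ** 2 + 1e-3)))
    return np.sum(np.abs(np.diff(est)) ** 0.8)     # heavy-tailed gradient prior

print("estimated scale:", min([1, 2, 3, 4], key=cost), "(true:", true_scale, ")")
```

The wrong scales leave residual blur or ringing, which spreads image gradients out; the correct scale yields a piecewise-constant estimate with the sparsest gradients.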
Also using a coded aperture, and mentioned in another post, is the work performed at Duke on spectrometers and hyperspectral imagers. The paper by Ashwin Wagadarikar, Renu John, Rebecca Willett, and David Brady, entitled "Single disperser design for compressive, single-snapshot spectral imaging," in Adaptive Coded Aperture Imaging and Non-imaging Sensors, D. Casasent and T. Clark, eds., Proc. SPIE 6714 (2007), is still not available. However, we know that the reconstruction of the spectral cube is performed using the Gradient Projection for Sparse Reconstruction (GPSR) method (please note there is a new version: GPSR 3.0). While that paper is not yet released, Scott (a commenter) pointed out another paper on a similar subject: Michael Gehm, Renu John, David Brady, Rebecca Willett, and Tim Schulz, "Single-shot compressive spectral imaging with a dual-disperser architecture," Opt. Express 15, 14013-14027 (2007):
This paper describes a single-shot spectral imaging approach based on the concept of compressive sensing. The primary features of the system design are two dispersive elements, arranged in opposition and surrounding a binary-valued aperture code. In contrast to thin-film approaches to spectral filtering, this structure results in easily-controllable, spatially-varying, spectral filter functions with narrow features. Measurement of the input scene through these filters is equivalent to projective measurement in the spectral domain, and hence can be treated with the compressive sensing frameworks recently developed by a number of groups. We present a reconstruction framework and demonstrate its application to experimental data.
Here the solution is reconstructed using an expectation-maximization algorithm (e.g., a regularized version of the Richardson-Lucy algorithm). Other related information can be found in references [1], [2], and [3].
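For readers who have not met it, the core Richardson-Lucy / Poisson-EM iteration is just a multiplicative update. Here is a minimal sketch on a toy nonnegative system; the matrix A merely stands in for the instrument model, and the paper's regularization is omitted:

```python
# Minimal sketch of the multiplicative Richardson-Lucy / Poisson-EM update.
# A is a toy nonnegative matrix, not the actual dual-disperser system model.
import numpy as np

rng = np.random.default_rng(2)
m, n = 150, 100
A = rng.random((m, n))                    # nonnegative system matrix (toy)
x_true = rng.random(n)
y = A @ x_true                            # noiseless toy measurements

x = np.ones(n)                            # any positive starting point works
sens = A.T @ np.ones(m)                   # per-component sensitivity, A^T 1
for _ in range(2000):
    ratio = y / (A @ x + 1e-12)           # observed / predicted counts
    x *= (A.T @ ratio) / sens             # multiplicative EM update (keeps x >= 0)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

The multiplicative form is what makes it attractive for optical data: iterates stay nonnegative by construction, which matches the physics of photon counts.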
With Hyper-GeoCam, a random lens imager based on the MIT design, we used random elements that diffract wavelengths to different spatial locations, thereby enabling a random spectral-spatial mixing akin to the Duke design. The unmixing of these components is therefore directly amenable to compressed sensing reconstruction techniques. The main difference between those carefully crafted coded aperture imagers (Duke) and the random lens imager resides in the complexity of the reconstruction technique. The reason the random lens imager is not "taking off" (ahah) has to do with the fact that, in the calibration process, one cannot perform a matrix reconstruction with 10^6 elements, even though the matrix itself is sparse (a sketch of this row-by-row calibration problem follows below). The current techniques just do not allow for it yet (they do not scale well above a few hundred constraints/variables). The crafted coded aperture imager provides an idea of where those terms are, hence facilitating the location of the non-zero components of the calibration matrix. Another aspect of the problem is that, with random materials, reflection may be wavelength sensitive, so the whole calibration process is instrument specific. I am wondering aloud if the technique of Graph Laplacian Regularization for Large-Scale Semidefinite Programming [6] would not be of much help with respect to the calibration of the random lens imager.
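To see why the calibration is itself a compressed sensing problem, note that for each calibration pattern p, detector pixel i records the inner product of p with its (sparse) row h_i of the transfer matrix, so each row is its own sparse-recovery problem. The sketch below recovers one such row with ISTA, a simple proximal-gradient cousin of GPSR that attacks the same l1-regularized least-squares problem; all sizes are illustrative toys, far from the ~10^6 unknowns of the real instrument:

```python
# Toy of the calibration bottleneck: each detector pixel i measures
# y_i = <h_i, p> per calibration pattern p, so every sparse row h_i of the
# transfer matrix is a small sparse-recovery problem of its own.
import numpy as np

rng = np.random.default_rng(3)
n_scene, n_shots, k = 200, 80, 5          # scene pixels, calibration shots, sparsity

P = rng.normal(size=(n_shots, n_scene))   # random calibration patterns (one per row)
h = np.zeros(n_scene)                     # one sparse row of the transfer matrix
h[rng.choice(n_scene, k, replace=False)] = 0.5 + rng.random(k)
y = P @ h                                 # that pixel's responses over all shots

# ISTA for  min_h  0.5 * ||y - P h||^2 + lam * ||h||_1
L = np.linalg.norm(P, 2) ** 2             # Lipschitz constant of the gradient
lam = 0.02
est = np.zeros(n_scene)
for _ in range(10000):
    g = est - (P.T @ (P @ est - y)) / L   # gradient step on the quadratic term
    est = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # soft threshold

print("relative error:", np.linalg.norm(est - h) / np.linalg.norm(h))
```

Doing this for every detector pixel of a megapixel instrument is exactly the scaling wall described above, which is why knowing the support of the matrix in advance (as the crafted coded aperture designs do) is such an advantage.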
References of interest:
[1] David Brady's lectures on computational imaging
[2] Coded aperture spectroscopy and spectral tomography
[3] Optical Imaging
[4] Ashwin Wagadarikar, Renu John, Rebecca Willett, and David Brady, "Single disperser design for coded aperture snapshot spectral imaging," Submitted to a feature issue on Computational Optical Sensing and Imaging (COSI), Applied Optics (2007)
[5] Ashwin Wagadarikar, Michael Gehm, and David Brady, "Performance comparison of aperture codes for multimodal, multiplex spectroscopy," Applied Optics 46, 4932-4942 (2007)
[6] Graph Laplacian Regularization for Large-Scale Semidefinite Programming, K. Q. Weinberger, F. Sha, Q. Zhu and L. K. Saul (2007). In B. Schoelkopf, J. Platt, and T. Hofmann (eds.), Advances in Neural Information Processing Systems 19. MIT Press: Cambridge, MA
I will let you know as soon as the SPIE paper comes out. The Applied Optics paper is still under review. In the meantime, if anyone has any questions, please feel free to e-mail me at the address on my website.
-A
Forgot to say thanks, Igor, for setting up this useful resource. It is very interesting to read about how compressed sensing ideas are being used by groups all over.
-A
Ashwin,
Thanks for the good word.
To remind our readers, Ashwin's site is at:
http://www.disp.duke.edu/%7Eaaw14/
Igor.