
Friday, August 22, 2008

CS: Clarification on Compressive Coded Aperture Superresolution, Lacoste IR Coded Mask, 6D displays, Programmable Aperture Photography, Liquid Lens

When discussing with Gerry Skinner (CS: A Short Discussion with Gerry Skinner, a Specialist in Coded Aperture Imaging.), I pointed out some of the current compressive sensing work being performed with coded apertures by Roummel Marcia and Rebecca Willett (Compressive Coded Aperture Superresolution Image Reconstruction, the slides are here).

That paper was very enlightening on how one goes from a measurement matrix with a certain property (the RIP in this case) down to a physical implementation. Since it is, in my view, really important to fully understand this paper because it clearly makes the connection between theory and actual implementation, I sent some questions to both authors and asked them to clarify the issues I could not resolve while reading the paper on the beaches of the South of France (i.e. with no other reading material than that paper). Roummel and Rebecca kindly responded in a PDF document as there were many LaTeX expressions. With their permission, I am making their response available here.
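To make that connection a bit more concrete, here is a minimal numerical sketch of a coded-aperture forward model: the scene is convolved with the mask pattern and then integrated onto a coarser detector grid, so there are far fewer measurements than unknowns. The pseudorandom 0/1 mask and the block-binning detector below are my own illustrative assumptions, not the specific construction analyzed by Roummel and Rebecca.

```python
import numpy as np
from numpy.fft import fft2, ifft2

rng = np.random.default_rng(0)

n = 64   # high-resolution scene is n x n
d = 4    # downsampling factor; the detector is (n // d) x (n // d)

# Illustrative pseudorandom 0/1 mask (an assumption for this sketch,
# not the specific mask construction analyzed in the paper).
mask = (rng.random((n, n)) < 0.5).astype(float)

def forward(x, mask, d):
    """Coded-aperture forward model: circular convolution of the scene
    with the mask, then integration over d x d blocks of detector pixels."""
    blurred = np.real(ifft2(fft2(x) * fft2(mask)))
    h, w = blurred.shape
    return blurred.reshape(h // d, d, w // d, d).sum(axis=(1, 3))

# A sparse test scene: a handful of point sources.
x = np.zeros((n, n))
x[rng.integers(0, n, 10), rng.integers(0, n, 10)] = 1.0

y = forward(x, mask, d)
print(y.shape)   # (16, 16): 256 measurements for 4096 unknowns
```

The interesting part, and the point of their paper, is choosing the mask so that the resulting block-structured measurement matrix behaves well enough (in the RIP sense) for sparse reconstruction.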

While I have talked mostly about coded apertures for astronomy or nuclear medicine, mainly at X-ray energies, there is an ongoing DARPA program looking at coded apertures in the IR range (MWIR), mostly for surveillance purposes. Some of the papers related to this technology can be found in the SPIE library: Part I (September 2007) and Part II (August 2008). One should note here that the wavelength of interest (infrared) interferes with the coded mask because the radiation wavelength and the aperture dimensions are of the same order of magnitude, so diffraction effects can no longer be neglected. I am still not quite sure why there is a need for active coded mask modulation though. I think I'll come back to this later.

In the meantime, let us stay in the visible range.




Last week's SIGGRAPH saw many papers of interest, but two grabbed my attention. The first one was Programmable Aperture Photography: Multiplexed Light Field Acquisition by Chia-Kai Liang, Tai-Hsu Lin, Bing-Yi Wong, Chi Liu, and Homer Chen. The abstract of their paper reads:

In this paper, we present a system including a novel component called programmable aperture and two associated post-processing algorithms for high-quality light field acquisition. The shape of the programmable aperture can be adjusted and used to capture light field at full sensor resolution through multiple exposures without any additional optics and without moving the camera. High acquisition efficiency is achieved by employing an optimal multiplexing scheme, and quality data is obtained by using the two postprocessing algorithms designed for self calibration of photometric distortion and for multi-view depth estimation. View-dependent depth maps thus generated help boost the angular resolution of light field. Various post-exposure photographic effects are given to demonstrate the effectiveness of the system and the quality of the captured light field.







As expected, compressive sensing is mentioned as a way to improve their sampling scheme at the very end of their paper:

Multiplexing a light field is equivalent to transforming the light field to another representation by basis projection. While our goal is to obtain a reconstruction with minimal error from a fixed number of projected images (Mu(x) in Equation 4), an interesting direction of future research is to reduce the number of images required for reconstruction. The compressive sensing theory states that if a signal of dimension n has a sparse representation, we can use fewer than n projected measurements to recover the full signal [Donoho 2006]. Finding a proper set of bases to perform compressive sensing is worth pursuing in the future.
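For the curious, here is a toy sketch of the multiplexing idea: each exposure records a weighted sum of sub-aperture views, and the light field is recovered by inverting that linear system. The random 0/1 multiplexing matrix below is my own placeholder, not the optimal scheme derived in the paper; going below one exposure per view is where the compressive sensing argument quoted above would come in.

```python
import numpy as np

rng = np.random.default_rng(1)

n_views = 16         # number of sub-aperture (angular) samples
n_pixels = 32 * 32   # sensor resolution, flattened

# Unknown light field: one image per sub-aperture view.
L = rng.random((n_views, n_pixels))

# Illustrative 0/1 multiplexing matrix: each exposure opens roughly half of
# the programmable aperture blocks (a random choice for this sketch; the
# paper derives an optimal multiplexing pattern instead).
W = (rng.random((n_views, n_views)) < 0.5).astype(float)

# Each captured photo is a weighted sum of the sub-aperture views, plus noise.
Y = W @ L + 0.01 * rng.standard_normal((n_views, n_pixels))

# Demultiplexing: recover the light field by solving the linear system.
L_hat = np.linalg.lstsq(W, Y, rcond=None)[0]
print(np.abs(L_hat - L).max())   # small if W is well conditioned
```

With the paper's optimal multiplexing matrix, the noise amplification of this inversion is minimized; the compressive sensing extension would replace the square W with one having fewer rows plus a sparsity prior on the light field.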



The second paper was that of Martin Fuchs, Ramesh Raskar, Hans-Peter Seidel, and Hendrik P. A. Lensch entitled Towards Passive 6D Reflectance Field Displays. The abstract reads:

Traditional flat screen displays (bottom left) present 2D images. 3D and 4D displays have been proposed making use of lenslet arrays to shape a fixed outgoing light field for horizontal or bidirectional parallax (top left). In this article, we present different designs of multi-dimensional displays which passively react to the light of the environment behind. The prototypes physically implement a reflectance field and generate different light fields depending on the incident illumination, for example light falling through a window.

We discretize the incident light field using an optical system, and modulate it with a 2D pattern, creating a flat display which is view and illumination-dependent. It is free from electronic components. For distant light and a fixed observer position, we demonstrate a passive optical configuration which directly renders a 4D reflectance field in the real-world illumination behind it. A demonstration for this is shown on the right: outside illumination falls through an office window on a display prototype in front. As the sunlight moves with the changing time of day, the shadows, highlights and caustics naturally follow the movement.

We further propose an optical setup that allows for projecting out different angular distributions depending on the incident light direction. Combining multiple of these devices we build a display that renders a 6D experience, where the incident 2D illumination influences the outgoing light field, both in the spatial and in the angular domain. Possible applications of this technology are time-dependent displays driven by sunlight, object virtualization and programmable light benders / ray blockers without moving parts.
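At its core, the prototype implements a linear map from the incident illumination to the outgoing light field, i.e. a discretized reflectance field. Here is a tiny sketch of that relation, with random placeholder values standing in for whatever the optics and the 2D modulation pattern actually implement:

```python
import numpy as np

rng = np.random.default_rng(2)

n_in = 8          # discretized incident light directions
n_pix = 16 * 16   # spatial display pixels
n_out = 4         # outgoing view directions per pixel

# Discrete reflectance field: for every (pixel, outgoing direction) pair,
# a weight per incident direction. Random values are placeholders here.
R = rng.random((n_pix * n_out, n_in))

# Incident illumination (e.g. light falling through a window), discretized
# over the incident directions.
illum = rng.random(n_in)

# The passive display simply applies this linear map: the outgoing light
# field follows the illumination with no electronics involved.
outgoing = (R @ illum).reshape(n_pix, n_out)
print(outgoing.shape)   # (256, 4): an outgoing light field per illumination
```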



Hmmm, a passive display whose output depends on the incident light: what about using the same kind of display to multiplex a scene of interest for further recording on an FPA (focal plane array)? Other sampling approaches could eventually include liquid lenses for webcams from Varioptic.

In a different area, when I saw him at Texas A&M, I knew Stan Osher was well known for the ROF method, but deep down I thought there was more to it; here is the beginning of an explanation.

4 comments:

  1. About your comment on aperture coding and the possible interference of a mask in the infrared domain, I'm wondering if there exists a kind of general theory of "what kinds of masks can be used for a given range of wavelengths?".

    I mean, is it possible to imagine coded apertures for RADAR or radio astronomy too, as for other optical devices? Could we code seismic waves? Or could we physically code the sound recorded by a microphone? And what kinds of masks are ruled out because of interference?

    A more general question would be: what class of linear perturbations of the "aperture" (multiplicative, convolutive, ...) is possible for a given wavelength?

    A lot of questions, but perhaps a huge number of potential applications behind them.

    Laurent

  2. Laurent,

    This is a very good question; I need to address it at some point. I also need to answer your third question. In the meantime, part of the answer to the first question involves the physics of ultracold neutrons. If you are at EUSIPCO, you may ask Laurent Duval about it :-)

    Igor.

  3. OK. Thanks. Yes, I'm at EUSIPCO. Laurent is here too. I'll ask him.

    In my previous comment, the wavelength is of course not the only important parameter in this possible study. The physics of wave propagation is in fact the most important element: sound and seismic waves are pressure waves, not electromagnetic waves as in optics, radio, RADAR, ... so interference with a mask will be completely different.

    Best,
    Laurent

  4. OK. After some discussion with Laurent about ultracold neutrons, I have understood the private joke ;-)

    The ultracold neutron remark seems to be the equivalent of the chairman's computational complexity question at a signal processing conference ;-)

    Best,
    Laurent
