Friday, February 10, 2012

Looking through walls and around corners with incoherent light: Wide-field real-time imaging through scattering media [updated]

Ori Katz, one of the authors of the Compressive Ghost Imaging paper, just sent me the following:

Dear Igor
.... We have just posted a work on the arXiv which I thought (hoped;-) might interest you: 
In this work we are showing that one can use scattered incoherent light for imaging objects hidden behind/reflected-from a scattering medium (e.g. a 'wall'). We do this by using the technique of high-resolution wavefront-shaping with SLMs. In short, we 'learn' the scattering properties of the medium and then apply the inverse phase-pattern to make the 'wall' either transparent or mirror-like. All the best,

Imaging with optical resolution through highly scattering media is a long sought-after goal with important applications in deep tissue imaging. Although being the focus of numerous works, this goal was considered impractical until recently. Adaptive-optics techniques which are effective in correcting weak wavefront aberrations, were deemed inadequate for turbid samples, where complex speckle patterns arise and light is scattered to a large number of modes that greatly exceeds the number of degrees of control. This conception changed after the demonstration of focusing coherent light through turbid media by wavefront-shaping, using a spatial-light-modulator (SLM). Here we show that wavefront-shaping enables widefield real-time imaging through scattering media with both coherent or incoherent illumination, in transmission and reflection. In contrast to the recently introduced schemes for imaging through turbid media, our technique does not require coherent sources, interferometric detection, raster scanning, or off-line image reconstruction. Our results bring wavefront-shaping closer to practical applications, and realize the vision of looking 'through walls' and 'around corners'.

This is outstanding, and here is why. I initially asked Sylvain Gigan about it to make sure I did not misunderstand too much. As you probably recall, Sylvain is one of the authors of Measuring the Transmission Matrix in Optics: An Approach to the Study and Control of Light Propagation in Disordered Media. He told me (i.e., all inaccuracies are mine) that the paper is beautiful in part because it uses incoherent light. I then specifically asked him how, in this paper, the authors seemed to be finding the measurement matrix faster than, say, in his own experiment (see above). Sylvain pointed out that they use the memory effect: after learning, say, the first row of the measurement matrix, the memory effect allows them to automatically construct the neighboring rows. In short, this is not unlike the coded aperture systems mentioned yesterday in Learning a Circulant Matrix by Yangyang Xu, Wotao Yin, Susan Chen and Stanley Osher. Indeed, coded aperture systems are an instance of Toeplitz measurement matrices that depend parametrically on a low-dimensional set of variables. This approach seems to work only for very thin "walls", which makes it ideal for looking around corners, since reflection off a wall is really a little like a thin-wall assumption.
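To make the circulant analogy concrete, here is a minimal sketch (my own illustration, not code from either paper) of the key structural point: a circulant matrix is fully determined by a single row, with every other row obtained by a cyclic shift, much like the memory effect lets one learned correction generate the responses at neighboring angles.

```python
import numpy as np

def circulant_from_row(first_row):
    """Build a circulant matrix: row k is the first row cyclically
    shifted right by k positions. Only n numbers parameterize the
    full n-by-n measurement matrix."""
    n = len(first_row)
    return np.array([np.roll(first_row, k) for k in range(n)])

row = np.array([1.0, 0.0, 2.0, 3.0])   # the one "learned" row
C = circulant_from_row(row)
# C[1] is the row shifted by one position: [3., 1., 0., 2.]
```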

I then asked Ori another set of questions:

What sort of optimization do you go through to get the SLM to produce a point on the CCD? Is it some sort of convex optimization, or more like a Monte Carlo or greedy algorithm? How long does this process take?

We are running a genetic algorithm optimization which looks for the optimal phase pattern that will maximize the intensity of the point source on a selected camera pixel. In each optimization step (generation) the algorithm tests 30 different patterns and keeps the best ones for generating the phase patterns for the next step. We typically ran a few hundred such generations before stopping.
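For readers curious what such a loop looks like, here is a toy sketch of the idea, assuming a simulated "scattering" phase in place of the real camera feedback; all names, population sizes beyond the 30 patterns mentioned above, and mutation parameters are my own illustrative choices, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the experiment: the intensity at the target camera
# pixel is maximal when the SLM phases cancel a fixed, unknown
# scattering phase introduced by the medium.
N_MODES = 32                                  # SLM segments (illustrative)
hidden = rng.uniform(0, 2 * np.pi, N_MODES)   # the medium's phases

def pixel_intensity(phases):
    # Coherent sum of all modes at the focus pixel, normalized to [0, 1].
    return np.abs(np.sum(np.exp(1j * (phases - hidden)))) ** 2 / N_MODES ** 2

def next_generation(population, n_keep=10, sigma=0.3):
    # Rank candidate patterns by measured intensity, keep the best,
    # and breed the rest by uniform crossover plus a small phase mutation.
    ranked = sorted(population, key=pixel_intensity, reverse=True)
    parents = ranked[:n_keep]
    children = []
    while len(children) < len(population) - n_keep:
        a, b = rng.choice(n_keep, size=2, replace=False)
        mask = rng.random(N_MODES) < 0.5
        child = np.where(mask, parents[a], parents[b])
        child = (child + sigma * rng.normal(size=N_MODES)) % (2 * np.pi)
        children.append(child)
    return parents + children

# 30 candidate phase patterns per generation, as in the answer above.
population = [rng.uniform(0, 2 * np.pi, N_MODES) for _ in range(30)]
for _ in range(300):                          # "a few hundred" generations
    population = next_generation(population)

best = max(population, key=pixel_intensity)
# The best pattern approaches the inverse of the hidden phases, pushing
# the focus intensity well above the ~1/N_MODES speckle-average level.
```

In the real experiment the fitness function is, of course, the measured intensity on the selected camera pixel rather than a simulated phase sum.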

It looks like the incoherence of the source is really helping you in that process, as it enables everything around the point to be imaged the same way. Do you have any idea of how far this imaging can go, i.e., how far away from the point do you get to image the scene (I am not sure I am saying this right)?

The incoherent illumination indeed helps in generating a smooth image (it doesn't really matter in the optimization procedure). The limit for the field of view around the pre-optimized point source position is dictated by what people call the 'optical memory effect' - the maximum angle for which the single phase-pattern correction still holds. When looking through optically thick, multiply scattering media (e.g. a wall) this angle is given by theta ~ lambda/L, where L is the thickness of the medium and lambda is the wavelength. When looking at light reflected from a random medium (our Fig. 3), the angular field of view will be theta ~ lambda/L_s, where L_s is the scattering mean free path for light in the medium (~ the penetration depth of the light into the wall). In our experiment, we had an angular field of view of theta ~ 5 mrad (equivalent to an image of size ~2 mm at a distance of 40 cm from the wall) in the reflection geometry, and something around 15 mrad in the transmission experiments. For an object outside the 'memory effect' angle, the correction intensity falls off exponentially.
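As a quick sanity check on the geometry quoted above (the wavelength and mean free path are not given, so this only verifies the small-angle relation image size ≈ angle × distance):

```python
# Numbers from Ori's answer: ~5 mrad field of view in reflection,
# object at 40 cm from the wall.
theta = 5e-3            # angular field of view, radians
distance = 0.40         # distance from the wall, meters
image_size = theta * distance   # ~0.002 m, i.e. the ~2 mm quoted above
```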

Thank you, Ori, for the beautiful paper and attendant explanations, and Sylvain for the initial insight.

  Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.
