Friday, December 23, 2011

The Trial That Wasn't

An anonymous reader wrote in the comments section of yesterday's entry that we should pay attention to the discussion and attendant comments taking place on Nature's site:
"...Three-dimensional technique on trial
Critics take a hard look at ankylography, a proposed method for revealing molecular structures from single pictures.
Eugenie Samuel Reich..."
Can we make this a more dramatic title, please? Nature needs to sell more paper, and no, 10^15 seconds is not a femtosecond (what's 30 orders of magnitude among friends?).

Before we get into listing what could be wrong or right, here is why I am a little uneasy about this. The authors make their solvers available for all to try. Let us imagine that the solver naturally enforces a sparsity assumption that the authors themselves have not recognized. Where does the burden of proof lie? With the original authors, who need to make a connection between their claims and what their solver is really doing, or with the critics? I am not sure I know, but it may well be unraveled by trying the solver out on many different cases. Let us check the substance of the comments. The initial critics make no mention whatsoever of a sparsity issue or constraint: you cannot claim a negative result (this algorithm does not work) without doing some minimal research on the known alternatives. Compressive sensing has been around since 2004, and not even mentioning a sparsity-based assumption is becoming unacceptable. Here is a more insightful comment from none other than Stefano Marchesini, who makes the following point:

The problem in my view is that at short wavelengths the diffracted intensity drops quickly at high angles, while long wavelengths lack the penetration required to perform 3D imaging of most materials, limiting the applications of this technique.
We found that computational focussing was insufficient to provide the true 3D structure of a general object [1]:
"It should be noted that this computational focusing does not constitute 3D imaging, but is simply the propagation of a 2D coherent field. The optical transfer function (OTF) for this imaging system is the Ewald surface, and in this situation with coherent illumination the integrated intensity of the image does not change with defocus (a consequence of Parseval's theorem and the invariance of the diffraction intensities with defocus). That is, it is unlikely that numerical defocusing of a complicated object could give results that could be as easily interpreted as for the pyramid-membrane test object used here. This situation is unlike partially-coherent imaging in a microscope, where out-of-focus objects contribute less power to the image and some optical sectioning can be carried out." [1]
Having said that, it may be possible to apply further constraints, for example if the object is sufficiently sparse [2], to extend the technique.

[1] H. N. Chapman et al., "High-resolution ab initio three-dimensional x-ray diffraction microscopy," J. Opt. Soc. Am. A 23, 1179-1200 (2006).

[2] D. J. Brady, K. Choi, D. L. Marks, R. Horisaki and S. Lim, "Compressive holography," Opt. Express 17, 13040-13049 (2009).
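Stefano's Parseval point is easy to check numerically. Below is a minimal sketch in Python/NumPy, my own toy and not code from any of the papers above: a coherent 2D field is defocused by angular-spectrum propagation, and since the transfer function is a pure phase over the propagating band, the integrated intensity comes out the same at every defocus distance (the grid size, pixel pitch and wavelength are arbitrary demo values).

import numpy as np

n, dx, wavelength = 256, 1e-6, 0.5e-6   # grid size, pixel pitch (m), wavelength (m) -- demo values
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)
field = np.exp(-(X**2 + Y**2) / (2 * (10 * dx)**2)).astype(complex)   # toy coherent object

fx = np.fft.fftfreq(n, dx)
FX, FY = np.meshgrid(fx, fx)
kz2 = (1.0 / wavelength)**2 - FX**2 - FY**2
prop = kz2 > 0                                    # keep only the propagating waves
kz = np.where(prop, np.sqrt(np.abs(kz2)), 0.0)

spectrum = np.fft.fft2(field) * prop              # band-limit to the propagating waves
for z in (0.0, 10e-6, 100e-6):                    # defocus distances (m)
    defocused = np.fft.ifft2(spectrum * np.exp(2j * np.pi * kz * z))
    print(f"z = {z:g} m, integrated intensity = {np.sum(np.abs(defocused)**2):.6f}")

Every defocus distance prints the same total power: no optical sectioning from computational focusing alone, exactly as the quote says.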

I think Stefano makes one of the most salient points in this debate: the difference between investigating science with or without priors will become more pronounced in the future. Two days ago, we featured an imaging technique that reconstructed an image based on some sort of inpainting:


"....
Sparsity-based super-resolution
In mathematical terms, the bandwidth extrapolation problem underlying sub-wavelength imaging corresponds to a non-invertible system of equations which has an infinite number of solutions, all producing the same (blurred) image carried by the propagating spatial frequencies.
That is, after measuring the far field, one can add any information in the evanescent part of the spectrum while still being consistent with the measured image. Of course, only one choice corresponds to the correct sub-wavelength information that was cut off by the diffraction limit. The crucial task is therefore to extract the one correct solution out of the infinite number of possibilities for bandwidth extension. This is where sparsity comes into play. Sparsity presents us with prior information that can be exploited to resolve the ambiguity resulting from our partial measurements, and identify the correct bandwidth extrapolation which will yield the correct recovery of the sub-wavelength image...". 
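To see the mechanics of that last paragraph, here is a toy 1D bandwidth extrapolation in Python/NumPy, my own sketch rather than the code behind the quoted work: we measure only the low spatial frequencies of a spike train, and a plain ISTA iteration for the l1-penalized recovery finds the spikes where the naive band-limited inversion stays blurred (the signal length, spike count, band and penalty are all made-up demo values).

import numpy as np

rng = np.random.default_rng(0)
n, k, keep = 256, 5, 30                  # signal length, number of spikes, low frequencies kept
x_true = np.zeros(n)
x_true[8 * rng.choice(n // 8, size=k, replace=False)] = rng.uniform(1.0, 2.0, k)  # well-separated spikes

mask = np.zeros(n, dtype=bool)
mask[:keep] = mask[-keep:] = True        # the "diffraction-limited" low-frequency band
y = np.fft.fft(x_true)[mask]             # all we are allowed to measure

def A(v):                                # forward operator: band-limited Fourier samples
    return np.fft.fft(v)[mask]

def At(v):                               # its adjoint (unnormalized FFT convention)
    full = np.zeros(n, dtype=complex)
    full[mask] = v
    return n * np.fft.ifft(full)

x, step, lam = np.zeros(n), 1.0 / n, 10.0
for _ in range(1000):                    # ISTA: gradient step, then soft threshold
    x = x + step * At(y - A(x)).real
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)

full = np.zeros(n, dtype=complex)
full[mask] = y
blurred = np.fft.ifft(full).real * n / mask.sum()   # naive band-limited inversion
print("blurred error:", np.linalg.norm(blurred - x_true) / np.linalg.norm(x_true))
print("sparse  error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))

The sparsity prior picks one solution out of the infinitely many that agree with the measured band, and it is the right one; the zero-filled inversion, which is also consistent with the data, stays blurred.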
At some point, people will ask whether this is legitimate, in much the same way we have seen the debate between Frequentists and Bayesians play out. I think we may have a very similar issue here (unrecognized as it may be).

Kevin Raines, one of the original authors, wrote a long note (I have highlighted what I thought was an important point):

I would like to thank the community for providing vigorous feedback on our new imaging technique `ankylography'. We sincerely appreciate the debate and remain confident that our work will withstand further analysis and replication.
I will be presenting comments upon this debate that may not reflect the views of my co-authors as I have not been a member of Prof. Miao's research group for some time and am currently a graduate student in the Department of Applied Physics at Stanford University.
Before offering my comments, I want to clarify that our ankylography paper reports the discovery of a new method for coherent diffraction imaging using single or very few two-dimensional, spherical diffraction patterns to obtain a three-dimensional image. To support this claim, we provided:

1. Numerical evidence. We presented a series of non-trivial reconstructions, including a reasonable noise model, of objects of sufficient complexity to be of scientific interest: a glass nano-particle and a polio virion.
2. Experimental evidence. The experiment we performed was simple, but it verified the experimental feasibility of the principle of ankylography and showed that the technique tolerated noise at experimental levels. Since then, several experimental demonstrations that represent more complex and practical situations of ankylography have been published.

3. Theoretical Analysis. Our theoretical analysis developed a mathematical intuition for the validity of the principle of ankylography. It is important to note that none of the authors are mathematicians and that we did not intend to present a complete mathematical theory of ankylography. We accept that there are scaling limitations to ankylography (as we identified in our paper). However, it is our goal and stated aim to work within the limitations to develop a robust imaging technique.

I will now comment briefly upon the two recent articles that aired some criticisms of ankylography.
1. Fundamental limits of ankylography due to dimensional deficiency by Wei. This paper analyses ankylography in terms of channel capacity. To those interested, I recommend the vast body of theoretical and numerical work done in compressed sensing for a counter-viewpoint on information measures in imaging. In my view, channel capacity is not the best information measure for ankylography, and there certainly are other measures. Moreover, it is not just the scaling – in the limiting sense – that is important to ankylography: the details of the scaling are vital. That is, the most important case in ankylography is that of relatively small samples; we do not intend to reconstruct objects of size, say, 10^4 in each dimension.
2. Non-uniqueness and instability of ankylography by Wang et al. Our detailed numerical protocols were not followed in this work, and therefore it is not surprising that the authors obtained their disappointing results. Since the paper only appears to reflect the authors' interpretation/implementation of their version of ankylography, it does not diminish our work in any way.
I will now comment upon this news report by Eugenie Samuel Reich. Overall, it is an excellent article, and I commend Nature and the author for their good work. I have three suggestions:

1. It is worth noting that several further experimental demonstrations of ankylography have recently been published:
i. C.-C. Chen, H. Jiang, L. Rong, S. Salha, R. Xu, T. G. Mason and J. Miao. Three-dimensional imaging of a phase object from a single sample orientation using an optical laser. Phys. Rev. B 84, 224104 (2011).
ii. M. D. Seaberg, D. E. Adams, E. L. Townsend, D. A. Raymondson, W. F. Schlotter, Y. Liu, C. S. Menoni, L. Rong, C.-C. Chen, J. Miao, H. C. Kapteyn and M. M. Murnane. Ultrahigh 22 nm resolution coherent diffractive imaging using a desktop 13 nm high harmonic source. Opt. Express 19, 22470-22479 (2011).
2. The article states that "Miao has since made clear that the technique does not work on objects larger than 15 x 15 x 15 volume pixels, a size dependent on the resolution of the imaging technology". I wish to clarify that this limitation only applies to the simple demonstration code, which uses a simplified algorithm. We do have more complex code, explained in detail in our paper, that will work on substantially larger objects. Also, we did not "train" our code on any particular type of structure, and implemented only very general constraints even in the more complex algorithms. So I would suggest replacing "the technique" with "the simple demonstration code" in the news article. By way of example, the source code for the ankylographic reconstruction of a simulated sodium silicate glass structure with 25 × 25 × 25 voxels has been posted at http://www.physics.ucla.edu/research/imaging/Ankylography. Even at these modest sizes that we obtain with relatively simple reconstruction codes, we have found that there are scientifically interesting samples that can be explored, which is why I am perplexed by, and disagree with, Marchesini's quote in the news report. Perhaps it was not clear that the limitation of 15^3 was only for the very simple MATLAB demonstration code? In any case, by my analysis, ankylography is quickly advancing upon its promise of becoming a useful imaging tool.

3. The article also states that "Despite the appeal of ankylography, many researchers were perplexed by what they see as a violation of the basic physical principle that you cannot get complete, 3D information from a single flat picture." Ankylography requires a spherical diffraction pattern. In practice, one can obtain a spherical diffraction pattern from a flat detector – if it is large enough – through a mathematical mapping that we describe in detail in our paper. In fact, in our original paper, we indicated that spherical detectors are to be preferred. Additionally, in the case of a flat detector, it would have to be much larger than the sample size. The paragraph continues "In particular, the picture will provide incomplete information about the interior of a subject, and critics argue that many possible 3D structures could generate the same image." In ankylography, the sample under view must be semi-transparent to the illuminating, coherent beam, and we discuss uniqueness in our paper. In my view, only in cases of very large sample sizes will uniqueness potentially become a problem, but even then only pathologically (i.e. with small probability). So, for a non-specialist, a useful thought experiment might be to consider imaging a transparent grain of rice with a large detector the size, say, of a queen mattress at a good distance. Mathematically, the detector would make a measurement that maps onto a sphere in three-dimensional space, despite the fact that the measurement is made in two dimensions.
Finally, in response to Marchesini's comment above: I agree that the photon statistics may present a challenge for ankylography in certain applications. For example, the scattering from a single protein will likely be too weak to use ankylography directly (which is why we don't claim this application in our original paper). In our paper, we made precise calculations about the photon statistics and incorporated the noise due to finite photon number into our simulations, using the standard Poisson distribution. However, we wish to limit the debate here to the general feasibility of ankylography, assuming that a diffraction pattern with a reasonable signal-to-noise ratio is obtainable.
Kevin, nice answer, but please get a webpage on the interwebs.
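While we wait for that webpage, here is the geometry behind Kevin's rice-grain thought experiment as a small Python/NumPy sketch. It is my own illustration of textbook diffraction geometry, not the mapping code from the paper, and the wavelength, detector distance and pixel pitch are made-up values: each flat-detector pixel yields a scattering vector q = (s - s0)/λ, and all of those vectors land exactly on a sphere of radius 1/λ (the Ewald sphere) in 3D reciprocal space.

import numpy as np

wavelength = 1e-9                      # illumination wavelength (m) -- demo value
L = 0.05                               # sample-to-detector distance (m) -- demo value
pix, n = 100e-6, 512                   # pixel pitch (m) and detector side (pixels) -- demo values

c = (np.arange(n) - n / 2) * pix
X, Y = np.meshgrid(c, c)
r = np.sqrt(X**2 + Y**2 + L**2)

# Unit vector from sample to each pixel, minus the incident direction s0 = (0, 0, 1),
# divided by the wavelength: the scattering vectors q = (s - s0)/lambda.
qx = X / (r * wavelength)
qy = Y / (r * wavelength)
qz = (L / r - 1.0) / wavelength

# Every q lies on a sphere of radius 1/lambda passing through the origin (the Ewald
# sphere), so the flat 2D detector actually samples a curved surface in 3D
# reciprocal space -- which is what ankylography proposes to exploit.
radius = np.sqrt(qx**2 + qy**2 + (qz + 1.0 / wavelength)**2)
print(np.allclose(radius * wavelength, 1.0))   # True: all points sit on the sphere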

In all, I really don't see that much of an issue, as the code is available for all to try. There may be a hidden enforcement of sparsity in this solver that nobody has really figured out... yet. At some point, instead of waiting seven years for some mathematical proof, I suggest that any enterprising researcher look into how robust this code is with respect to the sparsity of the sample, as sketched below. If there is a phase transition, it'll be a paper, and an easier way for the math folks to anchor this "empirical result" to some new research direction or some older concepts.
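To make the suggestion concrete, here is the skeleton of the experiment I have in mind, in Python/NumPy. The two placeholder functions stand in for the forward model and for the solver posted at http://www.physics.ucla.edu/research/imaging/Ankylography; they are hypothetical hooks I made up for this sketch, not real function names from that code.

import numpy as np

def simulate_spherical_pattern(obj):
    # Placeholder (hypothetical): replace with a forward model that samples
    # the diffraction intensities of obj on the Ewald sphere, as described
    # in the ankylography paper.
    raise NotImplementedError

def ankylography_reconstruct(pattern, n):
    # Placeholder (hypothetical): replace with a call into the authors'
    # posted reconstruction code.
    raise NotImplementedError

def phase_transition_probe(n=15, densities=np.linspace(0.02, 0.5, 25), trials=10):
    # Sweep the fraction of non-zero voxels in a random test object and
    # record the median relative reconstruction error at each density.
    # A sharp cliff in the resulting curve would be the phase transition.
    rng = np.random.default_rng(0)
    errors = []
    for rho in densities:
        trial_err = []
        for _ in range(trials):
            obj = np.zeros((n, n, n))
            support = rng.random(obj.shape) < rho
            obj[support] = rng.uniform(0.5, 1.0, support.sum())
            pattern = simulate_spherical_pattern(obj)
            rec = ankylography_reconstruct(pattern, n)
            trial_err.append(np.linalg.norm(rec - obj) / np.linalg.norm(obj))
        errors.append(np.median(trial_err))
    return densities, np.array(errors)

Plot the returned error against density: if the curve stays flat and then collapses past some sparsity level, that is exactly the kind of empirical phase transition the compressed sensing folks know how to turn into theory.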

Credit Photo: NASA, Comet Lovejoy, ISS030-E-014350 (21 Dec. 2011) --- Comet Lovejoy is visible near Earth’s horizon in this nighttime image photographed by NASA astronaut Dan Burbank, Expedition 30 commander, onboard the International Space Station on Dec. 21, 2011.

