Thursday, August 19, 2010

A small Q&A with Rick Trebino, the inventor of FROG.

Following up on previous entries on the possibility of FROG being an instance of compressed sensing (still an open question; see part 1 and part 2), I asked Rick Trebino, the inventor of FROG, a few questions, and he kindly responded. I started with:

Igor:
By the way, I stumbled on FROG about five years ago but only recently made the connection with compressed sensing. If you have not had time to look into compressive sensing, it really is a way of acquiring and reconstructing signals with far fewer samples than required by the Shannon-Nyquist theorem. This is possible because the signal is known in advance to be sparse. The main difference between FROG and CS so far seems to be that the measurement in FROG is nonlinear and that there is no stated assumption of sparsity on the underlying signal.
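(An aside for readers new to compressed sensing: the following is a minimal sketch of the linear CS setting in Python/NumPy. The sizes, the random Gaussian sensing matrix, and the greedy orthogonal-matching-pursuit solver are illustrative choices of mine, not anything specific to FROG or to this exchange.)

```python
# Recover a k-sparse signal from m << n random linear measurements
# via orthogonal matching pursuit. All sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                     # ambient dim, measurements, sparsity

x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)  # k-sparse signal

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
y = A @ x                                      # m linear measurements

# OMP: greedily pick the column most correlated with the residual,
# then least-squares fit on the selected support.
support, residual = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.T @ residual))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coef

x_hat = np.zeros(n)
x_hat[support] = coef
print("recovery error:", np.linalg.norm(x_hat - x))  # ~1e-15 on easy cases
```

With that in mind, here is my set of dumb questions: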

1- One of the reasons I mentioned that FROG might be an instance of nonlinear compressed sensing is that you are reconstructing a pulse which, while it may be complex, is alone in a "sea" of nothing. Is that accurate? Are the pulses you are recovering "sparse" in time, i.e., is the pulse really surrounded by no signal?
Rick responded with:
We don't need to assume that the pulse is alone; the sea of zeroes confirms that it is.
I then asked:
2- Have you had any instances where your reconstruction is known to fail or does not yield accurate results? Are those instances related to the laser pulse being "long", i.e., not sparse enough?
Rick:
Yes. If the nonzero region of the trace is cut off, it often fails (but this seems reasonable, as key information is missing). And for very complex pulses in the presence of noise, it can also fail. When it does, disagreement between the measured and retrieved traces tells us this, and we know not to trust the result. This doesn't seem to be related to sparseness, however. There is one interesting case perhaps related to sparseness, though, and that is the "ambiguity" of a pulse with well-separated frequencies, whose relative phases cannot be measured at all, even in the absence of noise; such pulses have the same FROG traces, independent of the relative phases. Interestingly, alternative methods developed to measure such pulses also have this ambiguity.
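(A quick numerical aside, my own toy example rather than anything from Rick: for an SHG FROG trace, I(w, tau) = |integral of E(t) E(t - tau) exp(-i w t) dt|^2, a pulse with two well-separated carrier frequencies produces the same trace whatever their relative phase, because the phase enters each well-separated spectral band only as a global factor. The NumPy check below builds the traces and compares them; all parameters are illustrative.)

```python
# Two-frequency pulse: the SHG FROG trace is blind to the relative phase.
import numpy as np

t = np.linspace(-50, 50, 1024)
g = np.exp(-(t / 10)**2)                 # common Gaussian envelope
w1, w2 = 2.0, 6.0                        # well-separated carrier frequencies

def shg_frog_trace(phi, delays=range(-100, 101, 4)):
    """|FFT_t[E(t) E(t - tau)]|^2 on a grid of delays (in samples)."""
    E = g * (np.exp(1j * w1 * t) + np.exp(1j * (w2 * t + phi)))
    return np.array([np.abs(np.fft.fft(E * np.roll(E, d)))**2 for d in delays])

T0 = shg_frog_trace(0.0)
T1 = shg_frog_trace(np.pi / 3)           # change the relative phase
print(np.max(np.abs(T0 - T1)) / T0.max())  # ~1e-16: the traces coincide
```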
Igor:
3- Once you get a FROG spectrogram, do you use all the data from the spectrogram to reconstruct the full signal? Have you ever toyed with the possibility of reconstructing the signal from a (random) subset of the elements of the spectrogram?
Rick:
Yes, we always use all the data. A clever fellow named Craig Siders did develop a FROG algorithm that began by using only every eighth point. Then every fourth, etc., until the last few iterations, when he used all the data. This sped up the code considerably. But today's laptops are so fast that we haven't needed it. On the other hand, it might be worth reconsidering, as faster is always better! We have also used random subsets of datapoints to do a bootstrap computation of error bars, which worked very well.
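(The bootstrap Rick mentions is easy to sketch. In the toy below, an ordinary least-squares Gaussian fit stands in for the actual, much heavier FROG retrieval; that substitution is my own simplification. The data points are resampled with replacement, the "retrieval" is re-run on each resample, and the spread of the results gives the error bars.)

```python
# Bootstrap error bars: resample the data, redo the fit, read off the spread.
import numpy as np

rng = np.random.default_rng(1)
tau = np.linspace(-5, 5, 200)
data = np.exp(-tau**2 / 2) + 0.02 * rng.standard_normal(tau.size)  # noisy "trace"

def retrieve(x, y):
    """Stand-in retrieval: quadratic fit to log(y) gives the Gaussian width."""
    mask = y > 0.05                           # skip noise-dominated points
    a, b, c = np.polyfit(x[mask], np.log(y[mask]), 2)
    return np.sqrt(-1 / (2 * a))              # sigma from the quadratic term

widths = []
for _ in range(500):                          # 500 bootstrap resamples
    idx = rng.integers(0, tau.size, tau.size) # sample with replacement
    widths.append(retrieve(tau[idx], data[idx]))

print(f"width = {np.mean(widths):.3f} +/- {np.std(widths):.3f}")
```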
Igor:

4- If I understand correctly, GRENOUILLE is a hardware implementation of FROG. Is somebody selling that hardware? If so, who?
Rick:
Yes. I formed a company a few years ago (Swamp Optics, www.swampoptics.com), which has by now sold GRENOUILLEs to most ultrafast labs, and, as a result, has forced a whole generation of ultrafast scientists to use some utterly frivolous acronyms in their daily work and publications.
Rick also rightly pointed out the following:
But, in the meantime, I should mention that, at the moment and from what I understand of it (which isn't much), it seems to me that FROG may actually be the opposite of compressed sensing! The FROG trace actually contains much more data than is necessary to determine the pulse (an NxN array to determine N intensity points and N phase points). This is one of its strengths: it's easy to take all this data in a camera trace, and, more importantly, optical devices can often be difficult to set up and align, so systematic error can occur, and the redundancy in the trace serves as a check that no such error is occurring. The sea of zeroes is even more redundancy.
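(To put a number on the redundancy Rick describes: an N x N trace holds N^2 measured values, while the pulse has only 2N real unknowns, N intensity points plus N phase points, i.e., an oversampling factor of N/2. A one-liner, with N = 128 as an illustrative size:)

```python
# N x N trace values vs. 2N real unknowns (N intensities + N phases).
N = 128
print(N * N / (2 * N))   # 64.0 -> the trace oversamples the pulse ~64x
```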
To which I responded:
I realize that you are producing more data; however, your answer to question 3 seems to indicate that you can use less of it, and that you can use a random subset, which would have some of the hallmarks of CS.
Thanks, Rick.

Credit: NASA / JHUAPL / CIW. Earth and the Moon from much closer to the Sun. See those two bright dots? That's the double planet of Earth and the Moon. MESSENGER captured this view of its birthplace on May 6, 2010, during a search for vulcanoids, asteroids that are theorized to reside between Mercury and the Sun. No vulcanoid has yet been discovered, but searches continue.
