If you register with the site, you can upload your own images and see the results either as a QuickTime movie or in VRML, which can be viewed using the Cortona plugin on Windows and this extension on Linux.
To push the boundaries of the algorithm a little, I specifically used imagery that is somewhat "weird", where the background has very sharp discontinuities: planetary exploration.
The landscape as seen from Huygens when it landed on Titan is here:
Not bad for a single view that no human has ever seen (the Huygens probe survived only 30 minutes on Titan, as expected, and did not move).
I am less impressed by the 3-D view rendered from a Mars rover Opportunity image taken a week ago.
(the VRML can be seen here)
Some of the very sharp-contrast images from Cassini looking at Phoebe did not converge.
But the beautiful view from the (amateur) HALO flight 2 is very interesting.
When using the 3D viewer, it looks like you are flying over clouds.
You can try it here.
Indoor scenes also seem to produce good estimates, starting with the original shot:
while the 3D scene produces this:
One can view it with the 3D viewer.
I am only half surprised by some of these results. If one recalls how the technique works for outdoor photography, haze is an important part of the process: it is one of the cues that allows the method to differentiate between depths. So I am only half surprised that it does not converge on Phoebe (no atmosphere) but does somewhat well on Mars (thin atmosphere), well on Titan (with an atmosphere), and well on Earth, both at 28 km altitude looking at clouds and indoors.
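The role of haze as a depth cue can be sketched with the standard atmospheric scattering model, I = J·t + A·(1 − t) with transmission t = exp(−β·depth): the hazier (brighter, more washed-out) a region, the farther it reads. The snippet below is a minimal illustration of that idea using a dark-channel-style transmission estimate; it is my own toy sketch, not the site's actual algorithm, and the function name and parameters are hypothetical.

```python
import numpy as np

def haze_depth(image, beta=1.0, patch=7):
    """Rough relative depth from haze for an HxWx3 float image in [0, 1].

    Uses the scattering model I = J*t + A*(1 - t), t = exp(-beta * depth):
    hazier pixels have lower transmission and so read as farther away.
    """
    h, w, _ = image.shape
    # Dark channel: minimum over color channels, then over a local patch.
    dark = image.min(axis=2)
    pad = patch // 2
    padded = np.pad(dark, pad, mode="edge")
    dc = np.empty_like(dark)
    for i in range(h):
        for j in range(w):
            dc[i, j] = padded[i:i + patch, j:j + patch].min()
    # Crude airlight estimate: the brightest dark-channel value.
    A = max(dc.max(), 1e-6)
    # Transmission estimate, clipped away from zero, then depth
    # recovered up to the unknown scale 1/beta.
    t = np.clip(1.0 - dc / A, 1e-3, 1.0)
    return -np.log(t) / beta
```

On an image whose haze increases from left to right, the recovered depth map increases the same way, which is exactly the cue that an airless body like Phoebe fails to provide.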
Credit: ESA/NASA/JPL/University of Arizona/Alexei Karpenko