
Friday, May 14, 2010

CS: Benchmarks and Around the blog in 80 hours


The other day, I had a good discussion with Pedro on the underlying mechanics of the Caritags :) application, and I now have a better technical understanding of what it entails to provide a benchmark-like capability in the cloud, similar to the one set up by the MathWorks for the MATLAB programming contest. I bring this up because of the comments on the last entry, which made it clear that, yes, we ought to do something about benchmarks. From the comments:

Vlad (Tony Vladusich) first wrote the following:
Hi Igor,

I also ran the algorithm against mine for comparison. My code seemed slightly faster but performed worse. So finally we have a CS algorithm that defeats Laplace's equation!

Vladirator10:

results: 26853807
time: 99.8700


solver_BCS_SPL:

results: 24931826
time: 112.6600
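For readers wondering what a Laplace-equation baseline like Vladirator10 actually amounts to, here is a minimal sketch in Python/NumPy (not Vlad's MATLAB entry; the function name and iteration count are my own): the queried pixels are held fixed and every unobserved pixel is repeatedly replaced by the average of its four neighbors, i.e., the discrete Laplace equation is iterated until the holes are filled with the harmonic interpolant.

import numpy as np

def laplace_fill(samples, mask, n_iter=2000):
    # samples: 2-D array with measured values where mask is True (zeros elsewhere)
    # mask:    boolean array, True at queried pixels
    img = samples.astype(float).copy()
    img[~mask] = samples[mask].mean()          # initialize the unknown pixels
    for _ in range(n_iter):
        padded = np.pad(img, 1, mode='edge')   # replicate edges
        neighbors = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                     padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        img[~mask] = neighbors[~mask]          # Jacobi update of the unknowns only
    return img

Each iteration is essentially one neighbor-averaging pass over the image, which is why this kind of interpolation is so hard to beat under the contest's wall-clock limit.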

Eric Trammel then responded with:
A note about the interpolation algorithms:

For a given number of measurements (query limit), and under a strict time limit, interpolation seems like a very good approach. However, when decoding time becomes non-mission critical, and instead the focus is the quality of recovery, I think more sophisticated CS algorithms can provide more accurate results (given enough time, that is).

With more time/iterations, the CS algorithm should be able to tease more information out of those measurements than a simple interpolation. I know that for Sungkwang's code (and Robert's), in order to get under the time limit, the algorithm had to be severely handicapped.
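To make Eric's time-versus-quality point concrete, here is a minimal sketch of a generic iterative-thresholding recovery in Python/NumPy (plain iterative soft-thresholding with a DCT sparsity model, not Sungkwang's BCS-SPL; the names and parameter values are mine). The knob that a time limit forces you to cut is the iteration count.

import numpy as np
from scipy.fft import dctn, idctn

def ist_recover(samples, mask, n_iter=200, lam=0.1):
    # samples: 2-D array with measured pixel values where mask is True
    # mask:    boolean array of queried pixels
    # n_iter:  iteration budget -- the part that gets "handicapped"
    # lam:     soft-threshold level on the DCT coefficients
    x = samples.astype(float).copy()
    for _ in range(n_iter):
        x[mask] = samples[mask]                          # enforce data consistency
        c = dctn(x, norm='ortho')                        # sparsifying transform
        c = np.sign(c) * np.maximum(np.abs(c) - lam, 0)  # soft threshold
        x = idctn(c, norm='ortho')
    x[mask] = samples[mask]
    return x

Capped at a handful of iterations it behaves like a contest entry squeezed under the clock; given a few hundred iterations it can keep pulling detail out of the same measurements, which is exactly the trade-off Eric describes.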

Vlad then wrote:
Igor,

Is there any benchmark database in the CS literature against which to test algorithms? If not, establishing such a database would seem a reasonable step toward standardizing performance estimates. My primary interest was from a biological/computer vision perspective: how do CS methods compare to conventional reconstruction algorithms for under-sampled images (m samples much less than n pixels)? In the human retina, for instance, photoreceptor density varies dramatically with eccentricity and there are 'holes' without any receptors at all. Yet our brains manage to fill in these gaps by means of some form of interpolation scheme. As the visual cortex is known to contain a wavelet-like representation of the visual field, the application of CS methods to this problem might seem reasonable at first glance. Aside from reconstruction fidelity, however, a serious constraint for real-time vision systems is processing speed. Solving an iterative L1 optimization problem might therefore prove prohibitively slow. More generally, the idea that the brain solves optimization problems in real time seems a little unrealistic, at least with our current knowledge of brain function.
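Vlad's retina analogy also suggests an experiment that is easy to set up with the sketches above: sample an image with a density that falls off with eccentricity and see how interpolation and CS recovery cope with the receptor-free 'holes'. A hypothetical mask generator (my own naming and fall-off law, purely for illustration) could look like this:

import numpy as np

def eccentric_mask(shape, center_density=0.8, falloff=3.0, seed=None):
    # Boolean sampling mask whose density decays with distance from the
    # image center, loosely mimicking photoreceptor density in the retina.
    rng = np.random.default_rng(seed)
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = np.hypot(yy - cy, xx - cx) / np.hypot(cy, cx)   # 0 at center, ~1 at corners
    density = center_density * np.exp(-falloff * r)     # exponential fall-off
    return rng.random(shape) < density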

Eric then responded to Vlad's question with:
Vlad,

Right now I'm sitting on the code to auto-generate new test sets from image databases in the same manner as was used for the MathWorks contest. I just haven't figured out what to do with it yet, but I do have some ideas for setting up a system to handle future submissions in much the same manner.

Currently, there are no "standards" for CS on images, unless you count the standard image processing ones (lenna, barbara, cameraman...).
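Until a proper test-set generator like Eric's is released, a bare-bones harness already covers most of what Vlad asked for. The sketch below (my own metric choices, PSNR and wall-clock seconds, not the contest's actual scoring formula) runs each solver on each undersampled standard image and prints a small table:

import time
import numpy as np

def psnr(ref, est):
    # peak signal-to-noise ratio in dB, assuming pixel values in [0, 255]
    mse = np.mean((ref.astype(float) - est.astype(float)) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)

def benchmark(solvers, images, mask):
    # solvers: dict name -> f(samples, mask) returning a reconstruction
    # images:  dict name -> 2-D ground-truth array (Lenna, Barbara, ...)
    # mask:    boolean sampling mask shared by all runs
    for img_name, img in images.items():
        samples = np.where(mask, img, 0.0)
        for name, solve in solvers.items():
            t0 = time.perf_counter()
            rec = solve(samples, mask)
            dt = time.perf_counter() - t0
            print(f"{img_name:12s} {name:12s} PSNR {psnr(img, rec):6.2f} dB  {dt:7.2f} s")

Fed with the two sketches above, e.g. benchmark({'laplace': laplace_fill, 'ist': ist_recover}, {'lenna': lenna}, eccentric_mask(lenna.shape)), where lenna is a grayscale array you load yourself, it reproduces in miniature the kind of comparison Vlad posted.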

Other entries on the subject that might be of interest to this discussion include:

If you have an idea about what should be done, leave a comment at the end of this entry and let's try to build something that'll last without spending too much time on it.


As an echo to a previous entry on this blog, Bob Sturm wrote the following entry: Paper of the Day (Po'D): Probabilistic Matching Pursuit Edition. I like his style.

Other blog entries include:

Gonzalo Vazquez-Vilar wrote about Cognitive radio evolution.

and then there is:

Credit: Courtesy of SDO (NASA) and the [AIA, EVE, and/or AIA] consortium. The Sun today.
