The other day, I had a good discussion with Pedro on the underlying mechanics of the Caritags application, and I now have a better technical understanding of what it entails to provide a benchmark-like capability in the cloud, like the one the MathWorks set up for the MATLAB programming contest. I mention this because of the comments on the last entry and because, yes, we ought to do something about benchmarks. From the comments:
Vlad (Tony Vladusich) first wrote the following:
Hi Igor,
I also ran the algorithm against mine for comparison. My code seemed slightly faster but performed worse. So finally we have a CS algorithm that defeats Laplace's equation!
Vladirator10:
  results: 26853807
  time: 99.8700
solver_BCS_SPL:
  results: 24931826
  time: 112.6600
Eric Trammel then responded with:
A note about the interpolation algorithms:
For a given number of measurements (query limit), and under a strict time limit, interpolation seems like a very good approach. However, when decoding time is not mission-critical and the focus is instead the quality of the recovery, I think more sophisticated CS algorithms can provide more accurate results (given enough time, that is).
With more time/iterations, the CS algorithm should be able to tease more information out of those measurements than a simple interpolation. I know that for Sungkwang's code (and Robert's), in order to get under the time limit, the algorithm had to be severely handicapped.
Vlad then wrote:
Igor,
Is there any benchmark database in the CS literature against which to test algorithms? If not, establishing such a database would seem a reasonable step toward standardizing performance estimates. My primary interest was from a biological/computer-vision perspective: how do CS methods compare to conventional reconstruction algorithms for under-sampled images (m samples much less than n pixels)? In the human retina, for instance, photoreceptor density varies dramatically with eccentricity, and there are 'holes' without any receptors at all. Yet our brains manage to fill in these gaps by means of some form of interpolation scheme. As the visual cortex is known to contain a wavelet-like representation of the visual field, the application of CS methods to this problem might seem reasonable at first glance. Aside from reconstruction fidelity, however, a serious constraint for real-time vision systems is processing speed. Solving an iterative L1 optimization problem might therefore prove prohibitively slow. More generally, the idea that the brain solves optimization problems in real time seems a little unrealistic, at least given our current knowledge of brain function.
To which Eric responded with:
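As an aside, the under-sampled-image setting Vlad describes (m pixel samples, m much less than n pixels) is easy to play with. Here is a minimal sketch, not anyone's contest code: it builds a synthetic image that is sparse in the 2-D DCT domain, keeps a random subset of pixels, and recovers the rest by iterative soft-thresholding, a basic CS-style reconstruction. The sparsity basis, sampling rate, and threshold are all illustrative choices.

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
n = 64

# Synthetic image that is exactly sparse in the 2-D DCT domain
coeffs = np.zeros((n, n))
support = rng.choice(n * n, size=40, replace=False)
coeffs.flat[support] = rng.normal(size=40)
img = idctn(coeffs, norm="ortho")

# Keep only a random 30% of the pixels ("queries")
mask = rng.random((n, n)) < 0.3
samples = img[mask]

# Iterative soft-thresholding: alternate enforcing the known pixels
# with shrinkage of the DCT coefficients
x = np.zeros((n, n))
lam = 0.05
for _ in range(200):
    x[mask] = samples  # data consistency on the sampled pixels
    c = dctn(x, norm="ortho")
    c = np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)  # soft threshold
    x = idctn(c, norm="ortho")

rel_err = np.linalg.norm(x - img) / np.linalg.norm(img)
```

A plain interpolation scheme would fill the gaps faster, which is exactly the speed/fidelity trade-off discussed above.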
Vlad,
Right now I'm sitting on the code to auto-generate new test sets from image databases in the same manner as was used for the MathWorks contest. I just haven't figured out what to do with it yet, but I do have some ideas for setting up a system to handle future submissions in much the same manner.
Currently, there are no "standards" for CS on images, unless you count the standard image-processing test images (Lenna, Barbara, Cameraman...).
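Eric's generator isn't public, but the contest-style test-case format is simple to sketch. The following is a hypothetical illustration (function name and parameters are mine, not his): given a grayscale image from some database, draw m random pixel queries; the benchmark task is then to reconstruct the full image from those m samples.

```python
import numpy as np

def make_test_case(img, m, rng):
    """Draw m distinct random pixel 'queries' from a grayscale image.

    Returns the query coordinates and their pixel values; a submission
    would have to estimate the full image from these m samples.
    """
    h, w = img.shape
    flat_idx = rng.choice(h * w, size=m, replace=False)
    rows, cols = np.unravel_index(flat_idx, (h, w))
    return rows, cols, img[rows, cols]

rng = np.random.default_rng(42)
# stand-in for an image pulled from a database
img = rng.integers(0, 256, size=(32, 32)).astype(float)
rows, cols, vals = make_test_case(img, m=128, rng=rng)
```

Scoring a submission would then just compare its reconstruction against `img` on the unsampled pixels.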
Other entries on the subject that might be of interest to this discussion include:
- Promoting a Larger Set of Compressive Sensing Benchmarks
- Let us define the Current State of the Art, shall we?
If you have an idea about what should be done, leave a comment at the end of this entry and let's try to build something that'll last without spending too much time on it.
As an echo to a previous entry on this blog, Bob Sturm wrote the following entry: Paper of the Day (Po'D): Probabilistic Matching Pursuit Edition. I like his style.
Other blog entries include:
- A Reader Question!
- An Oxymoron of Sparse Proportions?
- Discovery of the "other side" of the discrete Fourier transform
- Comparison of Model Orders for OMP and LoCOMP with Interference Adaptation
- Paper of the Day (Po'D): Enhancing Sparsity in Linear Prediction by Iteratively Reweighted ℓ1-norm Minimization Edition. From there: The procedure continues until we have converged to a satisfactory set of weights. Candès et al. show that this approach can lead to better solutions with fewer measurements (in a compressed sensing framework), which I will believe more thoroughly when I read it more thoroughly. For now, I am confused why we should want to perform several ℓ1 minimizations when we could just perform one. We are not dealing with an underdetermined system. (What does that weighting dooooo? The authors do make a helpful comment about the role of the weighting matrices: they discourage the small prediction weights (``go away'') and encourage the big ones (``stay a while''), which reminds me of the US game show ``The Biggest Loser.'')
[Figure: time-domain distribution of atoms for LoCOMP]
- Some Experiments with Low-complexity Orthogonal Matching Pursuit
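The reweighted ℓ1 scheme of Candès et al. that Bob's Po'D quote refers to is short enough to sketch. This is a toy illustration in the underdetermined compressed-sensing setting (not the paper's linear-prediction setup): each round solves min ||Wx||_1 subject to Ax = b as a linear program, then sets w_i = 1/(|x_i| + eps) from the previous solution. All names and parameters here are mine.

```python
import numpy as np
from scipy.optimize import linprog

def reweighted_l1(A, b, n_reweights=4, eps=0.1):
    """Solve min ||W x||_1 s.t. A x = b several times, updating
    w_i = 1 / (|x_i| + eps) from the previous solution
    (the Candes-Wakin-Boyd reweighting scheme)."""
    m, n = A.shape
    # LP variables are [x, t] with |x_i| <= t_i; objective is sum_i w_i t_i
    A_eq = np.hstack([A, np.zeros((m, n))])
    I = np.eye(n)
    A_ub = np.vstack([np.hstack([I, -I]),    #  x - t <= 0
                      np.hstack([-I, -I])])  # -x - t <= 0
    b_ub = np.zeros(2 * n)
    bounds = [(None, None)] * n + [(0, None)] * n
    w = np.ones(n)
    x = np.zeros(n)
    for _ in range(n_reweights):
        c = np.concatenate([np.zeros(n), w])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b,
                      bounds=bounds, method="highs")
        x = res.x[:n]
        w = 1.0 / (np.abs(x) + eps)
    return x

rng = np.random.default_rng(1)
n, m, k = 40, 20, 3
A = rng.normal(size=(m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = 3.0 * rng.normal(size=k)
b = A @ x_true
x_hat = reweighted_l1(A, b)
```

The first pass is plain ℓ1 minimization; the later, weighted passes are what Bob is puzzling over, and in the underdetermined case they tend to push small coefficients to exactly zero.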
Gonzalo Vazquez-Vilar wrote about Cognitive radio evolution.
and then there is:
- Aggregation of estimators, sparsity in high dimension and computational feasibility by Jean-Yves Audibert on John Langford's blog
- Hirsch Conjecture disproved, The Shape of Shape Analysis Research: Part I, and Choosing the number of clusters III: Phase Transitions by Suresh
- Parallel transport on the Stiefel manifold by Alex
- Suzuki Groups as expanders by Terry Tao