Thursday, May 06, 2010

CS: Matlab Programming Contest results

The Matlab programming contest featuring a compressive sensing example is over; it drew about 182 participants. In the comment section of the previous entry on the matter, one can read this exchange:

Eric Tramel writes:

I was taking a look at Robert's code and one thing that I noticed was that he was using blocking to speed up the recovery process. This is a good approach, but I have a comment on the method.

The current implementation shown recovers every block independently. This does not take into account inter-block correlations. Taking a look at Lu Gan's work (Block Compressed Sensing) and Mun & Fowler's directional transform version, you can see a marked improvement when we:

A. Threshold over the entire signal rather than each block independently.

B. Apply a smoothing step (e.g., a Wiener filter) at each iteration to smooth out blocking artifacts.
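For readers who want to see what Eric's two suggestions look like in practice, here is a rough MATLAB sketch of one iteration of a smoothed block-based recovery in the spirit of the Gan and Mun & Fowler papers he mentions. It is purely illustrative, not the contest code: x is the current n-by-n estimate (n a multiple of the block size B), Phi is an m-by-B^2 block measurement matrix, y is a cell array of per-block measurements visited in the same order as the blocks, lambda is the threshold, and wiener2/wavedec2/waverec2 require the Image Processing and Wavelet Toolboxes.

n = size(x, 1);
x = wiener2(x, [3 3]);                       % (B) smooth out blocking artifacts
k = 0;
for r = 1:B:n
  for c = 1:B:n
    k   = k + 1;
    blk = x(r:r+B-1, c:c+B-1);
    blk = blk + reshape(Phi' * (y{k} - Phi * blk(:)), B, B);  % Landweber step
    x(r:r+B-1, c:c+B-1) = blk;
  end
end
[C, S] = wavedec2(x, 3, 'haar');             % transform of the WHOLE image...
C = sign(C) .* max(abs(C) - lambda, 0);      % (A) ...one global soft threshold
x = waverec2(C, S, 'haar');                  % back to the image domain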
Robert then goes on to say:
So, here is what I just typed into my text editor before reading Eric's comment :-)

(Summary: Given more time and effort CS solutions could have done better.)

"While I'm waiting for my final score (which I expect to be around 53500), I'd like to add a few comments on my algorithm to give an idea of the level of heuristic.

As mentioned before I processed the data blockwise, mainly because of the lack of a fast transform for the given measurement setup (non-weighted sums of pixels).
The algorithm is basic soft thresholding plus backprojection with decreasing threshold parameter. Degrees of freedom in this setup are #iterations, the sequence of decreasing thresholds and a regularization parameter for the backprojection - quite simple.

The main drawbacks of this approach are:
A. Haar wavelet basis (an overcomplete wavelet frame would probably have performed better)

B. no regularization across the block borders (i.e., assembling transform matrices for bigger areas)

C. no heuristic intermediate- or post-processing (smoothing, edge-detection, ...)

So, as "R" mentioned, even with non-adaptive sampling one could probably have reached the performance of adaptive solvers, using more advanced algorithms (SPGL1, SL0, ..?), fast transformations, additional heuristics tuned to the problemset and/or additional computation time.

Not quite as devastating as the other anonymous poster would make us believe."

Robert
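As an aside, the kind of loop Robert describes (blockwise soft thresholding plus a regularized backprojection with a decreasing threshold) can be sketched in a few lines of MATLAB. This is an illustration only, not Robert's entry: A is the m-by-B^2 measurement matrix of unweighted pixel sums for one B-by-B block, y its m measurements, and lambda0, alpha, mu and nIter are illustrative knobs; wavedec2/waverec2 are from the Wavelet Toolbox.

xb = reshape(A' * y, B, B);                   % crude initial backprojection
G  = A' * A + mu * eye(B^2);                  % regularized normal matrix
for it = 1:nIter
  lambda = lambda0 * alpha^(it - 1);          % decreasing threshold schedule
  [C, S] = wavedec2(xb, 2, 'haar');           % Haar analysis
  C  = sign(C) .* max(abs(C) - lambda, 0);    % soft thresholding
  xb = waverec2(C, S, 'haar');                % Haar synthesis
  res = y - A * xb(:);                        % measurement residual
  xb  = xb + reshape(G \ (A' * res), B, B);   % regularized backprojection
end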
The Vlad then mentioned:

Well, my score for 'vladirator10' (solving Laplace's equation with Dirichlet boundary conditions supplied by random pixel sampling) was around 41874, still quite a bit better than the 53500 Anonymous was expecting. So, it would have been interesting if a CS algorithm could beat that. (Of course, we can still make comparisons by running the 'runcontest(1)' command that came with the 'sensor' package.) As my algorithm didn't involve any adaptive sampling, I think it's not unreasonable to compare these reconstruction methods (L1 versus Laplace's equation).
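For the curious, the core idea behind vladirator10 can be sketched in a few lines of MATLAB. Again, this is an illustration of the principle rather than the actual entry: n is the image side, known is a logical mask of the sampled pixels, vals holds their values in the order given by find(known), and nSweeps is an illustrative iteration count.

x = zeros(n);
x(known) = vals;                             % Dirichlet data from the samples
for sweep = 1:nSweeps
  up    = x([1 1:n-1], :);   down  = x([2:n n], :);
  left  = x(:, [1 1:n-1]);   right = x(:, [2:n n]);
  x = (up + down + left + right) / 4;        % Jacobi sweep of Laplace's equation
  x(known) = vals;                           % re-impose the sampled pixels
end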
To which Robert replied:

Just for the record, my last attempt went somewhat awry, so the best score remains 53556.2.

It's quite embarrassing (and I guess that was the Anonymous poster's point) that simple (nonadaptive) linear interpolation like this early code gets 36k-ish scores so effortlessly.

But then, the "world of compressed sensing" doesn't end at 8 iterations of naïve IST (like in my code).



The winner of the contest is Hannes Naude; congratulations to him. I note that his entries are somewhat appropriately called "Overfitting is a fool's game". His winning entry is here. Here is what he has to say about his entries in this excellent thread:


> Robert,
> I'm not an expert on this field, but the reading that I have done suggests that you actually want queries that are as random as possible (and hence entirely unlocalized). This runs counter to our usual intuition, but so does beating the Nyquist limit...

That's exactly right. Typical compressive sampling queries are just random masks. I experimented with some compressive sampling code in the darkness phase of the competition (from the L1 magic website as well as from David Wipf's homepage), but it wasn't competitive.

The reason why our simple block-based solvers appear to outperform the cutting-edge stuff from academia is that the problem as stated here does not correspond exactly to the compressive sampling problem typically studied in the real world. In that setting, one is not able to adjust one's query pattern dynamically based on the results from earlier queries. Rather, the entire set of query masks needs to be specified at the outset.

Being able to adjust our queries to zoom in on areas with detail is a significant (although not realistic) advantage and allows the custom-developed code to outperform the off-the-shelf CS code by orders of magnitude.

Regards
Hannes
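To make the distinction Hannes draws concrete, the non-adaptive setting of "textbook" compressive sensing fits in two lines of MATLAB (illustrative; k, n, m and img are assumed):

masks = rand(k, n * m) > 0.5;            % k random binary masks, all fixed a priori
y     = double(masks) * double(img(:));  % unweighted sums of the masked pixels

Every mask is drawn before a single measurement comes back. In the contest, by contrast, mask i+1 may depend on y(1:i), which is what lets the block-based entries zoom in on detailed regions.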

All in all, it looks more like an instance of game theory than a benchmark :). As I have mentioned, it would seem to me that the way to discover the really interesting entries would be to re-run most of them against a new set of images and see how the ranking changes. Maybe we could do this with the runcontest function that was given in the example.
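Something along these lines could do it, with the caveat that everything here is a placeholder: the solver names are hypothetical, the error metric is a plain sum of squared differences rather than the contest's actual scoring, and in reality each entry only sees the image through the contest's query mechanism, which runcontest handles.

solvers = {@solver_naude, @solver_interpol, @solver_vladirator10};  % hypothetical names
for s = 1:numel(solvers)
  err = 0;  t = 0;
  for i = 1:numel(testImages)               % testImages would be the new image set
    tic;
    rec = solvers{s}(testImages{i});        % placeholder for the contest interface
    t   = t + toc;
    err = err + sum((double(testImages{i}(:)) - rec(:)).^2);
  end
  fprintf('%s: error %g, time %.1f s\n', func2str(solvers{s}), err, t);
end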

10 comments:

Eric Tramel said...

Actually, so long as the entries remain up there, it would be pretty easy to write a script to run a bunch of different solvers on a new test set. The new test set could be made using large scale crops, rotations, and noise on something like the Corel image database and then giving each test image a reasonable subsampling ratio.

Igor said...

Eric, do you want to take a stab at it? I am sure it could be made into a paper or something.

Eric Tramel said...

I'm actually downloading the databases, now :P

Igor said...

One small problem is that many of the entries close to the best score look very much like each other.

Eric Tramel said...
This comment has been removed by the author.
Eric Tramel said...

Also, the main algorithm behind the most successful entries uses a massive amount of recursion.

On my laptop, it's impossible to run them without completely crashing out. You'd need a cluster, methinks (which I could try, but I'd rather not piss off our sysadmins if it breaks :P)

Anonymous said...

I think you're doing something wrong. It runs fine on my laptop. And the Matlab contest machine is just a normal dual core. Nothing fancy. Are you running against the practice test suite that was distributed?

The Vlad said...

I used the 'runcontest' function to run Nicholas Howe's Interpol code alongside my own solver (with a modified front end to sample uniformly, rather than randomly, as in Interpol). The performance of the two algorithms was similar, though Nicholas's code performed a little better:

Interpol
results: 24963122
time: 52.4700

vladirator10(modified)
results: 27941056
time: 56.5600

So, Nicholas's was a very clever and efficient piece of code. Interestingly, the performance of my algorithm with random sampling was much poorer in terms of speed but only slightly worse in terms of reconstruction fidelity (i.e., similarity to the original image).

vladirator10(original)
results: 28908833
time: 113.8100

I guess this stands in contrast to L1 reconstruction approaches, where random sampling in the signal basis would be expected to be beneficial, at least from the viewpoint of reconstruction quality in a wavelet basis.

The Vlad said...

Oops...there was a bug in my code.

vladirator10(modified)
results: 26853807
time: 108.9400

So, still better than random sampling in terms of reconstruction fidelity, but comparably slow. But nowhere near the performance of Nicholas's code.

Eric Tramel said...

After running the linear interpolation techniques, it's pretty amazing how well they work for natural images...and a lot quicker. Pretty interesting!
