
Friday, May 22, 2009

CS: Let us define the Current State of the Art, shall we ?


In the day before yesterday's entry on Other People's Ideas, I wrote the following:
....From talking to some of you, there is also an element of weariness in traditional CS about the claims made about the solvers and other encoding approaches. For instance, with regard to solvers, there is a sense that much parameter guessing goes into reproducing the results of the underlying publication. While one way to avoid this situation is to make one's code available for people to use, another way is probably to make sure that the same benchmark exercise a la Lena is available to authors. Yet this is rarely done, even though Sparco is there for that purpose....
That seems to have struck a chord, as some of you wondered whether I was talking about their solvers. Further, Arian Maleki reminded me that he tried exactly this in the presentation he had prepared for SPARS 09, except he could not present it himself because he did not get a visa in time. I am not a political person, but I do not suffer lightly the incompetence of bureaucrats (those of you who know me know the colorful language I use in this type of situation), especially in this case, where a good presentation would have enabled a much needed discussion on a subject of increasing urgency. But let me go back to what Arian wrote to me in response to the Other People's Ideas entry:

....Actually, the problem that you mentioned really exists in compressed sensing: most of the algorithms that have been proposed so far need either some oracle information (like the sparsity) or a very precise tuning of parameters, which might be very difficult or even impossible in some cases. The theoretical guarantees of most of the algorithms are also either weak or derived under the assumption that the parameters are carefully tuned to satisfy certain properties, and they will not help the user find good estimates of those parameters. But David Donoho and I proposed a method to avoid this problem, and our work was published at SPARS 09.
We also applied our framework to several famous algorithms (like iterative soft/hard thresholding, CoSaMP, Subspace Pursuit and some variants, ...) and did a fair comparison among them. I believe whoever wants to use one of these algorithms can easily tell from our paper which algorithm to use for a given application. If everyone who wants to publish an algorithm uses this framework, we can easily see the state of the art, and I am sure it will help the research community improve their algorithms faster. If you are interested in reading our paper, it is:
Optimally Tuned Iterative Reconstruction Algorithms for Compressed Sensing

In the paper (written with Dave Donoho), one can read the following:
....We provide three versions of iterative algorithms based on our optimal tuning exercise: Recommended-IST, Recommended-IHT and Recommended-TST. They are implemented in Matlab and published at URL :
In our recommended versions, there are no free parameters. The user specifies only the matrix A and the left-hand side y. In particular the user does not specify the expected sparsity level, which in most applications cannot be considered known. These recommended algorithms are not the same as previously published algorithms. For example, Recommended-TST has parameters α = 1 and β = 1, so it initially seems identical to Subspace Pursuit [8]. However, Subspace Pursuit demands an oracle to inform the user of the true underlying sparsity of the vector. Recommended-TST has embedded in it a maximum value for the assumed sparsity level (see Table III). If the actual sparsity in x0 is better than the maximin value, the algorithm still works, but if it is worse, the algorithm won't work anyway. The user does not need to know this number – it is baked into the code. In effect, we have de-oracle-ized the Subspace Pursuit method....
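To make the "no free parameters" point concrete, here is a loose sketch (in Python/NumPy, my own illustration, not the authors' Matlab code) of a two-stage thresholding iteration in the spirit of Recommended-TST: the user hands over only A and y, and the assumed sparsity ceiling is hard-coded inside the routine. The 25% figure below is a placeholder, not the tuned value from Table III of the paper.

```python
import numpy as np

def tst_sketch(A, y, n_iter=50):
    """Loose sketch of a two-stage thresholding (TST) iteration with the
    assumed sparsity baked in, in the spirit of Recommended-TST.
    The user supplies only A and y; no sparsity level, no step size."""
    n, N = A.shape
    # Placeholder sparsity ceiling: the paper derives the tuned value from
    # the undersampling ratio n/N (Table III); 25% of n is only an illustration.
    k = max(1, int(0.25 * n))
    x = np.zeros(N)
    for _ in range(n_iter):
        v = x + A.T @ (y - A @ x)                      # gradient step
        stage1 = np.argsort(np.abs(v))[-k:]            # first thresholding
        support = np.union1d(stage1, np.flatnonzero(x))
        w, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        idx = np.argsort(np.abs(w))[-k:]               # second thresholding
        x = np.zeros(N)
        x[support[idx]] = w[idx]
    return x
```

Whether this particular update matches the published Recommended-TST exactly is beside the point; the interface is what matters: x_hat = tst_sketch(A, y) and nothing else.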

As an average engineer looking to use reconstruction solvers as a commodity (most of my time is spent fitting the physics to the encoding problem), I am really looking forward to the same type of treatment for the Reweighted, Bregman, Nesterov algorithms, or any other solvers from the Zoo (see the remark by Dirk Lorenz in A projection proximal-point algorithm for l^1-minimization). As you may recall, I talked about this specific issue before, when I wrote about two examples of issues found by others:

...In a different area, this is an issue of general interest: what are the parameters needed to make sure your reconstruction solver converges to the right solution? It looks like there is no obvious choice for the Bregman solvers, as pointed out by Michael Friedlander when he noted an issue with the parameter choice in this entry. More recently, an unknown blogger seems to have had similar issues with a Bregman algorithm as well. And so I am not overly surprised to see two papers at SPARS'09 (lists of papers are here and here) trying to deal with a similar issue for the IHT algorithm:
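For contrast, here is what the untuned situation looks like: a minimal, textbook-style iterative hard thresholding loop (my own sketch, not any of the SPARS'09 variants), where both the assumed sparsity k and the step size are free parameters the user has to guess.

```python
import numpy as np

def iht(A, y, k, step=1.0, n_iter=200):
    """Plain iterative hard thresholding: a gradient step followed by
    keeping the k largest entries. Both k and the step size are free
    parameters; picking them badly makes the iteration stall or diverge."""
    N = A.shape[1]
    x = np.zeros(N)
    for _ in range(n_iter):
        x = x + step * (A.T @ (y - A @ x))     # gradient step
        keep = np.argsort(np.abs(x))[-k:]      # indices of the k largest entries
        pruned = np.zeros(N)
        pruned[keep] = x[keep]
        x = pruned
    return x
```

Run it with the wrong k or an overly aggressive step and the recovery degrades badly, which is exactly why a tuning exercise like Arian's, or a shared benchmark, is needed.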

I think one of the other issues with finding the right parameters for a specific reconstruction algorithm is making sure that we are also dealing with the right domain-specific dataset, be it a generic vector, a natural image, or something else. A case in point is the recent paper by Jianwei Ma entitled Improved Iterative Curvelet Thresholding for Compressed Sensing, where a new dictionary is investigated on top of the usual canonical basis. As it stands, this really means that we are contemplating a combinatorial explosion of possibilities and that Sparco needs to be part of the solution. As it turns out, Michael Friedlander, one of the authors of Sparco, also sent me an e-mail:
...Just a quick note to say that I read your blog's May 19th entry, and your exhortation for greater reproducibility in algorithmic work. I've always been grateful that you've pushed Sparco, but alas, I'll admit that its benchmark problems are rarely used in the literature.

Interestingly, the underlying linear-operator library is used much more often. In a few months we plan on releasing a separate package that contains only the operator library, but in an expanded version with much more functionality. Of course we'll send word when it's released....
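For readers who have not looked at that part of Sparco: the operator library wraps measurement operators (masks, transforms, and their compositions) as implicit linear operators, so a solver only ever asks for products with A and its transpose. Here is a small, hypothetical illustration of the idea using SciPy's LinearOperator; the function name and the partial-DCT example are mine, not Sparco's actual API.

```python
import numpy as np
from scipy.fft import dct, idct
from scipy.sparse.linalg import LinearOperator

def partial_dct_operator(N, rows):
    """Implicit measurement operator A = S * C, where C is the orthonormal
    DCT and S keeps only the rows listed in `rows`. No matrix is ever formed."""
    def matvec(x):                       # A @ x
        return dct(np.ravel(x), norm='ortho')[rows]
    def rmatvec(y):                      # A.T @ y
        full = np.zeros(N)
        full[rows] = np.ravel(y)
        return idct(full, norm='ortho')
    return LinearOperator((len(rows), N), matvec=matvec, rmatvec=rmatvec)

# Example: 128 "measurements" of a length-512 signal.
rng = np.random.default_rng(0)
A = partial_dct_operator(512, np.sort(rng.choice(512, size=128, replace=False)))
```

With such an object, a solver written only in terms of A @ x, A.T @ r and A.shape (like the IHT sketch above) should run unchanged, which is presumably why the operator library gets so much more use than the benchmark problems.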
I had noted that the linear operator library was indeed very nifty, as it takes away some of the difficulties underlying the implementation of masks and so forth. This is great news, but let us come back to this issue of benchmarks. Ever since the reproducible research revolution and the birth of compressive sensing, this is probably the first time we have a way to make sure that a field does not collapse under its own complexity. Can we rise to that challenge ? While this complexity may have some attraction for the experts, the average user is rapidly turned off, and this may eventually backfire on the whole field. I am not sure what the next step should be, but here are some of the questions:

  • Should we set up something like the Standard Performance Evaluation Corporation ?
  • How would the choice of solver parameters be addressed or resolved ?
  • Should we have a script that runs through every benchmark once a new solver is created and made available ? (a minimal sketch of such a harness follows this list)
  • Should we have a procedure such as the one mentioned by Arian implemented, so that a parameter set is chosen for all the benchmark cases ?
  • Should we have a hidden benchmark (a la the Netflix prize) so that parameter optimization cannot be performed ?
  • Where should the scripts and figures of merit be hosted ?
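On the scripted-benchmark question, the mechanics are not the hard part. A hypothetical harness (the names and the figure of merit are mine, and the real thing would pull its problems from Sparco) could be as simple as:

```python
import numpy as np

def run_benchmarks(solvers, problems, tol=1e-3):
    """Run every registered solver on every benchmark problem and record one
    figure of merit (relative l2 reconstruction error).
    `solvers`  maps a name to a callable  solve(A, y) -> x_hat
    `problems` maps a name to a tuple     (A, y, x_true)"""
    scores = {}
    for pname, (A, y, x_true) in problems.items():
        for sname, solve in solvers.items():
            x_hat = solve(A, y)
            err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
            scores[(pname, sname)] = err
            status = "recovered" if err < tol else "failed"
            print(f"{pname:<20} {sname:<20} rel. error = {err:.2e}  [{status}]")
    return scores
```

The hard part is everything else on the list above: who hosts it, who curates the problems, and how to keep solver authors from tuning to the benchmark, which is exactly why a hidden test set a la the Netflix prize is tempting.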

One more thing: I know I don't have much traction, but if you publish a paper that references Sparco, it will get special treatment on this blog. I might even contact some of you on the hardware side and ask you to provide a set of data you have obtained with your technology; I'll forward it directly to the Sparco team. Michael tells me he is going to continue adding new cases from the current literature.


Disclosure: I don't have a stake in Sparco.
Credit photo: Wikipedia. Can you fit the Mona Lisa in 140 characters ?
