*Update:*

__This entry is not about a paper rejection.__ I personally do not have a bone in this, and __Bob is very clear on accepting the diverse comments of the reviewers.__ It is about pre- versus post-peer-review publishing, and how the community is certainly deprived of insights by the former and potentially enriched by the latter. Bob provides the story behind the three rejections he went through for a particular paper of his. Go read it, I'll wait.

I am bothered by this account because some of the reviewers' comments are simply not honest. While a few of these reviews are very valid and led to the paper's ultimate rejection, the power given to the rest of these so-so gatekeepers in the pre-publication journal system eventually yields fewer insights for the community as a whole.

For the sake of disclosure, I know Bob, but I haven't talked to him about this blog entry. I have, however, featured these results before on this blog. Why? Not because I know Bob but because these results __ought to be widely known__. In particular, some of the results show in a spectacular way that SL0, a simple solver that encountered a similar series of rejections, beat most of the algorithms published in the years after the availability of this solver. In a nice example of post-publication peer review, the caveats for SL0 were mentioned here, while some of the perversity of the pre-publication peer review system was exposed here as regards the genesis of an improved version of that solver. In short, the noisy version of that solver never saw the light of day because, again, the related paper never made it through pre-publication peer review.

- The comments of reviewer 3 for the second paper provide both a deep insight and a fair criticism of the paper. The reviewer ought to be commended for this thorough job.

- The comments of reviewer 2 for the third paper are fair.

- The comments of reviewer 3 for the third paper are fair except for:

"....- It is very difficult to get any insight from the results or generalize to other distributions.

- There are few conclusions beyond "it (sometimes significantly) depends on the distribution" (and perhaps "forget about ROMP"). .."

I think this is the point of the paper: there are, indeed, few conclusions.

In all these cases of fair review (some probably asking for rejection), I am absolutely sure that the reviewers would not mind being recognized for the work they did as they provided both insight and valuable direction for the future version of this paper.

And now we get on the bad side of the gate-keeping enabled by the pre-publication peer-review process:

- With regards to the comments of reviewer 1 for the first paper, my question is simple:
*What the hell is wrong with you?* There are enough figures and the point is well made.

- With regards to the comments of reviewer 2 for the first paper: From the conclusion:

"... Conclusion: As a conclusion, this paper considers the well-known observation that prior signal information affects the performance of the recovery algorithm used; but, apart from giving some illustrative examples, it does not provide any analysis about the reasons beneath this behavior. This work is slightly novel and needs more experimental data and a more elaborate analysis of the results. ...."

The onus is on you as a reviewer to show the author where and in what publication those well-known observations have been documented. As far as I know, this is really the first one. If you don't have enough time to review a paper, please make time or let the editor know about your unavailability. She/he'll thank you for your honesty.

- Reviewer 1 for the second paper wrote:

"...The paper attempts to compare the signal recovery performance of 15 existing techniques. The codes used in this paper are mostly downloaded from the publicly available website. From this aspect, the novelty of the work is very minor...."

Here is what gets to me: all these algorithms depend on several parameters. Using these "off-the-shelf" solvers and exploring how those parameters change the behavior of the solver is a way to explore the phase space of possibilities. The novelty is the exploration of the phase space, not the fact that multi-parameter algorithms exist and/or are publicly available. Continuing further, we have:

"....What is more important is that no explicit conclusions have been reached after the comparisons. Some observations, such as "I do not know at this time what causes this behavior", are not acceptable. This makes the comparisons less convincing throughout the paper. The author needs go through these observations carefully to make sure the comparisons are correctly done. Also, how does the choice of different parameters affect the results...."

So papers with "I don't know" are not acceptable? Huh? Thanks for the insight.
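The reviewer's closing question, how the choice of parameters affects the results, is answerable by exactly the kind of sweep the paper performs. Here is a minimal sketch of what such a phase-space exploration looks like (my own illustration, not the paper's code; the solver, dimensions, and parameter grid are all assumptions): take a bare-bones Orthogonal Matching Pursuit and sweep a single parameter, the sparsity level the solver is told to assume, while the true sparsity stays fixed.

```python
import numpy as np

def omp(A, y, s_assumed):
    """Bare-bones Orthogonal Matching Pursuit with a user-chosen sparsity level."""
    residual, support = y.copy(), []
    for _ in range(s_assumed):
        corr = np.abs(A.T @ residual)
        corr[support] = 0.0                      # never re-pick a selected column
        support.append(int(np.argmax(corr)))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

def success_rate(s_assumed, N=100, m=40, s_true=5, trials=50, seed=0):
    """Fraction of random trials where OMP exactly recovers an s_true-sparse vector."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        A = rng.standard_normal((m, N)) / np.sqrt(m)
        x = np.zeros(N)
        x[rng.choice(N, s_true, replace=False)] = rng.standard_normal(s_true)
        x_hat = omp(A, A @ x, s_assumed)
        hits += np.linalg.norm(x_hat - x) < 1e-6 * np.linalg.norm(x)
    return hits / trials

# One-parameter slice of the phase space: the sparsity the solver assumes.
for s_assumed in (3, 5, 10, 20):
    print(s_assumed, success_rate(s_assumed))
```

Underestimating the sparsity kills recovery outright, while overestimating it is surprisingly benign here; that is precisely the kind of empirical observation such a sweep surfaces, and it is not "common sense" until someone has run it.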

- The comments of reviewer 2 for the second paper are initially fair:

"....The transition phase of Basis Pursuit for Gaussian or USE matrices is theoretically known for years and studied especially by Donoho. It was proved that there exists a limit ratio s/m depending on m/N ensuring recovery of all (or most) sparse vectors. The existence of such a limit ratio s/m depending on m/N ensuring recovery (more precisely it is an asymptotic result) is not obvious but it is proved for BP. Author generalizes this result to all other algorithms. There is no reason to think that such a limit ratio is a function of m/N. It may depend on m and N but not only on the ratio. Author makes the assumption is true and since only one value of N is used there is no way to know if it is correct or not..."

but I absolutely do not agree with what follows:

"... Even 2 or 3 values of N were tried, it would be not sufficient to guarantee the correctness of this strong assumption..."

The EURASIP Journal on Advances in Signal Processing is __not__ a math journal. Yes, we need to get rid of any non-asymptotic behavior, but this has to be done within the context of reasonable values of N, especially for a signal processing journal. Eventually, even the computations of Donoho et al. rely on finite values of N.
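Nothing stops a reader from probing that finite-N sensitivity directly. As a minimal sketch under assumed settings (Gaussian matrices, ratios m/N = 0.5 and s/m = 0.2, below the Donoho-Tanner transition for Basis Pursuit; my illustration, not the paper's code), solve the l1 problem as a linear program for two values of N and compare empirical success rates:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """min ||x||_1 s.t. Ax = y, posed as an LP via x = u - v with u, v >= 0."""
    m, N = A.shape
    res = linprog(c=np.ones(2 * N),
                  A_eq=np.hstack([A, -A]), b_eq=y,
                  bounds=(0, None), method="highs")
    return res.x[:N] - res.x[N:]

def success_rate(N, delta=0.5, rho=0.2, trials=10, seed=0):
    """Empirical BP recovery rate at fixed ratios m/N = delta and s/m = rho."""
    rng = np.random.default_rng(seed)
    m = int(delta * N)
    s = max(1, int(rho * m))
    hits = 0
    for _ in range(trials):
        A = rng.standard_normal((m, N)) / np.sqrt(m)
        x = np.zeros(N)
        x[rng.choice(N, s, replace=False)] = rng.standard_normal(s)
        hits += np.linalg.norm(basis_pursuit(A, A @ x) - x) < 1e-4 * np.linalg.norm(x)
    return hits / trials

# Same ratios, two problem sizes: does the empirical curve move with N?
for N in (50, 100):
    print(N, success_rate(N))
```

Below the transition, both problem sizes recover essentially every time; sliding rho toward the transition is where a dependence on N, if any, would show up. That is exactly the kind of finite-N experiment a signal processing journal can reasonably ask for without demanding a proof of the asymptotics.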

- The comments of reviewer 4 for the third paper elicit the same feedback as those of reviewer 2 for the first paper: when writing

"...The conclusion is not significant in the sense that it has been a common sense in the literature..."

a competent reviewer ought to point to a specific instance of this "common sense in the literature". Good luck, as there is none.

I think we can all deal competently with the noise in a post-publication peer review process and do away with this closed process. What are we waiting for?

Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.

## 3 comments:

I am a regular reader and comment now and then here. I'm going anonymous for obvious reasons :)

Here's my opinion of compressive sensing. I barely have my foot in the door with a few papers, but from this episode and other hearsay I have a couple of observations:

Compressive sensing is a fairly new field, in spite of the fact that there are ~10^3 papers (and counting, in exponential increments). In spite of the many papers, the fact remains that many aspects of compressive sensing are not as well understood as they ought to be if we were to apply it widely. The AMP formalism, as I see it, is probably the best approach to deriving a closed form for the distribution of the recovery error, and in my opinion, it wonderfully paves the way for compressive sensing applications. Yet the mathematical challenges are significant. At a meta-level, the mathematical approach to establishing a general theory of AMP (and/or the development of competing approaches which yield results as good as convex optimization) is a long-term project.

Given this, I think the community can possibly be more open-minded about computational approaches to analyzing the behavior of compressive recovery. Unlike what some of the comments on Bob's paper seem to indicate, researchers in this area should appreciate that there need not be any set template (folk knowledge, etc.!) to follow for studying the behavior of compressive recovery. Papers should be evaluated for their rigor and correctness rather than for their significance. At the end of the day, taking a meta-perspective, signal processing consists of devising convenient and accurate mathematical models for signals and systems and understanding, analyzing, and developing algorithms for conventional 'processing' tasks.

We know quite well that conventional least-squares-type analysis doesn't directly apply to compressive sensing. So to an extent, by choosing to work with compressive sensing, one already discards the need to conform to what is sometimes touted as 'conventional'. Insisting on authors adopting specific approaches only, and nothing else, especially in a nascent field such as compressive sensing, can have deleterious effects in the long run:

0. It makes the assumption that there ought to be a general one-size-fits-all approach to analyzing compressive signal recovery. There may be a general overarching theory that's convenient, reliable, and accurate in describing compressive signal recovery. But as a community we shouldn't assume the existence of such a method. The state of the art hardly contains such a method either.

1. It can discourage the development of novel approaches to analyzing compressive sensing.

2. It can discourage folks from tailoring approaches to analyzing and designing compressive recovery for their specific applications at hand.

x-posting on Bob's blog, just in case it's more relevant there.

Thanks Compressive_Sensing_Romantic

Igor
