- Update: this entry is not about a paper rejection. I personally do not have a stake in this, and Bob is very clear about accepting the reviewers' diverse comments. It is about pre- versus post-publication peer review, and how the community is certainly deprived of insights by the former and potentially enriched by the latter -
Bob provides the story behind the three rejections he went through for a particular paper of his. Go read it; I'll wait.
I am bothered by this account because some of the reviewers' comments are simply not honest. While a few of these reviews are entirely valid and led to the paper's ultimate rejection, the power given to the rest of these so-so gatekeepers in the pre-publication journal system eventually yields fewer insights for the community as a whole.
For the sake of disclosure, I know Bob, but I haven't talked to him about this blog entry. I have, however, featured these results before on this blog. Why? Not because I know Bob but because these results ought to be known widely. In particular, some of the results show in spectacular fashion that SL0, a simple solver that went through a similar series of rejections, outperformed most of the algorithms published in the years after it became available. In a nice example of post-publication peer review, the caveats for SL0 were mentioned here, while some of the perversity of the pre-publication peer review system was exposed here with regard to the genesis of an improved version of that solver. In short, the noisy version of that solver never saw the light of day because, again, the related paper never made it through pre-publication peer review.
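For readers who haven't seen it, the core of SL0 really is that simple. Below is a minimal numpy sketch of the smoothed-L0 idea, my own illustrative rendition with assumed default parameters, not the authors' reference code: take a few gradient steps on a Gaussian surrogate of the L0 norm, re-project onto the constraint set {x : Ax = b}, and gradually shrink the smoothing width sigma.

```python
import numpy as np

def sl0(A, b, sigma_min=1e-4, sigma_decay=0.5, mu=2.0, inner=3):
    """Smoothed-L0 sketch: gradient steps on a Gaussian surrogate of the
    L0 norm, projected back onto the affine feasible set {x : Ax = b}."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ b                                # minimum-L2-norm feasible start
    sigma = 2.0 * np.max(np.abs(x))               # start with a wide surrogate
    while sigma > sigma_min:
        for _ in range(inner):
            # gradient step that shrinks entries small relative to sigma
            x = x - mu * x * np.exp(-x**2 / (2.0 * sigma**2))
            x = x - A_pinv @ (A @ x - b)          # re-project onto Ax = b
        sigma *= sigma_decay                      # tighten the surrogate
    return x

# quick check on a synthetic noiseless problem (illustrative sizes)
rng = np.random.default_rng(1)
m, n, k = 50, 100, 5
A = rng.standard_normal((m, n))
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
rel_err = np.linalg.norm(sl0(A, A @ x0) - x0) / np.linalg.norm(x0)
```

On a noiseless random Gaussian problem well inside the recovery region, this handful of lines typically recovers the sparse vector essentially exactly, which is part of why the solver's string of rejections is so striking.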
- The comments of reviewer 3 for the second paper provide both a deep insight and a fair criticism of the paper. The reviewer ought to be commended for this thorough job.
- The comments of reviewer 2 for the third paper are fair.
- The comments of reviewer 3 for the third paper are fair except for:
"....- It is very difficult to get any insight from the results or generalize to other distributions.
- There are few conclusions beyond "it (sometimes significantly) depends on the distribution" (and perhaps "forget about ROMP"). .."
I think this is the point of the paper: there are, indeed, few conclusions.
In all these cases of fair review (some probably asking for rejection), I am absolutely sure the reviewers would not mind being recognized for the work they did, as they provided both insight and valuable direction for a future version of this paper.
And now we get to the bad side of the gate-keeping enabled by the pre-publication peer-review process:
- With regard to the comments of reviewer 1 for the first paper, my question is simple: what the hell is wrong with you? There are enough figures, and the point is well made.
- With regard to the comments of reviewer 2 for the first paper, from the conclusion:
"... Conclusion:As a conclusion, this paper considers the well-known observation that prior signal information affects the performance of the recovery algorithm used; but, apart from giving some illustrative examples, it does not provide any analysis about the reasons beneath this behavior. This work is slightly novel and needs more experimental data and a more elaborate analysis of the results. ...."
The onus is on you as a reviewer to show the author where, and in what publication, those well-known observations have been documented. As far as I know, this is really the first one. If you don't have enough time to review a paper, please make time or let the editor know about your unavailability. She/he'll thank you for your honesty.
- Reviewer 1 for the second paper
"...The paper attempts to compare the signal recovery performance of 15 existing techniques. The codes used in this paper are mostly downloaded from the publicly available website. From this aspect, the novelty of the work is very minor...."
Here is what gets to me: all these algorithms depend on several parameters. Using these "off-the-shelf" solvers and exploring how those parameters change the behavior of each solver is a way to explore the phase space of possibilities. The novelty is the exploration of the phase space, not the fact that multi-parameter algorithms exist and/or are publicly available. Continuing further, we have:
"....What is more important is that no explicit conclusions have been reached after the comparisons. Some observations, such as "I do not know at this time what causes this behavior", are not acceptable. This makes the comparisons less convincing throughout the paper. The author needs go through these observations carefully to make sure the comparisons are correctly done. Also, how does the choice of different parameters affect the results...."
So papers with "I don't know" in them are not acceptable? Huh? Thanks for the insight.
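To make the phase-space point concrete, here is a hedged numpy sketch of what such an exploration looks like. This is my own toy version, not Bob's actual protocol: it uses a plain OMP implementation as the stand-in off-the-shelf solver, and the function names, grid, and trial counts are illustrative assumptions.

```python
import numpy as np

def omp(A, b, s):
    """Plain Orthogonal Matching Pursuit: greedily select s columns of A."""
    support, r, coef = [], b.copy(), np.zeros(0)
    for _ in range(s):
        j = int(np.argmax(np.abs(A.T @ r)))       # most correlated column
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        r = b - A[:, support] @ coef              # update the residual
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

def phase_diagram(N, deltas, rhos, trials=10, seed=0):
    """Empirical exact-recovery rate on a (delta = m/N, rho = s/m) grid."""
    rng = np.random.default_rng(seed)
    grid = np.zeros((len(rhos), len(deltas)))
    for i, rho in enumerate(rhos):
        for j, delta in enumerate(deltas):
            m = max(1, int(delta * N))
            s = max(1, int(rho * m))
            for _ in range(trials):
                A = rng.standard_normal((m, N))
                x0 = np.zeros(N)
                x0[rng.choice(N, s, replace=False)] = rng.standard_normal(s)
                xhat = omp(A, A @ x0, s)
                grid[i, j] += np.linalg.norm(xhat - x0) < 1e-6 * np.linalg.norm(x0)
    return grid / trials

# tiny demo grid: one undersampling ratio, an easy and a hard sparsity level
g = phase_diagram(N=50, deltas=[0.5], rhos=[0.1, 0.8], trials=10)
```

Each cell of the returned grid is an empirical success probability at one (m/N, s/m) point; sweeping a solver's internal parameters simply adds axes to this grid. That map, not the downloaded solvers themselves, is where the contribution lies.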
- The comments of reviewer 2 for the second paper are initially fair:
"....The transition phase of Basis Pursuit for Gaussian or USE matrices is theoretically known for years and studied especially by Donoho. It was proved that there exists a limit ratio s/m depending on m/N ensuring recovery of all (or most) sparse vectors. The existence of such a limit ratio s/m depending on m/N ensuring recovery (more precisely it is an asymptotic result) is not obvious but it is proved for BP. Author generalizes this result to all other algorithms. There is no reason to think that such a limit ratio is a function of m/N. It may depend on m and N but not only on the ratio. Author makes the assumption is true and since only one value of N is used there is no way to know if it is correct or not..."
but I absolutely do not agree with what follows:
"... Even 2 or 3 values of N were tried, it would be not sufficient to guarantee the correctness of this strong assumption..."
EURASIP Journal on Advances in Signal Processing is not a math journal. Yes, we need to get rid of any non-asymptotic behavior, but this has to be done within the context of reasonable values of N, especially for a signal processing journal. In the end, even the computations of Donoho et al. rely on finite values of N.
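The reviewer's finite-N concern is, incidentally, cheap to probe empirically: run the same experiment at N and 2N with the ratios m/N and s/m held fixed and compare the empirical success rates. A minimal numpy sketch, using a plain OMP loop as a stand-in solver (the sizes, trial counts, and function name are illustrative assumptions, not anything from the rejected paper):

```python
import numpy as np

def omp_success_rate(N, m, s, trials=20, seed=0):
    """Fraction of trials in which a plain OMP loop recovers the exact
    support of an s-sparse vector from m Gaussian measurements in R^N."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        A = rng.standard_normal((m, N))
        x0 = np.zeros(N)
        x0[rng.choice(N, s, replace=False)] = rng.standard_normal(s)
        b = A @ x0
        r, support = b.copy(), []
        for _ in range(s):                        # greedy OMP iterations
            j = int(np.argmax(np.abs(A.T @ r)))
            if j not in support:
                support.append(j)
            coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
            r = b - A[:, support] @ coef
        hits += sorted(support) == sorted(np.flatnonzero(x0))
    return hits / trials

# same ratios m/N = 0.5 and s/m = 0.1, two problem sizes
small = omp_success_rate(N=40, m=20, s=2)
large = omp_success_rate(N=80, m=40, s=4)
```

If the rates at matched ratios agree to within sampling error across a few doublings of N, the ratio-only assumption is at least empirically plausible in the regime a signal processing reader actually cares about, which is all a non-math journal can reasonably demand.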
- The comments of reviewer 4 for the third paper elicit the same feedback as those of reviewer 2 for the first paper. When writing
"...The conclusion is not significant in the sense that it has been a common sense in the literature..."
a competent reviewer ought to point to a specific instance of this "common sense in the literature." Good luck: there is none.
I think we can all deal competently with the noise in a post-publication peer review process and do away with this closed process. What are we waiting for?
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.