Following up on To Serve Science, we all have egregious examples of how badly peer review can go. Here is an instance of obvious innovation stifling. A while back I mentioned that Tianyi Zhou would be releasing an implementation of the 1-bit Hamming Compressed Sensing algorithm described in Hamming Compressed Sensing by him and Dacheng Tao. It's been a while now, so I asked him about the release. Here is what Tianyi kindly responded (he OK'ed making this e-mail public):
......I am afraid the release of the Hamming Compressed Sensing code will be delayed.
I had already prepared the MATLAB code and the implementation was ready. We believed the Hamming Compressed Sensing paper would be accepted by a conference then, and planned to publish the code when the paper was accepted. Unfortunately, it was rejected in the end.
The reasons for rejection are somewhat disappointing: 2 out of the 3 reviewers understood the main ideas of the paper, and both believe the work is interesting, novel, quite practical, and of clear interest in digital systems. They suggest the clarity of certain parts of the paper could be improved. The remaining reviewer cannot accept that each column of the measurement matrix is uniformly drawn from a unit sphere, and does not think the unit L2 norm constraint on the signal is reasonable, and thus gave [us a] reject. Although we made our best efforts to let the area chair and the 3rd reviewer know that the unit L2 norm constraint and the random vectors on the unit sphere are necessary for 1-bit measurements, it was finally rejected with an average score of 5.4/10.....
I am looking at the meta question, as I have absolutely no stake here besides that of the public interest. In these computational experiments, how can a reviewer provide any useful input to the judging process without an implementation in hand? Any competent reviewer worth their salt could change the distribution from which the elements of the measurement matrix are drawn and figure out whether or not it works. If the criticism were that the distribution is not general enough, I could understand the point, but the reviewer seems to insist on a point that is not true and that he could not possibly check. By blocking a potentially valuable algorithm, how is this pre-publication peer review process helping Science? Just asking.
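To see how little effort the reviewer's objection would take to test, here is a minimal NumPy sketch of the 1-bit measurement model at issue (columns of the measurement matrix uniform on the unit sphere, signal constrained to unit L2 norm), with the distribution swap a skeptical reviewer could try shown as a one-line change. The variable names are illustrative and not taken from the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 128, 512  # signal dimension, number of 1-bit measurements

# Measurement matrix: each column drawn uniformly from the unit sphere,
# obtained by normalizing i.i.d. Gaussian columns to unit L2 norm.
A = rng.standard_normal((n, m))
A /= np.linalg.norm(A, axis=0)

# A sparse signal with unit L2 norm (the constraint the reviewer objected to).
x = np.zeros(n)
support = rng.choice(n, size=8, replace=False)
x[support] = rng.standard_normal(8)
x /= np.linalg.norm(x)

# 1-bit measurements: only the sign of each projection is kept.
y = np.sign(A.T @ x)

# The distribution swap a reviewer could test in one line:
# e.g., Rademacher (+/-1) columns scaled to unit norm.
B = rng.choice([-1.0, 1.0], size=(n, m)) / np.sqrt(n)
y_alt = np.sign(B.T @ x)
```

Feeding either `y` or `y_alt` into a recovery routine and comparing the reconstructions is exactly the kind of sanity check the review process apparently never performed.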