Following up on To Serve Science, we all have egregious examples of how badly peer review can go. Here is an instance of obvious innovation stifling. A while back I mentioned that Tianyi Zhou would be releasing an implementation of the 1-bit Hamming Compressed Sensing algorithm described in Hamming Compressed Sensing by him and Dacheng Tao. It's been a while now, so I asked him about the release. Here is what Tianyi kindly responded (he OK'ed making this e-mail public):
Hi Igor,
...I am afraid the release of the Hamming Compressed Sensing code will be delayed.
I had already prepared the MATLAB code and the implementation was ready. We believed the Hamming Compressed Sensing paper would be accepted at a conference, and we planned to release the code once the paper was accepted. Unfortunately, it was rejected in the end.
The reasons for the rejection are somewhat disappointing: 2 out of the 3 reviewers understood the main ideas of the paper and both believed the work is interesting, novel, quite practical and of clear interest for digital systems. They suggested the clarity of certain parts of the paper could be improved. The remaining reviewer could not accept that each column of the measurement matrix is uniformly drawn from a unit sphere, did not think the unit L2 norm constraint on the signal is reasonable, and thus gave [us a] reject. Although we did our best to explain to the area chair and the 3rd reviewer that the unit L2 norm constraint and the random vectors on the unit sphere are necessary for 1-bit measurements, the paper was ultimately rejected with an average score of 5.4/10...
I am looking at the meta question, as I have absolutely no stake here besides that of the public interest. In these computational experiments, how can a reviewer provide any useful input to the judging process if they do not have an implementation in hand? Clearly, any competent reviewer worth his salt would be capable of changing the distribution from which the elements of the measurement matrix are drawn and figuring out whether or not it works. If the criticism were that the distribution is not general enough, then I could understand the point, but the reviewer seems to insist on a point that is not true and that he could not possibly check. By blocking a potentially valuable algorithm, how is this pre-publication peer review process helping Science? Just asking.
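To make the point concrete, here is a minimal Python sketch of that kind of check. This is emphatically not Tianyi's unreleased MATLAB implementation: the choice to put the measurement vectors (rather than the columns) on the unit sphere, and the dimensions and sparsity below, are my own illustrative assumptions. It shows two things: swapping the distribution of the measurement matrix is a one-line change any reviewer could try, and positive rescaling of the signal never changes a single 1-bit measurement, which is exactly why some unit L2 norm convention on the signal is needed.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 128, 512   # signal dimension, number of 1-bit measurements (illustrative sizes)

def one_bit_measurements(A, x):
    # 1-bit (sign) measurements sign(<a_i, x>) for the rows a_i of A
    return np.sign(A @ x)

# Measurement vectors drawn uniformly from the unit sphere of R^n,
# obtained by normalizing i.i.d. Gaussian vectors.
A_sphere = rng.standard_normal((m, n))
A_sphere /= np.linalg.norm(A_sphere, axis=1, keepdims=True)

# An alternative distribution a reviewer might want to try: plain i.i.d. Gaussian rows.
A_gauss = rng.standard_normal((m, n))

# A sparse test signal, rescaled to unit L2 norm.
x = np.zeros(n)
x[rng.choice(n, size=8, replace=False)] = rng.standard_normal(8)
x /= np.linalg.norm(x)

for A in (A_sphere, A_gauss):
    y = one_bit_measurements(A, x)
    # Positive rescaling of the signal leaves every sign measurement unchanged,
    # so the amplitude is unrecoverable from 1-bit data and a unit-norm
    # convention on x is a normalization, not a restriction.
    for c in (0.1, 3.0, 42.0):
        assert np.array_equal(one_bit_measurements(A, c * x), y)

print("1-bit measurements unchanged under positive rescaling, for both distributions")
```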
4 comments:
Aside from the issue of the fairness of the review (the reviewer's comment does sound strange), I am not sure how that would prevent the release of the code. Since the paper is already on arXiv, releasing code that performs as advertised can only help the paper get accepted at a journal (at the very least, it cannot hurt).
On review fairness, conferences are often hit or miss (and in the very selective ones it really matters whether you are part of an accepted clique). That's why, in journals, you can argue your case in a rebuttal.
Tianyi tells me that they have decided not to release an implementation until the paper is accepted at some conference or journal. I can definitely see why they would feel that way.
Igor:
Have you read "Reinventing Discovery: The New Era of Networked Science" by Michael Nielsen?
It discusses many of the issues you have mentioned before regarding the way people do science today.
If you haven't looked at it, I recommend you read it. You will probably enjoy it.
I cannot understand why the unit L2 norm constraint on the signal is a problem. In my opinion, it is no different from the general assumption you have to make when dealing with quantization: you must know the input distribution (e.g., assume it is zero-mean Gaussian with known variance) to design the quantizer, or at the very least to scale your input signal optimally. This is not usually frowned upon for higher-rate quantization in the literature.
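A small Python sketch to illustrate that analogy numerically (the 16-level uniform quantizer, step size, and variances below are my own illustrative choices, not anything from the Hamming Compressed Sensing paper): a quantizer designed for a unit-variance input does fine at the design scale and falls apart when the input scale is unknown, which is the same kind of scale knowledge the unit L2 norm constraint provides in the 1-bit setting.

```python
import numpy as np

rng = np.random.default_rng(1)

def uniform_quantize(x, step, levels=16):
    # Uniform mid-rise quantizer: 'levels' cells of width 'step', centered on zero.
    half = levels // 2
    idx = np.clip(np.floor(x / step), -half, half - 1)
    return (idx + 0.5) * step

def sqnr_db(x, xq):
    # Signal-to-quantization-noise ratio in dB.
    return 10 * np.log10(np.mean(x ** 2) / np.mean((x - xq) ** 2))

n = 100_000
step = 4.0 / 16  # step size chosen for a zero-mean, unit-variance input (covers roughly +/- 2 sigma)

x_matched = rng.standard_normal(n)            # variance 1: matches the design assumption
x_mismatched = 10.0 * rng.standard_normal(n)  # variance 100: scale unknown to the designer

print("SQNR, matched scale   : %.1f dB" % sqnr_db(x_matched, uniform_quantize(x_matched, step)))
print("SQNR, mismatched scale: %.1f dB" % sqnr_db(x_mismatched, uniform_quantize(x_mismatched, step)))
```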