Negative reviews should be the exception, not the norm. Why? Take, for example, MIT's recent press release on compressive sensing ("Toward practical compressed sensing"). In particular, I am a little annoyed by this statement:
"...But it’s been slow to catch on commercially, in part because of a general skepticism that sophisticated math ever works as well in practice as it does in theory...."
Sophisticated math? No. The real issue? Compressive sensing is simply not a mature field, and if anything, the math is probably not sophisticated enough.
- How many people still ask questions about the RIP as if it were an important consideration?
- How many people still describe compressive sensing as an inpainting scheme?
- How many people don't realize that the single-pixel camera can be made to work in a raster mode?
Too many, if you ask me. I have even heard a story where a PI had to go the extra mile just to convince a postdoc to try compressive sensing. In the end, based solely on reading papers, the postdoc was adamantly convinced that a Tikhonov reconstruction was the best thing on earth. Here is an instance of another negative review, described by Pierre Vandergheynst in a post on his Google+ stream, that had a real impact on a potentially useful project:
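For readers who have not seen the contrast for themselves, here is a minimal sketch of why a Tikhonov-style reconstruction is not "the best thing on earth" for sparse signals. It compares the minimum-ℓ2-norm solution (the λ→0 limit of Tikhonov regularization) with ℓ1 minimization (basis pursuit, cast as a linear program) on a toy underdetermined problem; the dimensions, the Gaussian sensing matrix, and the use of scipy's `linprog` are illustrative choices, not anyone's actual setup.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 100, 50, 5                # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian sensing matrix
y = A @ x                            # m < n: underdetermined measurements

# Tikhonov-style answer: minimum-l2-norm solution x = A^T (A A^T)^{-1} y.
# It spreads energy over all coordinates and misses the sparse support.
x_l2 = A.T @ np.linalg.solve(A @ A.T, y)

# Basis pursuit: min ||x||_1 s.t. A x = y, as a linear program over
# z = [x; u] with |x_i| <= u_i enforced by two inequality blocks.
c = np.concatenate([np.zeros(n), np.ones(n)])
I = np.eye(n)
A_ub = np.block([[I, -I], [-I, -I]])     #  x - u <= 0  and  -x - u <= 0
b_ub = np.zeros(2 * n)
A_eq = np.hstack([A, np.zeros((m, n))])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * n + [(0, None)] * n)
x_l1 = res.x[:n]

print("l2 reconstruction error:", np.linalg.norm(x_l2 - x))
print("l1 reconstruction error:", np.linalg.norm(x_l1 - x))
```

On this toy instance the ℓ1 solution recovers the sparse signal essentially exactly, while the ℓ2 solution lands far from it; that gap is the whole point of compressive sensing, and it is exactly what a paper-only reading can fail to convey.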
"....Two years ago, I submitted a project to the Swiss SNF proposing to extend our early work on compressed sensing for sensing ECG for low power applications. The application was rejected based on a review that cited a paper, from MIT, where authors had "... mathematically proved that CS would never be a good alternative for low power applications... " of the type we were considering. Now another MIT paper proves just the opposite. The original paper was quite theoretical and making very strong assumptions on what hardware does. It was unrealistic but it's OK. The paper was studying asymptotic regimes. But in order to have a good impact I suppose it was making strong claims, and it was not backed up by any experiment. This paper also makes hypotheses and it is also OK. I'll tell you what is not OK. What is not OK is when reviewers scan papers, grant proposals at the level of PR and then singlehandedly destroy 6 or 9 months of serious work. In the end, our project was delayed. We filed and obtained a rather large EU grant, but had to make a lot of compromises. We will never explore low power CS for ECG in depth the way we proposed in the original grant unfortunately. Because it was "mathematically proved" to be a bad idea, before it was proved to be a good one .... ..."
I think he may be referring to a specific result, which might be part of that book.
The takeaway from this? Published negative reviews provide a great mechanism for weak-minded gatekeepers, be they your own postdocs or your average grant reviewers. Publish one at your own peril. Tomorrow, we'll feature a new paper that will bring down all the mental barriers that keep us from the only thing we should be doing: exploration. There, you will be able to read:
"...However, since the RIP-based techniques are well established, it is worth posing the following question: is the RIP relevant for imaging problems? ... It is our belief that the answer to this is no... In view of this, the third conclusion of our work is that the RIP is of limited value in analysing compressive imaging strategies."
Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.