Saturday, April 26, 2014

What Could Possibly Go Wrong? Peer Review, Trolls, Data Breaches

Andrew just wrote on his blog about my being trolled a while ago. Go read it, it is here: Sleazy sock puppet can’t stop spamming our discussion of compressed sensing and promoting the work of Xiteng Liu.

While re-reading that fateful blog entry and the attendant comments, I could not help thinking about another recent discussion currently going on at Stack Overflow: Why is Stack Overflow so negative of late? In short, good constructive discussions can be polluted very fast by very few, and that can essentially destroy the goodwill of the rest of the community.

This led me to ponder the current pre-publication peer-review system. We all agree there should be some peer review, but why constrain it to a one-shot review that might include uneducated or, worse, dishonest people? Sure, we all think of ourselves as fair and knowledgeable when we agree to do a review. However, since most pre-publication peer review is anonymous, anybody can become your 'peer'. It's not just a figment of my imagination. For some editors, it might be sufficient to give a talk on compressive sensing at a meeting to become a de facto specialist in the field. What could possibly go wrong with that? Here is what could go wrong: check the S10 session of this conference. Sooner or later, the system becomes less robust because of a few.

The other problem I have with the anonymity of the current pre-publication peer-review system is that many people participating in it are deliberately giving away information about themselves (and the reviews they gave) to for-profit entities that have no compelling interest in protecting it. What could possibly go wrong? It's not as if data breaches are that common anyway. Remember folks, it's not a question of 'if' but a question of 'when'.

3 comments:

James Prichard said...

"some editors, it might be sufficient to give a talk in compressive sensing at a meeting to become a de-facto specialist of the field. What could possibly go wrong with that ?"

Those talks appear aimed at the Medical profession explaining applications of the general technology.

Is your point that such a talk would "count" for a hypothetical journal editor to be evidence of expertise in developing new non-convex optimisation algorithms to apply to important problems in compressed sensing?

Peer review pre-publication protects the reputation of a journal (if it has one) and done well may improve the exposition of ideas. But the worth of a particular paper can't be judged until long after it is published.

Are you really suggesting the current system prevents good papers getting published due to dogmatic reviews or personal vendettas?

Igor said...

Jay,

1- Irrespective of the audience, the author in question should not be in a session on Compressed Sensing. That person has shown time and again a total misunderstanding of what compressive sensing entails.

2- Most non-convex algorithms rely largely on heuristics, and to a certain extent it is still the Wild West, so yes, I can definitely believe an editor could ask this type of person about some algorithm that has very little theoretical justification behind it (let me add that quite a few of those algorithms are capable of better results than more theoretically grounded convex optimization algorithms).

I have been asked to review papers even though I have never published in that area. I have accepted only one such review, because it was an open review, i.e., when the paper is out, my review will be added to it with my name.

3- Yes. The current system prevents good papers from being published and noticed for the two reasons you mentioned. Please note I did not say those were the only reasons, nor did I say they were the most prominent. There are many other good reasons pre-publication peer review is bad. One of them is the near impossibility for current journals to deal with bad papers after publication. Another is the inability of certain editors to go beyond their own filter bubbles.

Igor.

kvm said...

"but why constrain it to a one shot review that might include uneducated or, worse, dishonest people? "

True. What about cases where the editor indeed sends the paper to an educated reviewer, say a grad adviser, but the review ends up being delegated to the adviser's not-so-well-informed students? Sometimes the authors can spot such ghost reviews and point out to the editor that the reviewer was not well informed. But often a negative review from an ignorant reviewer ends up wasting everyone's time and resources.
