"Give me your tired, your poor, Your huddled masses yearning to breathe free " or so says the poet, it could be applied to all kinds of papers, even the ones you wrote and did not like.. Even though this is the only game in town, most people take a dim view of the filters enabled by the peer-review process. In fact we all really have a problem with pre-peer-review, i.e. have some potentially few incompetent persons decide the fate of a potentially revolutionary idea or algorithm that works. The best example of that I have, is the treatment given to SL0. An algorithm that has been undefeated in the noiseless sparse recovery case since 2006 until a few weeks ago.
The point of that figure is not that the SL0 paper got published in 2007 (a full year after being rejected in 2006): it is that post-peer-reviewer Bob Sturm shows that most algorithms devised after 2007 still could not match SL0's performance. When I mentioned this issue in Blowing Up the Peer-Review Bubble, I noted that
"I am also surprised that some papers do not start a bidding war between publishers after having landed on arxiv as preprints."
It turns out some of you have already been hit by this. I think there is a good business model there, and one I would be interested in investigating.
In a post-peer-review process, having an open-access journal and a code implementation from the authors is a must. In effect, reproducible research becomes an essential side effect of the post-peer-review process, not a "nice to have" feature.
Dealing with retractions is currently a very ad hoc process, and an uneasy one for publishers. How do you flag papers that have been retracted, like the literally hundreds of papers in this case? How do you allow people who have referenced these retracted papers to remove the stain-by-proxy on their own papers? How do you give credit to people like Keith Baggerly and Kevin Coombes, who spent a non-negligible amount of time checking what turned out to be a faulty clinical trial? They did a post peer review, and the only thing they got out of it is a paper? That's a little short if you ask me.
Hi Igor. Note that SL0 does not perform very well for sparse signals with Rademacher-distributed entries. In that case, all approaches are beaten hands down by l1 minimization and AMP ... which is curious, since SL0 is taking a majorization of the l0 norm. I wonder what would happen if we did "SL1"?
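For readers who want to poke at Bob's observation themselves, here is a minimal sketch of the smoothed-l0 idea: replace the l0 count with a Gaussian surrogate, take gradient steps on it for a decreasing sequence of sigma, and project back onto the measurement constraint after each step. The parameter values (sigma_decrease, mu_0, L) and the problem sizes are illustrative assumptions, not the tuned settings from the SL0 paper.

```python
import numpy as np

def sl0(A, x, sigma_min=1e-4, sigma_decrease=0.7, mu_0=2.0, L=3):
    """Minimal smoothed-l0 sketch: maximize sum(exp(-s_i^2 / (2*sigma^2)))
    subject to A s = x, over a decreasing sequence of sigma."""
    A_pinv = np.linalg.pinv(A)
    s = A_pinv @ x                       # start from the minimum-l2-norm solution
    sigma = 2.0 * np.max(np.abs(s))      # large initial sigma: smooth surrogate
    while sigma > sigma_min:
        for _ in range(L):
            delta = s * np.exp(-s**2 / (2.0 * sigma**2))
            s = s - mu_0 * delta                  # ascent step on the smoothed measure
            s = s - A_pinv @ (A @ s - x)          # project back onto {s : A s = x}
        sigma *= sigma_decrease
    return s

# Toy Rademacher-sparse problem (sizes are arbitrary illustrative choices)
rng = np.random.default_rng(0)
m, n, k = 40, 100, 12
A = rng.standard_normal((m, n)) / np.sqrt(m)
s_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
s_true[support] = rng.choice([-1.0, 1.0], size=k)  # constant-magnitude entries
x = A @ s_true

s_hat = sl0(A, x)
print("relative error:", np.linalg.norm(s_hat - s_true) / np.linalg.norm(s_true))
```

Varying k/m and comparing against an l1 solver on the same constant-magnitude coefficients is the experiment behind the behavior Bob describes.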
SL0 is too sensitive to noise. This limits its applicability in many fields, like gene expression, network mining, sparse representation of natural signals (biosignals, etc.), DOA estimation, ...
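One way to see why: the projection step in the sketch above enforces A s = x exactly, so any measurement noise is forced into the estimate rather than averaged out. A hedged illustration, reusing sl0 and the toy problem from the previous snippet (the noise level 0.01 is an arbitrary choice):

```python
# Noisy measurements: the exact-equality projection forces A @ s_hat == x_noisy,
# so the perturbation is absorbed into s_hat instead of being rejected.
x_noisy = x + 0.01 * rng.standard_normal(m)
s_hat_noisy = sl0(A, x_noisy)
print("relative error (noisy):",
      np.linalg.norm(s_hat_noisy - s_true) / np.linalg.norm(s_true))
```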
We should not over-praise algorithms that behave better only in noiseless scenarios. After all, sparsity-based signal processing/machine learning covers many fields, and over-praising such an algorithm may mislead readers with various backgrounds/applications.
Yes, I agree with both of you (Bob and Anonymous, and even a third person who wrote by email), but what you are saying is exactly what a post peer review would provide: context.
In the current process (i.e. the pre-filter), the review simply does not provide this essential element for understanding. How is a yea-or-nay binary process serving Science?
"I am also surprised that some papers do not start a bidding war between publishers after having landed on arxiv as preprints."
Neat idea, but most journals demand serial monogamy. This may change with the huge number of (transient?) new online journals competing for papers. But we're in a real quandary, because few journals can guarantee archiving in perpetuity. That's a job for libraries, but their function in a paperless world now seems to be nothing but a gateway to online material requiring paid subscriptions. As no one archives the whole Internet, the retention of our scientific output is in great flux. Be careful what you wish for.