Thursday, November 24, 2011

The wrath of our discontent

On top of today's activities, here is some more food for thought. To Serve Science points to the slowness of the peer review and publication process as currently implemented. Some of its undesirable side effects directly affect the quality of our future work. In short, the process has become unfair to us and to science as a whole.

Putting Stack Overflow on top of ArXiv would seem "easy" to do, especially given the availability of Stack Overflow clones and ArXiv's willingness to let others use its metadata. The tagging of papers, and therefore of communities, would be enhanced by the tags/keywords provided by ArXiv. Based on my small experience with Nuit Blanche, the Stack Overflow functionality would have to be changed. In particular, let me think aloud about some questions and important details:

  • Starting the process: the system automatically uploads the ArXiv metadata to the Q&A site, and every entry on that site becomes the question: "Is this paper worthy?"
  • In Q&A sites, the author of a question can promote one of the comments to be the selected "Answer". One cannot do that here, since the uploading is automatic, i.e. the owner of the question is not a person. Thinking along the lines of Doron, the answer would be "Published in such and such journal". This information would be provided by the authors, with the attendant BibTeX and other niceties. In effect, the judgement of a publication's editor would be informed by how the discussion went and by the "points" of the different commenters.
  • Some reviewers want to remain anonymous, yet still want to get credit for providing useful input. Each person's standing in the point system should depend on whether they provided useful feedback (as measured by the number of points they have gathered). I am not sure how to handle that part.
  • Editors of publications want to identify who is good in a certain area so that they can trust a reviewer's judgement. The point system could be a good indicator.
  • People can only provide comments (i.e. reviews). They can also upvote others' comments, but should they have to provide a meaningful reason for a downvote?
  • Should everybody be anonymous, yet still be able to use their points to demonstrate expertise?
  • Who pays for this? Reviewers? Authors? Wikipedia-style donations? University libraries? Publishers? (One way would be for publishers to pay for some indication of the quality of the reviews, such as reviewers' points in certain areas.)
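The ingestion step described above could be sketched in a few lines. The arXiv API does expose paper metadata as Atom feeds with `category` terms, but the feed sample, field names, and `feed_to_questions` function below are my own illustrative assumptions, not part of any existing system:

```python
import xml.etree.ElementTree as ET

# Hypothetical sample of an arXiv Atom feed entry; the paper shown
# here is made up for illustration.
SAMPLE_FEED = """<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <entry>
    <id>http://arxiv.org/abs/1101.0001v1</id>
    <title>An Example Paper on Sparse Recovery</title>
    <author><name>A. Author</name></author>
    <author><name>B. Author</name></author>
    <category term="cs.IT"/>
    <category term="math.OC"/>
  </entry>
</feed>"""

ATOM = "{http://www.w3.org/2005/Atom}"

def feed_to_questions(feed_xml):
    """Turn each Atom entry into an automatically posted Q&A question."""
    root = ET.fromstring(feed_xml)
    questions = []
    for entry in root.findall(ATOM + "entry"):
        questions.append({
            # Every uploaded paper becomes the same question.
            "question": "Is this paper worthy?",
            "paper_id": entry.find(ATOM + "id").text,
            "title": entry.find(ATOM + "title").text,
            "authors": [a.find(ATOM + "name").text
                        for a in entry.findall(ATOM + "author")],
            # The arXiv category terms become the question's tags,
            # which is what would delineate the communities.
            "tags": [c.get("term")
                     for c in entry.findall(ATOM + "category")],
        })
    return questions

qs = feed_to_questions(SAMPLE_FEED)
print(qs[0]["tags"])  # prints ['cs.IT', 'math.OC']
```

The point is that the tagging comes for free: the communities on the Q&A side would simply inherit ArXiv's own classification.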

With a system like this one, the question is "Is this paper worthy?" and the answer is "It is worthy of such and such journal." We still need to address the "details" of the anonymization process and the point system.

Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.


Thomas Arildsen said...

I think the service should be partitioned into different theoretical areas, much like ArXiv but possibly with a more fine-grained selection of areas. When users vote reviews up or down, the reviewer's credit should only be affected in the theoretical area of the reviewed article. This way, reviewers can have different ratings in different areas. This would help assess in which area(s) a reviewer can be relied upon, and could encourage reviewers to try contributing in areas that are not necessarily their expertise without destroying their credibility in areas where they already have a good rating.
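The per-area reputation idea above could be sketched as follows. The class, the area names, and the point values are illustrative assumptions of mine, not an existing implementation:

```python
from collections import defaultdict

# Illustrative point values; a real system would tune these.
UPVOTE_POINTS = 10
DOWNVOTE_POINTS = -2

class Reviewer:
    """Tracks a reviewer's reputation separately per theoretical area,
    so a rating in, say, cs.IT is unaffected by activity in math.OC."""

    def __init__(self, name):
        self.name = name
        self.points = defaultdict(int)  # area -> accumulated points

    def record_vote(self, article_area, upvote=True):
        # Only the area of the reviewed article is affected.
        delta = UPVOTE_POINTS if upvote else DOWNVOTE_POINTS
        self.points[article_area] += delta

    def rating(self, area):
        return self.points[area]

r = Reviewer("anon-reviewer-42")
r.record_vote("cs.IT", upvote=True)
r.record_vote("cs.IT", upvote=True)
r.record_vote("math.OC", upvote=False)  # a misstep outside their expertise
print(r.rating("cs.IT"), r.rating("math.OC"))  # prints 20 -2
```

Because the dictionary is keyed by area, the downvote in math.OC leaves the strong cs.IT rating untouched, which is exactly the property being argued for.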

Igor said...

This is very interesting.
Igor said...

Also, as Petros said, the system should probably leverage the recent Google citation service and the Microsoft equivalent. At the very least, some people should get points for having published in a specific field, so we ought to use this information to refine the point system.