
Sunday, November 27, 2011

Tim Gowers' Model of Mathematical Publishing

Peter Krautzberger at Mathblogging provides a timeline of the recent flurry of blog entries on the issue of publication. It looks like my proposal of stacking a StackExchange clone on top of the arXiv is exactly the proposition Tim Gowers made ten days earlier; I had not read Tim's entry until yesterday. Judging from the feedback on his blog and, to a lesser extent, here, the proposal has struck a chord, and not just in the mathematics community. The new aspect for the applied math/engineering community is the idea that reproducible research would be enforced by default in this type of system. No small feat. Here are the main points of Tim's proposal:


After that discussion, let me collect together what I see as the main features of this hypothetical paper-evaluation site.
  1. You post your papers on the arXiv and create pages for them on the site.
  2. People can respond to your papers. Responses can range from smallish comments to detailed descriptions and evaluations (the latter being quite similar to referees’ reports as they are now).
  3. Responses can be written under your own name or under a pseudonym.
  4. You can accrue reputation as a result of responses of either kind, but your pseudonym will have the reputation disguised enough to maintain your anonymity.
  5. Negative language is strongly discouraged. If a paper is uninteresting, it simply doesn’t attract much interest. If it is incorrect, one says so politely.
  6. There is a reputation premium for evaluating papers that have spent a long time not evaluated. (There would be a way of finding these: for instance, you could list all the unreviewed papers in a certain area or subarea in chronological order.)
  7. If you are not registered for the site, or if you are registered but have very few reputation points, then people know that you are not doing your bit for the mathematical community when it comes to the important task of evaluating the output of others. Conversely, if you have a high reputation, then people know that you are pulling your weight.
I think there is something also to be said about the versioning of papers.
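To make the mechanics of points 1 through 7 a little more concrete, here is a minimal sketch, in Python, of the data model such a site implies. Everything in it (the class names, the reputation bands, the staleness bonus) is my own guess at one possible design, not anything Tim specified:

    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class User:
        name: str
        reputation: int = 0

        def public_reputation(self, pseudonymous: bool) -> str:
            # Point 4: a pseudonym only exposes a coarse reputation band,
            # enough to signal standing without unmasking the author.
            if not pseudonymous:
                return str(self.reputation)
            for floor, label in [(1000, "high"), (100, "established")]:
                if self.reputation >= floor:
                    return label
            return "new"

    @dataclass
    class Response:
        author: User
        text: str
        pseudonymous: bool = False  # Point 3: real name or pseudonym

    @dataclass
    class Paper:
        arxiv_id: str  # Point 1: the paper itself lives on the arXiv
        area: str
        posted: datetime
        responses: list = field(default_factory=list)  # Point 2

    def evaluation_bonus(paper: Paper, now: datetime, base: int = 10) -> int:
        # Point 6: a reputation premium for evaluating papers that have
        # sat unreviewed; here the bonus grows with the backlog in months.
        if paper.responses:
            return base
        return base + (now - paper.posted).days // 30

    def unreviewed(papers, area):
        # Point 6's discovery mechanism: the unevaluated papers in an
        # area, in chronological order.
        return sorted((p for p in papers if p.area == area and not p.responses),
                      key=lambda p: p.posted)

The coarse band in public_reputation is the design choice that lets points 3, 4 and 7 coexist: a pseudonym can carry credibility without carrying an identity.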



3 comments:

  1. I'm actually quite unhappy with the centralized aspects and the tendency to replace one monoculture with another.

    The Accidental Mathematician pointed out a lot of interesting problems. I think we need to diversify both what kind of work we consider useful (refereeing, exposition, community service) and how we make that work public. The more systems there are, the smaller the chance of gaming all of them.

    For the paper problem: I think an aggregation platform would be more flexible; researchblogging.org is an interesting project to consider.

  2. My point is really that peer review as currently implemented just does not make sense. I don't really care that much about publication. I take the view of Claerbout, via Donoho, that papers are really "advertising of the scholarship" [1], not the scholarship itself.

    The issue at the heart of this series of entries is really whether any of these papers have merit beyond the judgment of the one or two reviewers, necessarily not specialists, used by different journals. To take the example of SL0, I have shown that the solver could have been fitted to noisy cases, but wasn't, because the first paper on it could not be published in a timely fashion.

    SL0 is a beautiful case because, starting in 2008, it should have been taken as the gold standard against which new solvers were tried.

    An open peer review process would have made sure that this one solver gathered a whole lot more momentum, and modifications would have taken it beyond its initial capabilities.


    I realize that I am probably taking this one example too far, but my point is that, at the very least in compressive sensing, nobody has a clear view of what works well and what doesn't work at all.

    Evidently, if a system were to be put on top of the arXiv, it could eventually take all kinds of input, starting with blogs and other streams of information. Hence I don't worry much about aggregation or centralization; rather, I feel we can't spend the next five years studying another failing greedy algorithm just because it has a new name.

    Beyond compressive sensing, just take a look at the part of machine learning focused on certain matrix factorizations, and at the number of sub-algorithms taking on a new name to implement a new NMF.

    An open peer review system would change that and the landscape of research at the same time.


    Igor.



    [1] http://www-stat.stanford.edu/~donoho/Reports/2008/15YrsReproResch-20080426.pdf

  3. I agree! (well, with the parts I understand -- the examples are a tiny bit outside my research area ;))

    As you can guess, I'm a big fan of blogs or, more generally speaking, of professional homepages and decentralized social networking tools. If every researcher wrote a blog post on each preprint, allowing trackbacks and pingbacks, then we would already be halfway there.

    Would it be silly to simply start a specialized feed aggregator? Rather than waiting for a platform to arise and then convincing people to use it, this would allow us to get started right now with tools that many people are already using.

    researchblogging.org already has good basic functionality to look to. It needs more flexibility (i.e., allowing preprints, blog posts, talks, etc.) and more features (trackbacks, discussions on blogs, and more metadata like a "list of corrections"). But it would be a start. A rough sketch of such an aggregator follows.
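
    For what it's worth, such an aggregator is nearly a weekend project. Here is a rough sketch using the third-party feedparser library: it polls a list of blog feeds and indexes posts by the arXiv identifiers they mention. The feed URLs are placeholders, and a real service would of course need persistence, trackback handling, and deduplication:

        import re
        from collections import defaultdict

        import feedparser  # third-party: pip install feedparser

        # Placeholder feeds; a real aggregator would let researchers
        # register their own.
        FEEDS = [
            "https://example.org/math-blog/feed.xml",
            "https://example.org/cs-blog/rss",
        ]

        # Matches new-style (1234.5678) and old-style (math.NT/0611800)
        # arXiv identifiers.
        ARXIV_ID = re.compile(r"\b(\d{4}\.\d{4,5}|[a-z-]+(?:\.[A-Z]{2})?/\d{7})\b")

        def aggregate(feeds):
            """Map each arXiv id to the blog posts that mention it."""
            index = defaultdict(list)
            for url in feeds:
                for entry in feedparser.parse(url).entries:
                    text = entry.get("title", "") + " " + entry.get("summary", "")
                    for arxiv_id in set(ARXIV_ID.findall(text)):
                        index[arxiv_id].append(entry.get("link", ""))
            return index

        if __name__ == "__main__":
            for arxiv_id, links in aggregate(FEEDS).items():
                print(arxiv_id, "->", ", ".join(links))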
