List's column describes an experiment in which he compared conventional pre-publication reviewing with what he and Höfler call "selected crowd-sourced peer review" of the same papers:
I am not proposing what is sometimes referred to as crowdsourced reviewing, in which anyone can comment on an openly posted manuscript. I believe that anonymous feedback is more candid, and that confidential submissions give authors space to decide how to revise and publish their work. I envisioned instead a protected platform whereby many expert reviewers could read and comment on submissions, as well as on fellow reviewers’ comments. This, I reasoned, would lead to faster, more-informed editorial decisions.

The experiment worked like this. They:
recruited just over 100 highly qualified referees, mostly suggested by our editorial board. We worked with an IT start-up company to create a closed online forum and sought authors’ permission to have their submissions assessed in this way. Conventional peer reviewers evaluated the same manuscripts in parallel. After an editorial decision was made, authors received reports both from the crowd discussion and from the conventional reviewers. ... we put up two manuscripts simultaneously and gave the crowd 72 hours to respond.

The results were encouraging:
Each paper received dozens of comments that our editors considered informative. Taken together, responses from the crowd showed at least as much attention to fine details, including supporting information outside the main article, as did those from conventional reviewers. ... So far, we have tried crowd reviewing with ten manuscripts. In all cases, the response was more than enough to enable a fair and rapid editorial decision. Compared with our control experiments, we found that the crowd was much faster (days versus months), and collectively provided more-comprehensive feedback.

The authors liked the new process. They plan to switch their journal to it, tweaking it as they gain experience.
As I've been saying since the first post to this blog more than a decade ago, conventional pre-publication review is long overdue for a revolution. Chris Lee's Ars Technica piece is well worth reading. He describes List and Höfler's experiment in the context of a broader discussion of the problems of conventional pre-publication peer review:
The utter randomness of peer review is frustrating for everyone. Papers get delayed, editors get frustrated, the responsible reviewers get overloaded. Even when everyone is trying their best, any set of three reviewers can disagree so wholeheartedly about your work that the editor has to actually think about a decision—something no editor ever wants to be faced with.

But, more interestingly, Lee looks at the peer-review process from a signal-processing viewpoint:
I'd suggest that there is a physical analog to traditional peer review, called noise. Noise is not just a constant background that must be overcome. Noise is also generated by the very process that creates a signal. The difference is how the amplitude of noise grows compared to the amplitude of signal. For very low-amplitude signals, all you measure is noise, while for very high-intensity signals, the noise is vanishingly small compared to the signal, even though it's huge compared to the noise of the low-amplitude signal. Our esteemed peers, I would argue, are somewhat random in their response, but weighted toward objectivity. Using this inappropriate physics model, a review conducted by four reviewers can be expected (on average) to contain two responses that are, basically, noise. By contrast, a review by 100 reviewers may only have 30 responses that are noise.

It might seem that this simply multiplies the work demanded from already-overloaded reviewers. But Lee ingeniously and credibly argues that this isn't the case. This argument is the best part of the piece, and I urge you to stop reading me and start reading Lee.
Spoiler:
For those of you too lazy to take my advice, here is the Cliff Notes version. A larger number of reviewers brings a broader range of expertise to the review. Since they can all see each other's contributions, each reviewer can focus on the part that matches their specific expertise, and thus avoid the need to cover areas with which they are less familiar, and which thus involve more work in validating references, using less familiar analysis tools, etc.
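Lee calls his signal-processing analogy an "inappropriate physics model", but the intuition is easy to make concrete. The sketch below is my own illustration, not code from Lee or from List and Höfler's platform: it treats each review as the manuscript's true quality plus independent reviewer noise, averages the panel's scores, and shows that the spread of the averaged verdict shrinks roughly as one over the square root of the panel size. Every number in it (the quality scale, the noise level, the panel sizes) is an assumption chosen purely for illustration.

# Toy model of "reviewers as noisy measurements": each score is the
# paper's true quality plus independent noise, and the panel's verdict
# is the mean of the scores. The run-to-run spread of that verdict
# shrinks roughly as 1/sqrt(N), which is the sense in which a crowd of
# 100 is less "random" than a panel of three or four. All constants
# here are illustrative assumptions, not figures from the articles.
import numpy as np

rng = np.random.default_rng(42)

TRUE_QUALITY = 7.0   # hypothetical "true" score of a manuscript on a 0-10 scale
NOISE_SD = 2.5       # assumed spread of an individual reviewer's error
TRIALS = 10_000      # simulated review rounds per panel size

for n_reviewers in (3, 10, 30, 100):
    # Each trial draws n_reviewers independent noisy scores and averages them.
    scores = TRUE_QUALITY + NOISE_SD * rng.standard_normal((TRIALS, n_reviewers))
    verdicts = scores.mean(axis=1)
    print(f"{n_reviewers:3d} reviewers: verdict spread {verdicts.std():.2f} "
          f"(theory predicts {NOISE_SD / np.sqrt(n_reviewers):.2f})")

With these made-up numbers a three-reviewer panel's verdict wobbles by about 1.4 points from round to round while a 100-reviewer crowd's wobbles by about 0.25, which is the quantitative sense behind Lee's claim that a larger crowd both dilutes the purely noisy responses and pulls the overall verdict toward objectivity.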
Angela Cochran's Should We Stop with the Commenting Already? asserts that:
"Crowdsourced peer review = post publication peer review = online commenting"
Well, List and Höfler's model isn't post-publication and it isn't online commenting, and Faculty of 1000's model distinguishes between reviews and comments. So the claimed equivalence isn't exact: both models solicit reviews.
On the other hand, Cochran's analysis does show that "online commenting" on scientific articles, in the sense of unsolicited comments, is not effective.
At the South China Morning Post, Stephen Chen's The million-dollar question in China’s relentless academic paper chase takes off from the US$2M award to a team from Sichuan Agricultural University for a paper in Cell to look at the broader issue of:
"China’s “cult-like” paper chase and the rewards that go along with it."
In this case the award might actually make sense. The paper describes:
"a genetic variant in rice that could help the crop resist rice blast, a fungus that cuts the country’s output by about 3 million tonnes a year. Since the gene occurs naturally in rice, the researchers found that existing species could be tweaked safely and quickly to acquire the trait, passing it on to future generations."
so it is clearly important. But:
"it was the publication – rather than the research itself – that prompted the university to give Chen’s team a 13 million yuan (HK$ 15 million) reward, the biggest ever to a Chinese team for a paper in an international journal."
However:
"most of the money would go to Chen’s laboratory as research funds over five years. The team members would also receive 500,000 yuan in cash."
Providing resources to continue this team's research makes sense.
Who is Actually Harmed by Predatory Publishers? by Martin Paul Eve and Ernesto Priego argues, as I have for more than a decade, that:
"established publishers have a strong motivation to hype claims of predation as damaging to the scholarly and scientific endeavour while noting that, in fact, systems of peer review are themselves already acknowledged as deeply flawed."
Hat tip to Jerri-Lynn Scofield at naked capitalism.