Two major study retractions in one month have left researchers wondering if the peer review process is broken.

Below the fold I explain that the researchers who are only now "wondering if the peer review process is broken" must have been asleep for more than the last decade.
Retraction Watch has a detailed account of the two retractions. First The Lancet:
The Lancet paper, “Hydroxychloroquine or chloroquine with or without a macrolide for treatment of COVID-19: a multinational registry analysis,” which relied on data from a private company called Surgisphere and had concluded that hydroxychloroquine was linked to a higher risk of death among some COVID-19 patients, has been dogged by questions since its publication in late May. ... now three of the four authors of the article have decided to pull it entirely. The abstaining author, Sapan Desai, is the founder of Surgisphere.

Second, NEJM:
The New England Journal of Medicine retraction followed a little more than an hour later, with Desai agreeing to the move.

Rabin writes:
The reputation of these journals rests in large part on vigorous peer review. But the process is opaque and fallible: Journals generally do not disclose who reviewed a study, what they found, how long it took or even when a manuscript was submitted. Dr. Horton and Dr. Rubin declined to provide those details regarding the retracted studies, as well. Critics have long worried that the safeguards are cracking, and have called on medical journals to operate with greater transparency.

"Long" is an understatement, and this isn't just about medical journals. In the very first post to this blog, more than 13 years ago, I summarized part of a fascinating paper by Harley et al. entitled The Influence of Academic Values on Scholarly Publication and Communication Practices thus:

I'd like to focus on two aspects of the Harley et al paper:

- They describe a split between "in-process" communication, which is rapid, flexible, innovative and informal, and "archival" communication. The former is more important in establishing standing in a field, whereas the latter is more important in establishing standing in an institution.
- They suggest that "the quality of peer review may be declining" with "a growing tendency to rely on secondary measures", "difficult[y] for reviewers in standard fields to judge submissions from compound disciplines", "difficulty in finding reviewers who are qualified, neutral and objective in a fairly closed academic community", "increasing reliance ... placed on the prestige of publication rather than ... actual content", and that "the proliferation of journals has resulted in the possibility of getting almost anything published somewhere", thus diluting "peer-reviewed" as a brand.

Since then, the oligopoly publishers have continued the brand-stretching process, and I've continued to observe it in, for example, 2011's What's Wrong With Research Communication, 2015's Stretching the "peer reviewed" brand until it snaps and 2016's More Is Not Better.
Despite its deleterious effects, brand-stretching isn't the fundamental problem. In 2013's Journals Considered Harmful I pointed to the conclusions of Deep Impact: Unintended consequences of journal rank by Björn Brembs and Marcus Munafò:
The current empirical literature on the effects of journal rank provides evidence supporting the following four conclusions: 1) Journal rank is a weak to moderate predictor of scientific impact; 2) Journal rank is a moderate to strong predictor of both intentional and unintentional scientific unreliability; 3) Journal rank is expensive, delays science and frustrates researchers; and, 4) Journal rank as established by [Impact Factor] violates even the most basic scientific standards, but predicts subjective judgments of journal quality.

The idea that journals can be ranked in terms of "quality", that higher-quality journals perform more rigorous peer review, and thus that the papers they publish are of higher quality, is just wrong. For example, Rabin quotes the editor of The Lancet:
Dr. Horton called the paper retracted by his journal a “fabrication” and “a monumental fraud.” But peer review was never intended to detect outright deceit, he said, and anyone who thinks otherwise has “a fundamental misunderstanding of what peer review is.”

The higher the perceived quality of the journal, the greater the incentives for hype and fraud. The evidence that pre-publication peer review rarely detects fraud is overwhelming. But post-publication peer review, as in these cases, is better:
“If you have an author who deliberately tries to mislead, it’s surprisingly easy for them to do so,” he said.
The retracted paper in The Lancet should have raised immediate concerns, [Dr. Peter Jüni] added. It purported to rely on detailed medical records from 96,000 patients with Covid-19, the illness caused by the coronavirus, at nearly 700 hospitals on six continents. It was an enormous international registry, yet scientists had not heard of it.

Probably no-one suspected fraud because Harvard:
The data were immaculate, he noted. There were few missing variables: Race appeared to have been recorded for nearly everyone. So was weight. Smoking rates didn’t vary much between continents, nor did rates of hypertension.
“I got goose bumps reading it,” said Dr. Jüni, who is involved in clinical trials of hydroxychloroquine. “Nobody has complete data on all these variables. It’s impossible. You can’t.”
Both retracted studies were led by Dr. Mandeep R. Mehra, a widely published and highly regarded professor of medicine at Harvard, and the medical director of the Heart and Vascular Center at Brigham and Women’s Hospital.

It is difficult for reviewers to be appropriately critical of studies led by prominent researchers, let alone to accuse them of fraud, especially since such researchers are likely to sit on the editorial boards of journals in which the reviewer aspires to publish. Choosing the best reviewers is popularly supposed to be part of the value that elite journals add. But:
“This got as much, if not more, review and editing than a standard regular track manuscript,” Dr. Rubin, the editor in chief of the N.E.J.M., said of the heart study appearing in the N.E.J.M., which was based on a smaller set of Surgisphere data. “We didn’t cut corners. We just didn’t ask the right people.”

Rabin sums up with a quote that, except for the pandemic mention, could have come anytime in the last decade:
“We are in the midst of a pandemic, and science is moving really fast, so there are extenuating circumstances here,” said Dr. Ivan Oransky, co-founder of Retraction Watch, which tracks discredited research.
“But peer review fails more often than anyone admits,” he said. “We should be surprised it catches anything at all, the way it’s set up.”
Scientists Take Aim at Another Coronavirus Study in a Major Journal by Apoorva Mandavilli is yet another example of failed pre-publication peer review, this time in PNAS. As soon as it appeared, it was both widely publicized and criticized by experts:
"A group of leading scientists is calling on a journal to retract a paper on the effectiveness of masks, saying the study has “egregious errors” and contains numerous “verifiably false” statements.

The scientists wrote a letter to the journal editors on Thursday, asking them to retract the study immediately “given the scope and severity of the issues we present, and the paper’s outsized and immediate public impact.” ... The study now under fire was published on June 11 in the journal Proceedings of the National Academy of Sciences. The lead author is Mario Molina, who won the Nobel Prize in Chemistry in 1995, with two other scientists, ... Experts said the paper’s conclusions were similar to those from others — masks do work — but they objected to the methodology as deeply flawed."
And, yes, PNAS didn't subject the paper to proper peer review:
"The paper was submitted under a little-known proviso, called the “contributed” track, by which members of the National Academies are permitted to solicit their own peer reviews and to submit them to P.N.A.S. along with the manuscript. About 20 percent of the papers that P.N.A.S. publishes are handled in this way, according to an analysis in 2016."
Peer review should be blind, so that reviewers don't know who the authors are, because, as in this case and the two above, a prominent author's name suppresses critical reviews. And authors should never be allowed to choose their own reviewers. How can elite journals such as PNAS claim high-quality peer review when they violate these two basic principles?
Katyanna Quach reports that:
"Springer Nature has reversed its decision to publish a paper describing a neural network supposedly capable of detecting criminals from their faces alone – after top boffins signed a letter branding the study harmful junk science.
The missive, backed this week by 1,168 researchers, students, and engineers, and addressed to the academic publisher's editorial committee, listed numerous studies that rubbished the suggestion criminality can be predicted by algorithms from something as trivial as your face."