The authors draw the following conclusions:
The current empirical literature on the effects of journal rank provides evidence supporting the following four conclusions: 1) Journal rank is a weak to moderate predictor of scientific impact; 2) Journal rank is a moderate to strong predictor of both intentional and unintentional scientific unreliability; 3) Journal rank is expensive, delays science and frustrates researchers; and, 4) Journal rank as established by [Impact Factor] violates even the most basic scientific standards, but predicts subjective judgments of journal quality.

Even if you disagree with their conclusions, their extensive bibliography is a valuable resource. Below the fold I discuss selected quotes from the paper.
Among the revealing examples they cite, one beautifully illustrates the inability of reviewers and editors in practice to perform even their most basic functions. I pointed this out four years ago:
the recent discovery of a ‘Default-Mode Network’ in rodent brains was, presumably, made independently by two different sets of neuroscientists and published only within a few months of each other. Perhaps because of journal rank, the later publication in the higher ranking journal was mentioned in a subsequent high-ranking publication. It is straightforward to project that the later publication will now go on to be cited more often than the earlier report of the same discovery in a lower ranking journal, especially since the later publication did not cite the earlier one, despite the final version having been submitted months after the earlier publication appeared.

You have to love the "presumably". Another example shows that the Impact Factor (IF) is not merely scientifically indefensible, but also completely corrupt:
The fact that publishers have the option to negotiate how their IF is calculated is well-established – in the case of PLoS Medicine, the negotiation range was between 2 and about 11. What is negotiated is the denominator in the IF equation (i.e., which published articles are counted), given that all citations count towards the numerator whether they result from publications included in the denominator or not. ... For instance, the numerator and denominator values for Current Biology in 2002 and 2003 indicate that while the number of citations remained relatively constant, the number of published articles dropped. This decrease occurred after the journal was purchased by Cell Press (an imprint of Elsevier), despite there being no change in the layout of the journal. Critically, the arrival of a new publisher corresponded with a retrospective change in the denominator used to calculate IF.
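To make the asymmetry they describe concrete, the standard two-year Impact Factor is computed roughly as follows (a sketch of the usual Thomson Reuters definition; the classification of "citable items" in the denominator is exactly what gets negotiated):

\[
\mathrm{IF}(J, Y) \;=\; \frac{\text{citations received in year } Y \text{ by anything } J \text{ published in } Y-1 \text{ and } Y-2}{\text{number of ``citable items'' } J \text{ published in } Y-1 \text{ and } Y-2}
\]

Shrinking the denominator (say, by reclassifying front matter or commentary as non-citable) raises the IF even though the citation counts in the numerator are unchanged.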
The authors observe that:

Inasmuch as journal rank guides the appointment and promotion policies of research institutions, the increasing rate of misconduct that has recently been observed may prove to be but the beginning of a pandemic: It is conceivable that, for the last few decades, research institutions world-wide may have been hiring and promoting scientists who excel at marketing their work to top journals, but who are not necessarily equally good at conducting their research. Conversely, these institutions may have purged excellent scientists from their ranks, whose marketing skills did not meet institutional requirements. If this interpretation of the data is correct, we now have a generation of excellent marketers (possibly, but not necessarily also excellent scientists) as the leading figures of the scientific enterprise, constituting another potentially major contributing factor to the rise in retractions. This generation is now in charge of training the next generation of scientists, with all the foreseeable consequences for the reliability of scientific publications in the future.

The authors call for reform, but point out that the current proposals, perhaps because they underestimate the depth of the corruption they have to deal with, are insufficiently radical and thus inadequate:
the three models which are currently aimed at publishing reform are not sustainable in the long term. First, Gold Open Access publishing without abolishment of journal rank (or strong market regulation with, e.g. strict price caps) will lead to a luxury segment in the market, as evidenced not only by suggested article processing charges nearing 40,000€ (US$50,000) for the highest-ranking journals, but also by the correlation of existing article processing charges with journal rank. Such a luxury segment would entail that only the most affluent institutions or authors would be able to afford publishing their work in high-ranking journals, anathema to the supposed meritocracy of science. Hence, universal, unregulated Gold Open Access is one of the few situations we can imagine that would potentially be even worse than the current status quo. Second, Green Open Access publishing, while expected to be more cost-effective for institutions than Gold Open Access, entails twice the work on the part of the authors and needs to be mandated and enforced to be effective, thus necessitating an additional layer of bureaucracy, on top of the already unsustainable status quo, which would not be seriously challenged. Moreover, some publishers have excluded any cooperation with green publishing schemes. Third, Hybrid Open Access publishing inflates pricing and allows publishers to not only double-dip into the public purse, but to triple-dip. Thus, Hybrid Open Access publishing is probably the most expensive option overall.

Fundamentally, their argument is that the very concept of a journal, as a collection of articles with a brand that can be interpreted as denoting quality, is in and of itself harmful because:
- it provides strong incentives to exaggeration and fraud;
- the selection process cannot in practice distinguish between high and low quality.
2 comments:
Charles Day, Physics Today's online editor, has an interesting blog post on this paper here.
Sabine Hossenfelder has another here.
The editors of Infection and Immunity in 2011 reflected on a spate of recent retractions and wrote a paper, "Retracted Science and the Retraction Index". They conclude:
"Using a novel measure that we call the “retraction index,” we found that the frequency of retraction varies among journals and shows a strong correlation with the journal impact factor."