Geoff Bilder's Open Persistent Identifier Infrastructures: The Key to Scaling Mandate Auditing and Assessment Exercises was ostensibly a report on the need for and progress in bringing together the many disparate identifier systems for organizations in order to facilitate auditing and assessment processes. It was actually an insightful rant about how these processes were corrupting the research ecosystem. Below the fold, I summarize Geoff's argument (I hope Geoff will correct me if I misrepresent him) and rant back.
The non-rant part of Geoff's talk started from the premise that researchers and their institutions are increasingly subjected by funders and governments to assessments, such as the UK's Research Excellence Framework, and to mandates, such as the Wellcome Trust's open access mandate. Compliance with the mandates has been generally poor.
Assessing how poor compliance is, and assessing the excellence of research, both require an ample supply of high-quality metadata, which in principle Crossref is well placed to supply. To assess research productivity, three main types of identifier are needed: content, contributor, and organization (the sketch after the list shows how each appears in a real metadata record). Geoff used this three-legged stool image to show that:
- Content identifiers had converged on DOIs from DataCite and Crossref, although experience has shown that there are problems in the ways DOIs are used in practice.
- Contributor identifiers had converged on ORCID, although Herbert Van de Sompel, Michael Nelson and Martin Klein had shown the limits of ORCID's coverage, both of subjects outside the hard sciences and of non-English-speaking countries.
- Organization identifiers had notably failed to converge. Last October Crossref, DataCite and ORCID announced the Organization Identifier Project to address this problem.
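To make the three legs concrete, here is a minimal sketch (mine, not Geoff's) that pulls a single record from Crossref's public REST API and prints whatever identifiers each leg contributes. The DOI is Crossref's long-standing test record; substitute any registered DOI. Note how the organization leg comes back as free-text affiliation strings, which is exactly the convergence failure Geoff described:

```python
# Minimal sketch: the three identifier legs in one Crossref record.
# Assumes only the public Crossref REST API (api.crossref.org).
import json
import urllib.parse
import urllib.request

def work_metadata(doi):
    """Fetch the Crossref metadata record for a DOI."""
    url = "https://api.crossref.org/works/" + urllib.parse.quote(doi, safe="")
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["message"]

def identifier_triple(doi):
    """Print the content, contributor, and organization identifiers
    present in one record -- the three legs of the stool."""
    msg = work_metadata(doi)
    print("Content (DOI):", msg["DOI"])
    for author in msg.get("author", []):
        name = (author.get("given", "") + " " + author.get("family", "")).strip()
        # Contributor leg: an ORCID iD, if the author has claimed one.
        print("Contributor:", name, "-", author.get("ORCID", "no ORCID"))
        # Organization leg: affiliations are free-text strings, not
        # identifiers -- there is nothing here to converge on yet.
        for affiliation in author.get("affiliation", []):
            print("  Organization (free text):", affiliation.get("name"))

identifier_triple("10.5555/12345678")  # Crossref's test DOI; any DOI works
```

The content leg resolves cleanly, the contributor leg resolves when the author has an ORCID iD on record, and the organization leg is just strings, which is why auditing compliance by institution remains so hard.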
I have a great counter-example to the premise that assessment pressure is what makes researchers productive. The physicist G. I. Taylor (my great-uncle) started in 1909 with the classic experiment showing that interference fringes were still observed at light intensities so low that only a single photon was in flight at a time. The following year, at age 23, he was elected a Fellow of Trinity College, and apart from a few years of teaching he was able to pursue research undisturbed by any assessment for the next six decades. Despite this absence of pressure, he was one of the 20th century's most productive scientists, with four huge volumes of collected papers over a 60-year career.
[Figure: Papers/year (linear)]
The Economist's Incentive Malus, ... is based on The natural selection of bad science by Paul E. Smaldino and Richard McElreath, which starts:
Poor research design and data analysis encourage false-positive findings. Such poor methods persist despite perennial calls for improvement, suggesting that they result from something more than just misunderstanding. The persistence of poor methods results partly from incentives that favour them, leading to the natural selection of bad science. This dynamic requires no conscious strategizing—no deliberate cheating nor loafing—by scientists, only that publication is a principal factor for career advancement....
The Economist reports that Smaldino and McElreath's conclusion is bleak:
that when the ability to publish copiously in journals determines a lab’s success, then “top-performing laboratories will always be those who are able to cut corners”—and that is regardless of the supposedly corrective process of replication.
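Smaldino and McElreath make that case with a formal evolutionary model. A much cruder toy version (my sketch with made-up parameters, not their model) shows the same dynamic: if new labs imitate existing labs in proportion to publication count, and cutting methodological corners raises that count, rigour is selected away without anyone deciding to cheat:

```python
# Toy sketch of the selection dynamic (not Smaldino & McElreath's model):
# labs with lower methodological "effort" publish more, successful labs
# are imitated, and mean effort collapses over the generations.
import random

N_LABS, GENERATIONS, NOISE = 100, 50, 0.05

# effort in [0, 1]: 1.0 = rigorous methods, 0.0 = maximal corner-cutting
labs = [random.random() for _ in range(N_LABS)]

def publications(effort):
    # Assumption: corner-cutting doubles expected throughput at effort 0.
    return random.expovariate(1.0) * (2.0 - effort)

for generation in range(GENERATIONS):
    scores = [publications(effort) for effort in labs]
    # Each new lab copies an existing lab, chosen in proportion to its
    # publication count, inheriting its effort level plus a little noise.
    labs = [min(1.0, max(0.0,
                         random.choices(labs, weights=scores)[0]
                         + random.gauss(0.0, NOISE)))
            for _ in range(N_LABS)]

print("mean effort after selection:", sum(labs) / N_LABS)
```

Run it a few times: mean effort reliably drifts toward zero. Note that replication appears nowhere in the payoff, which is the point about the "supposedly corrective process" above.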
[Figure: Papers/year (log-linear)]
The time for a serious, sustained international effort to halt publication pollution is now. Otherwise scientists and physicians will not have to argue about any issue—no one will believe them anyway. (See also John Michael Greer.)
[Figure: Post-PhD science career tracks]
My Ph.D. was in Mechanical Engineering, so I'm not a scientist in the sense the Royal Society uses. I was a post-doc (at Edinburgh University) and then research staff (at Carnegie-Mellon University) before moving to industry (initially at Sun Microsystems) and eventually back to academia (at Stanford). I've published quite a bit, both from academia and from industry, but I was never in the publish-or-perish rat-race. I was always assessed on the usefulness of the stuff I built; the pressures in engineering are different.
[Figure: Research funding flows]
Besides, Ph.D.s leaving academia for industry is a good thing. Most of the "engineers" I worked with at my three successful Silicon Valley startups had Ph.D.s in physics, mathematics and computer science, not engineering. My Mech. Eng. Ph.D. was an outlier. Silicon Valley would not exist but for Ph.D.s leaving research to create products in industry.
1 comment:
Rebecca Hill at The Register notes that the Wellcome Trust has updated its open access policy to include, among other things, software:
"The existing data management and sharing policy, introduced in 2007, requires that grant holders make data available "in a timely and responsible manner, with as few restrictions as possible".
The new policy extends this to original software and research materials such as cell lines, reagents and antibodies."
This is important. Much of the time the data isn't really useful without the software. And the new policy will give emulation increased importance.