Last September Cameron Neylon had an important post entitled "Policy Design and Implementation Monitoring for Open Access":
We know that those Open Access policies that work are the ones that have teeth. Both institutional and funder policies work better when tied to reporting requirements. The success of the University of Liège in filling its repository is in large part due to the fact that works not in the repository do not count for annual reviews. Both the NIH and Wellcome policies have seen substantial jumps in the proportion of articles reaching the repository when grantees' final payments, or their ability to apply for new grants, were withheld until issues were corrected.
He points out that:
Monitoring Open Access policy implementation requires three main steps:

- Identify the set of outputs that are to be audited for compliance
- Identify accessible copies of the outputs at publisher and/or repository sites
- Check whether the accessible copies are compliant with the policy

Each of these steps is difficult or impossible in our current data environment. Each of them could be radically improved with some small steps in policy design and metadata provision, alongside the wider release of data on funded outputs.
He makes three important recommendations:
- Identification of Relevant Outputs: Policy design should include mechanisms for identifying and publicly listing outputs that are subject to the policy. The use of community standard persistable and unique identifiers should be strongly recommended. Further work is needed on creating community mechanisms that identify author affiliations and funding sources across the scholarly literature.
- Discovery of Accessible Versions: Policy design should express compliance requirements for repositories and journals in terms of metadata standards that enable aggregation and consistent harvesting. The infrastructure to enable this harvesting should be seen as a core part of the public investment in scholarly communications.
- Auditing Policy Implementation: Policy requirements should be expressed in terms of metadata requirements that allow for automated implementation monitoring. RIOXX and ALI proposals represent a step towards enabling automated auditing but further work, testing and refinement will be required to make this work at scale.
What he is saying is that defining policies that mandate certain aspects of Web-published materials without mandating that they conform to standards that make them enforceable over the Web is futile. This should be a no-brainer. The idea that, at scale, without funding, conformance will be enforced manually is laughable. The idea that researchers will voluntarily comply when they know that there is no effective enforcement is equally laughable.
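To make the enforceability point concrete, here is a minimal sketch of what machine-auditable compliance checking looks like once outputs carry standard metadata. The record fields (`doi`, `repository_url`, `license`) and the set of compliant licenses are hypothetical illustrations, not the RIOXX or ALI vocabularies themselves; a real audit would harvest such fields from repositories and publishers via those profiles.

```python
# Sketch of automated policy auditing over harvested article metadata.
# Field names and policy terms are illustrative assumptions only.

REQUIRED_FIELDS = ("doi", "repository_url", "license")
COMPLIANT_LICENSES = {"CC-BY", "CC-BY-SA"}  # example policy terms


def audit(record):
    """Return a list of compliance problems for one output's metadata."""
    problems = [f"missing {f}" for f in REQUIRED_FIELDS if not record.get(f)]
    lic = record.get("license")
    if lic and lic not in COMPLIANT_LICENSES:
        problems.append(f"non-compliant license: {lic}")
    return problems


records = [
    {"doi": "10.1234/example.1",
     "repository_url": "https://repo.example/1",
     "license": "CC-BY"},
    {"doi": "10.1234/example.2",
     "repository_url": "",
     "license": "All rights reserved"},
]

for rec in records:
    issues = audit(rec)
    print(rec["doi"], "->", "OK" if not issues else "; ".join(issues))
```

The point is not the twenty lines of code; it is that the check runs at all only because every record carries a persistent identifier and license metadata in a predictable form. Without that, each audit is a manual web search.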