The basic idea of Solid is that each person would own a Web domain, the "host" part of a set of URLs that they control. These URLs would be served by a "pod", a Web server controlled by the user that implemented a whole set of Web API standards, including authentication and authorization. Browser-side apps would interact with these pods, allowing the user to:
- Export a machine-readable profile describing the pod and its capabilities.
- Write content for the pod.
- Control others' access to the content of the pod.
Pods would have inboxes to receive notifications from other pods. So that, for example, if Alice writes a document in her pod and Bob writes a comment in his pod that links to it, a notification appears in the inbox of Alice's pod announcing that event. Alice can then link from the document in her pod to Bob's comment in his pod. In this way, users are in control of their content which, if access is allowed, can be used by Web apps elsewhere.
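To make the notification flow concrete, here is a minimal sketch of how Bob's pod might notify Alice's inbox, assuming hypothetical pod URLs and the W3C Linked Data Notifications pattern (an HTTP POST of JSON-LD to the advertised inbox); the vocabulary shown is illustrative rather than anything Solid mandates.

```python
# A minimal sketch (not Solid's actual API) of how Bob's pod might notify
# Alice's pod about his comment, following the W3C Linked Data Notifications
# pattern: an HTTP POST of a JSON-LD activity to the target inbox URL.
# All URLs below are hypothetical placeholders.
import json
import requests

ALICE_INBOX = "https://alice.example/inbox/"      # advertised in Alice's pod profile
ALICE_DOCUMENT = "https://alice.example/papers/draft1"
BOB_COMMENT = "https://bob.example/comments/42"

notification = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Announce",
    "actor": "https://bob.example/profile#me",
    "object": BOB_COMMENT,        # the comment in Bob's pod
    "target": ALICE_DOCUMENT,     # the document in Alice's pod it links to
}

resp = requests.post(
    ALICE_INBOX,
    data=json.dumps(notification),
    headers={"Content-Type": "application/ld+json"},
)
resp.raise_for_status()  # Alice's pod can now link her document to Bob's comment
```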
In his Paul Evan Peters Award Lecture, my friend Herbert Van de Sompel applied this concept to scholarly communication, envisaging a world in which access, for both humans and programs, to all the artifacts of research would be greatly enhanced.
In Herbert's vision, institutions would host their researchers' "research pods", which would be part of their personal domain but would have extensions specific to scholarly communication, such as automatic archiving upon publication.
Follow me below the fold for an update to my take on the practical possibilities of Herbert's vision.
This improved access would be enabled by metadata, generated both by the decentralized Web infrastructure and by the researchers, connecting the multifarious types of digital objects representing the progress of their research.
The key access improvements in Herbert's vision are twofold:
- Individuals, not platforms such as Google or Elsevier, control access to their digital objects.
- Digital objects in pods are described by, and linked by, standardized machine-actionable metadata.
Herbert was skeptical that transitioning scholarly communication in this way was achievable. I agreed with him at length in both Herbert Van de Sompel's Paul Evan Peters Award Lecture and It Isn't About The Technology, but didn't address the obvious question:
How much of the improved access in Herbert's vision could be implemented in the Web we have right now, rather than waiting for the pie-in-the-sky-by-and-by decentralized Web?
Clearly, the academic publishing oligopoly and the copyright maximalists aren't going to allow us to implement the first part. Even were Open Access to become the norm, their track record shows it will be Open Access to digital objects they host (and in many cases under a restrictive license).
Elsevier's Research Infrastructure
Thanks to generous funding from the Andrew W. Mellon Foundation (I helped write the grant proposal), a team at the Internet Archive is working on a two-pronged approach. Prong 1 starts from Web objects known to be scholarly outputs because, for example, they have been assigned a DOI and:
- Ensures that, modulo paywall barriers, they and the objects to which they link are properly archived by the Wayback Machine (see the sketch after this list).
- Extracts and, as far as possible, verifies the bibliographic metadata for the archived objects.
- Implements access to the archived objects in the Wayback Machine via bibliographic rather than URL search.
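As a rough illustration of the first of those steps, checking that a scholarly URL is preserved, here is a minimal sketch using the Wayback Machine's public availability endpoint; the DOI is a placeholder, and the team's actual pipeline presumably works at far larger scale through different internal interfaces.

```python
# A minimal sketch of checking whether a scholarly URL is already preserved
# in the Wayback Machine, using the public availability API. This illustrates
# the idea; it is not the Internet Archive team's actual pipeline.
import requests

DOI = "10.1234/example.doi"  # placeholder; substitute a real DOI

# Resolve the DOI to its landing page by following doi.org redirects.
landing = requests.get(f"https://doi.org/{DOI}", allow_redirects=True).url

# Ask the Wayback Machine whether a capture of that page exists.
resp = requests.get("https://archive.org/wayback/available", params={"url": landing})
resp.raise_for_status()
snapshot = resp.json().get("archived_snapshots", {}).get("closest")

if snapshot and snapshot.get("available"):
    print("Archived at:", snapshot["url"])
else:
    print("Not yet archived:", landing)
```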
Fatcat entry for Joi Ito blog post
Fatcat is a versioned, publicly-editable catalog of research publications: journal articles, conference proceedings, pre-prints, blog posts, and so forth. The goal is to improve the state of preservation and access to these works by providing a manifest of full-text content versions and locations.
This service does not directly contain full-text content itself, but provides basic access for human and machine readers through links to copies in web archives, repositories, and the public web.
Significantly more context and background information can be found in The Guide.
Now, suppose Fatcat succeeds in its goals. It would provide a metadata infrastructure that could be enhanced to provide many of the capabilities Herbert envisaged, albeit in a centralized rather than a decentralized manner. The pod example above could be rewritten for the enhanced Fatcat environment thus:
If Alice posts a document to the Web that Fatcat recognizes in the Wayback Machine's crawls as a research output, Fatcat will index it, ensure it and the things it links to are archived, and create a page for it. Suppose Bob, a researcher with a blog which Fatcat indexes via Bob's ORCID entry, writes a comment in one of his blog's posts that links to Alice's document. Fatcat's crawls will notice the comment and:
- Update the page for Bob's blog post to include a link to Alice's document.
- Update the page for Alice's document to include a link to Bob's comment.
Because Fatcat exports its data via an API as JSON, the information about each document, including its links to other documents, is available in machine-actionable form to third-party services. They can create their own UIs, and aggregate the data in useful ways (a minimal sketch of such a query appears after the list below).
As a manually-created demonstration of what this enhanced Fatcat would look like, take this important paper in Science's 27th January 2017 issue, Gender stereotypes about intellectual ability emerge early and influence children’s interests by Lin Bian, Sarah-Jane Leslie and Andrei Cimpian. The authors' affiliations are the University of Illinois at Urbana-Champaign, New York University, and Princeton University. Here are the things I could find in about 90 minutes that the enhanced Fatcat would link to and from:
- The paper at Science (paywalled)
- About 10 copies of the PDF on the open web.
- The supplementary materials.
- The data at the Open Science Framework.
- The authors: Lin Bian, Sarah-Jane Leslie, Andrei Cimpian.
- The authors' websites: Bian, Leslie, Cimpian.
- Leslie's Wikipedia page.
- The 31/35 citations that have DOIs from the supplementary materials.
- The 22/35 papers it cites that have PMIDs from europepmc.org.
- The 4 books it cites via bibliographic metadata.
- The 22 papers with PMIDs that cite it from europepmc.org.
- 32 instances of press coverage from Bian's website's Publications page.
- A related article in Scientific American by Cimpian and Leslie.
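As promised above, here is a minimal sketch of the kind of third-party query the JSON API makes possible. The endpoint path, parameters, and field names are assumptions about Fatcat's REST API rather than a guaranteed interface, and the DOI is a placeholder.

```python
# A minimal sketch of a third-party service consuming Fatcat's JSON API.
# Assumptions: a REST endpoint of the form /v0/release/lookup that accepts a
# DOI, and a response carrying contributor and reference lists; the exact
# paths and field names should be checked against the API documentation.
import requests

API = "https://api.fatcat.wiki/v0"
DOI = "10.1234/example.doi"  # placeholder; substitute the DOI of interest

resp = requests.get(f"{API}/release/lookup", params={"doi": DOI})
resp.raise_for_status()
release = resp.json()

print("Title:", release.get("title"))
for contrib in release.get("contribs", []):
    print("  author:", contrib.get("raw_name"))
for ref in release.get("refs", []):
    # Each reference can carry its own identifiers, letting a third-party UI
    # link onward to the cited works.
    print("  cites:", ref.get("title"))
```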
Linking together the various digital objects representing the outputs of a single research effort is at the heart of Herbert's vision. It is true that the enhanced Fatcat would be centralized, and thus potentially a single point of failure. And that it would be less timely, less efficient, and would lack granular access control (it can only deal with open access objects). But it's also true that the enhanced Fatcat avoids many of the difficulties of the decentralized version that I raised, which are caused by the presence of multiple copies of objects, for example in the personal pods of each member of a multitudinous research team, or at their various institutions.
Given that both Herbert and I express considerable skepticism as to the feasibility of implementing his vision even were a significant part of the Web to become decentralized, exploring ways to deliver at least some of its capabilities on a centralized infrastructure seems like a worthwhile endeavor.
Update: Herbert points out that related work is also being funded by the Mellon Foundation in a collaborative project between Los Alamos and Old Dominion called myresearch.institute:
The modules in the pipeline are as follows:
- Discovery of new artifacts deposited by a researcher in a portal is achieved by a Tracker that recurrently polls the portal's API using the identity of the researcher in each portal as an access key. If a new artifact is discovered, its URI is passed on to the capture process (a minimal polling sketch follows this list).
- Capturing an artifact is achieved by using web archiving techniques that pay special attention to generating representative high fidelity captures. A major project finding in this realm is the use of Traces that abstractly describe how a web crawler should capture a certain class of web resources. A Trace is recorded by a curator through interaction with a web resource that is an instance of that class. The result of capturing a new artifact is a WARC file in an institutional archive. The file encompasses all web resources that are an essential part of the artifact, according to the curator who recorded the Trace that was used to guide the capture process.
- Archiving is achieved by ingesting WARC files from various institutions into a cross-institutional web archive that supports the Memento "Time Travel for the Web" protocol. As such, the Mementos in this web archive integrate seamlessly with those in other web archives.
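To make the Tracker's discovery step concrete, here is a minimal sketch of a recurrent polling loop; the portal API, its endpoint, and the researcher identifier shown are hypothetical placeholders, not the project's actual interfaces.

```python
# A minimal sketch of a Tracker-style discovery loop: poll a portal's API for
# artifacts deposited by a known researcher, and hand any new URIs to the
# capture process. The portal URL, endpoint, and identifier are hypothetical.
import time
import requests

PORTAL_API = "https://portal.example/api/artifacts"   # hypothetical portal endpoint
RESEARCHER_ID = "0000-0002-1825-0097"                  # e.g. the researcher's ORCID iD
POLL_INTERVAL = 3600                                   # seconds between polls

def capture(uri: str) -> None:
    """Stand-in for the capture process that produces a WARC file."""
    print("queueing for capture:", uri)

seen: set[str] = set()
while True:
    resp = requests.get(PORTAL_API, params={"author": RESEARCHER_ID})
    resp.raise_for_status()
    for artifact in resp.json():          # assume a JSON list of artifact records
        uri = artifact["uri"]
        if uri not in seen:               # only newly discovered artifacts
            seen.add(uri)
            capture(uri)
    time.sleep(POLL_INTERVAL)
```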
Major differences between the two include:
- Targeted at specific platforms vs. generic Web.
- Researcher-centric vs. object-centric.
- Content-focused vs. metadata-focused.
- Curator-driven vs. automated collection.
1 comment:
Thanks for the write-up!
I manually added a couple of the additional resources you linked to (namely, dataset locations): https://fatcat.wiki/work/jgveddowizhs3i3zmubzfrjl4m
Though of course the entire point is to automate these processes... or better yet, have "the crowd" automate the parts they are interested in, and combine the results.