Saturday, October 13, 2007

Who's looking after the snowman?

In a post to the liblicense mailing list, James O'Donnell, Provost of Georgetown University, asks:

"So when I read ten years from now about this cool debate among Democratic candidates that featured video questions from goofy but serious viewers, including a snowman concerned about global warming, and people were watching it on YouTube for weeks afterwards: how will I find it? Who's looking after the snowman?


This is an important question. Clearly, future scholars will not be able to understand the upcoming election without access to YouTube videos, blog posts and other ephemera. In this particular case, I believe there are both business and technical reasons why Provost O'Donnell can feel somewhat reassured, and legal and library reasons why he should not. Follow me below the fold for the details.

Here is the pre-debate version of the snowman's video, and here are the candidates' responses. CNN, which broadcast the debate, has the coverage here. As far as I can tell the Internet Archive doesn't collect videos like these.

From a business point of view, YouTube videos are a business asset of Google, and will thus be preserved with more than reasonable care and attention. As I argued here, content owned by major publishing corporations (a group that now includes Google) is at very low risk of accidental loss; the low and rapidly decreasing cost per byte of storage makes the business decision to keep it available rather than take it down a no-brainer. And that ignores the other aspect of the Web's Long Tail economics: the bulk of the revenue comes from the less popular content.
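To make the economics concrete, here is a back-of-the-envelope calculation in Python. Every number in it (the size of a video, the cost of storage, the rate at which storage gets cheaper) is an illustrative assumption of mine, not a figure from Google:

```python
# Back-of-the-envelope cost of keeping one video online for a decade.
# All numbers are illustrative assumptions, not actual YouTube figures.

video_mb = 10.0            # assumed size of one Flash Video file, in MB
cost_per_gb_year = 1.0     # assumed fully-loaded storage cost, in $/GB/year
kryder_rate = 0.7          # assumed cost multiplier per year (~30% cheaper/year)

total = 0.0
yearly = (video_mb / 1024.0) * cost_per_gb_year
for year in range(10):
    total += yearly
    yearly *= kryder_rate  # storage gets cheaper every year

print(f"Cost to keep one video for 10 years: ${total:.4f}")
```

On these assumptions the ten-year total is a few cents, which is why taking popular content down makes no business sense.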

Technically, YouTube video is Flash Video. It can easily be downloaded, for example via this website. The content is in a widely used web format with at least two open-source players (MPlayer and VLC). It is thus perfectly feasible to preserve it, and for the reasons I describe here the open-source players make it extraordinarily likely that the video will still be playable in 10, or even 30, years. If someone collects the video from YouTube and preserves the bits, it is highly likely that the bits will be viewable indefinitely.
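As a sketch of what collecting and preserving the bits might involve, here is a minimal Python example that fetches a video file and records a digest of it. The URL is a placeholder of mine; YouTube does not publish stable download URLs, so a real collector needs a tool that extracts the video's location from the page:

```python
import hashlib
import urllib.request

# Placeholder URL: YouTube does not publish stable download URLs,
# so a real collector must extract the video location from the page.
url = "http://example.com/snowman.flv"

with urllib.request.urlopen(url) as response:
    data = response.read()

# Preserve the bits ...
with open("snowman.flv", "wb") as f:
    f.write(data)

# ... and record a digest, so a future reader can verify that the
# preserved bits are the bits that were collected.
print("SHA-256:", hashlib.sha256(data).hexdigest())
```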

But will anyone other than Google actually collect and preserve the bits? Provost O'Donnell's library might want to do so, but the state of copyright law places some tricky legal obstacles in the way. Under the DMCA, preserving a copy of copyrighted content requires the copyright owner's permission. Although I heard rumors that CNN would release video of the debate under a Creative Commons license, their website carries a normal "All Rights Reserved" copyright notice. And on YouTube, there is no indication of the videos' copyright status. A library downloading the videos would have to assume it didn't have permission to preserve them. It could follow the example of the Internet Archive and depend on the "safe harbor" provision, complying with any "takedown letters" by removing the content. This is a sensible approach for the Internet Archive, which aims to be a large sample of the Web, but not for the kind of focused collections Provost O'Donnell has in mind.

The DMCA, patents and other IP restrictions place another obstacle in the way. I verified that an up-to-date Ubuntu Linux system using the Totem media player plays downloaded copies of YouTube videos very happily. Totem uses the GStreamer media framework with plugins for specific media formats. Playing the YouTube videos used the FFmpeg library. As with all software, it is possible that some patent holder might claim that this software violated their patents, or that in some way it could be viewed as evading a content protection mechanism as defined by the DMCA. As with all open-source software, there is no indemnity from a vendor against such claims. Media formats are so notorious for such patent claims that Ubuntu segregates many media plugins into separate classes and warns during the install process that the user may be straying into a legal gray area. The uncertainty surrounding the legal status is carefully cultivated by many players in the media market, because it increases the returns they can expect from what are, in many cases, very weak patents and content protection mechanisms. Many libraries judge that the value of the content they would like to preserve doesn't justify the legal risks of preserving it.
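Incidentally, the check that a downloaded copy still decodes can be automated with the same open-source stack. Here is a minimal sketch that shells out to ffprobe, which ships with FFmpeg; the file name is a placeholder:

```python
import subprocess

# ffprobe ships with FFmpeg, the library Totem used to decode the videos.
# "snowman.flv" is a placeholder file name.
result = subprocess.run(
    ["ffprobe", "-v", "error", "-show_format", "snowman.flv"],
    capture_output=True, text=True,
)
if result.returncode == 0:
    print("File parses cleanly:")
    print(result.stdout)   # container format, duration, bitrate, ...
else:
    print("Decode problem:", result.stderr)
```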

Tuesday, October 9, 2007

Workshop on Preserving Government Information

Here is an announcement of a workshop in Oxford on Preserving & Archiving Government Information. Alas, our invitation arrived too late for us to accept, but the results should be interesting. It's sponsored by the Portuguese Management Centre for an e-Government Network (CEGER). Portugal's recent history of dictatorship tends to give them a realistic view of government information policies.

Wednesday, October 3, 2007

Update on Preserving the Record

In my post "Why Preserve E-Journals? To Preserve the Record" I used the example of government documents to illustrate why trusting web publishers to maintain an accurate record is fraught with danger. The temptation to mount an "insider attack" to make the record less inconvenient or embarrassing is too much to resist.

Below the fold I report on two more examples, one from the paper world and one from the pre-web electronic world, showing the value of a tamper-evident record.


For the first example I'm indebted to Prof. Jeanine Pariser Plottel of Hunter College, who has compared the pre- and post-WWII editions of books by right-wing authors in France and shown that the (right-wing) publishers sanitized the post-WWII editions to remove much of the anti-Semitic rhetoric. Note that this analysis was possible only because the pre-WWII editions survived in libraries and private collections. They were widely distributed on durable, reasonably tamper-evident media. They survived invasion, occupation, counter-invasion and social disruption. It would have been futile for the publishers to claim that the pre-WWII editions had somehow been faked after the war to discredit the right. Prof. Plottel points to two examples of "common practice":

1. The books of Robert Brasillach (who was executed), edited by his brother-in-law Maurice Bardèche, Professor of 19th Century French Literature at the Sorbonne during the war and stripped of his post after it. The two men had published an Histoire du cinéma in 1935; in subsequent editions, republished several times after the war beginning in 1947, the term "fascism" is replaced by "anti-communisme."

2. Lucien Rebatet's Les décombres (1942) was one of the best-sellers of the Occupation, and it is virulently anti-Semitic. A new, expurgated version was later published under the title Mémoire d'un fasciste. Who was Rebatet? you ask. Relegated to oblivion, I hope. Still, you may remember Truffaut's film Le dernier métro (wonderful and worth seeing, if you haven't). The character Daxiat is modeled upon Rebatet.

In a web-only world it would have been much easier for the publishers to sanitize history. Multiple libraries keeping copies of the original editions would have been difficult under the DMCA. It is doubtful whether the library copies would have survived the war. The publishers' changes would likely have remained undetected, and had they been detected, the critics would have been much easier to discredit.

The second example is here. This fascinating paper is based on Will Crowther's original source code for ADVENT, the pioneering work of interactive fiction that became, with help from Don Woods, the popular Adventure game. The author, Dennis Jerz, shows that the original was based closely on a real cave, part of Kentucky's Colossal Cave system. This observation was obscured by Don Woods' later improvements.

As the swift and comprehensive debunking of the allegations in SCO vs. IBM shows, archaeology of this kind for Open Source software is now routine and effective. This is because the code is preserved in third-party archives which use source code control systems derived from Marc Rochkind's 1972 SCCS, and thus provide a somewhat tamper-evident record. Although Jerz shows that Crowther's original ADVENT dates from the 1975-6 academic year, SCCS had yet to become widely used outside Bell Labs at that time, and the technology needed for third-party repositories was a decade in the future. Jerz's work depended on Stanford's ability to recover data from backups of Don Woods' student account from 30 years ago, an impressive feat of system administration! Don Woods vouches for the recovered code, so there is no doubt about its authenticity.
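To illustrate what makes such a record tamper-evident, here is a minimal Python sketch of a hash-chained log, the general idea behind modern version control systems such as git. It illustrates the principle only; it is not how SCCS itself worked:

```python
import hashlib

def chain(entries):
    """Build a log of (entry, digest) pairs in which each digest
    covers both the entry and the previous digest."""
    digest = ""
    log = []
    for entry in entries:
        digest = hashlib.sha256((digest + entry).encode()).hexdigest()
        log.append((entry, digest))
    return log

def verify(log):
    """Recompute the chain; any edit to an earlier entry changes
    every later digest, so tampering is evident."""
    digest = ""
    for entry, recorded in log:
        digest = hashlib.sha256((digest + entry).encode()).hexdigest()
        if digest != recorded:
            return False
    return True

history = chain(["first edition text", "second edition text"])
print(verify(history))                 # True: the record is intact
history[0] = ("sanitized text", history[0][1])
print(verify(history))                 # False: the record was altered
```

Altering an early entry invalidates every later digest, so the only way to tamper silently is to rewrite every copy of everything that follows.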

How likely is it that other institutions could recover 30-year-old student files? Absent such direct testimony, how credible would allegedly recovered student files that old be? Yet these files have provided important evidence for the birth of an entire genre of fiction.