Thursday, February 6, 2014

Worth Reading

I'm working on a long post about a number of interesting developments in storage, but right now they are happening so fast that I have to keep rewriting it. In the meantime, follow me below the fold for links to some recent posts on other topics that are really worth reading.

First, an excellent and thorough explanation from Cory Doctorow of why Digital Rights Management is such a disaster, not just for archives but for everyone. This is a must-read even for people who are used to the crippling effects of DRM and the DMCA on preservation, because Cory ends by proposing a legal strategy to confront them. It would be one thing if there were major benefits to offset the many downsides of DRM, but there aren't. Ernesto at TorrentFreak pointed to research by Laurina Zhang of the University of Toronto:
It turns out that consumers find music with DRM less attractive than the pirated alternative, and some people have argued that it could actually hurt sales. A new working paper published by University of Toronto researcher Laurina Zhang confirms this.
For her research Zhang took a sample of 5,864 albums from 634 artists and compared the sales figures before and after the labels decided to drop DRM.
“I exploit a natural experiment where the four major record companies – EMI, Sony, Universal, and Warner – remove DRM on their catalogue of music at different times to examine whether relaxing an album’s sharing restrictions impacts the level and distribution of sales,” she explains.
This is the first real-world experiment of its kind, and Zhang’s findings show that sales actually increased after the labels decided to remove DRM restrictions. “I find that the removal of DRM increases digital sales by 10%,” Zhang notes.
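Zhang's identification comes from the fact that the labels dropped DRM at different times, which supports a difference-in-differences comparison across albums. As a rough illustration of that kind of estimation (not her actual data or code; the file and column names here are hypothetical), a two-way fixed-effects regression in Python might look like:

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical album-month panel: one row per album per month, with
    # log sales and an indicator that switches on once the album's label
    # has removed DRM.
    df = pd.read_csv("album_sales_panel.csv")

    # Difference-in-differences with album and month fixed effects; the
    # coefficient on post_drm_removal estimates the change in log sales
    # after DRM removal, with standard errors clustered by album.
    model = smf.ols(
        "log_sales ~ post_drm_removal + C(album_id) + C(month)",
        data=df,
    ).fit(cov_type="cluster", cov_kwds={"groups": df["album_id"]})

    print(model.params["post_drm_removal"])  # ~0.10 would match Zhang's 10%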
Second, three posts on the effects of neo-liberal policies. Stanford's Chris Bourg gave a talk at Duke entitled The Neoliberal Library: Resistance is not futile, making the case that:
Neoliberalism is toxic for higher education, but research libraries can & should be sites of resistance.
On the LSE's Impact of Social Sciences blog, Joanna Williams of the University of Kent makes similar arguments with a broader focus in a post entitled ‘Value for money’ rhetoric in higher education undermines the value of knowledge in society:
For some students, value for money may just mean getting what they want – satisfaction in the short term and a high level qualification – for minimal effort. The role of universities should be to challenge this assumption. But the notion that educational quality can be driven upwards by a market based on perceived value for money is more likely to lead to a race to the bottom in terms of educational standards as branding, reputation management and the perception of quality all become more important than confronting students with intellectual challenge.
On the same blog, Eric Kansa of Open Context has a post entitled It’s the Neoliberalism, Stupid: Why instrumentalist arguments for Open Access, Open Data, and Open Science are not enough. The key to his argument is "be careful what you measure, because that is what you will get":
One’s position as a subordinate in today’s power structures is partially defined by living under the microscope of workplace monitoring. Does such monitoring promote conformity? The freedom, innovation, and creativity we hope to unlock through openness requires greater toleration for risk. Real and meaningful openness means encouraging out-of-the-ordinary projects that step out of the mainstream. Here is where I’m skeptical about relying upon metrics-based incentives to share data or collaborate on GitHub.
By the time metrics get incorporated into administrative structures, the behaviors they measure aren’t really innovative any more!
Worse, as certain metrics grow in significance (meaning – they’re used in the allocation of money), entrenched constituencies build around them. Such constituencies become interested parties in promoting and perpetuating a given metric, again leading to conformity.
Third, I've been pointing to the importance of text-mining the scientific literature since encountering Peter Murray-Rust (link found by Memento!) at the 2007 workshop that started this blog. Nature reports on Elsevier's move to allow researchers to download articles in XML in bulk for this purpose, under some restrictive conditions. Other publishers will follow:
CrossRef, a non-profit collaboration of thousands of scholarly publishers, will in the next few months launch a service that lets researchers agree to standard text-mining terms and conditions by clicking a button on a publisher’s website, a ‘one-click’ solution similar to Elsevier’s set-up.
But these terms and conditions may be preempted:
The UK government aims this April to make text-mining for non-commercial purposes exempt from copyright, allowing academics to mine any content they have paid for.
On the liblicense mailing list, Michael Carroll of American University argues that in the US subscribers already have the right to text-mine under copyright, but even if he is right this is yet another case where contract trumps copyright. Effective text-mining requires access to the content in bulk, not as individual articles, and ideally in a form better suited to the purpose than PDF. Bulk access to XML is the important part of what Elsevier is providing. Publishers' traditional defenses against bulk downloading make the theoretical right to text-mine without permission in the US, and perhaps shortly in the UK, pretty much irrelevant.
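To see why bulk XML access matters, consider what even the simplest mining job looks like. Here is a minimal sketch, assuming a local directory of article XML files whose full text lives in <body> elements; real publisher schemas, including Elsevier's, use their own namespaces and tag names:

    import glob
    import xml.etree.ElementTree as ET
    from collections import Counter

    # Count term frequencies across a local corpus of article XML files.
    # The "corpus/*.xml" path and the <body> tag are assumptions for the
    # sketch, not the actual Elsevier delivery format.
    counts = Counter()
    for path in glob.glob("corpus/*.xml"):
        root = ET.parse(path).getroot()
        for body in root.iter("body"):
            text = "".join(body.itertext()).lower()
            counts.update(text.split())

    print(counts.most_common(20))

Even this trivial job is impractical if each article must be fetched one at a time through a web interface designed for human readers, which is the point of bulk access.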

Elsevier hasn't helped its relations with researchers recently, issuing take-down notices for papers their authors had posted to academia.edu. Of course, it was stupid of the authors to post Elsevier's PDF rather than their own, but it wasn't good PR. See here for an interesting discussion of the question, which I thought was settled, of whether transfer of copyright transfers the rights to every version leading up to the version transferred.

Fourth, a brief but important note on the concept of the Internet by one of those present at its birth, David Reed.

Finally, I was very skeptical of the New York Times paywall, even though early experience was encouraging. Ryan Chittum at CJR reported last August that:
But for now, the pile of paywall money is still growing and for the first time, the Times Company has broken out how big it is: More than $150 million a year, including the Boston Globe, ... To put that $150 million in new revenue in perspective, consider that the Times Company as a whole will take in roughly $210 million in digital ads this year. And that $150 million doesn’t capture the paywall’s positive impact on print circulation revenue. Altogether, the company has roughly $360 million in digital revenue.
One of my financial advisors writes:
On September 23rd the New York Times’ Board of Directors elected to reinstate the company’s quarterly dividend at a rate of $.04/share. ... This decision was based on the continued and dramatic improvement in the company’s balance sheet, which is now net cash positive and shows almost $1 billion in cash and equivalents, along with improved operating margins and cash flows. In the past two years sales of non-core assets have totaled approximately $700 million as management continues to focus on the “New York Times” brand. ... The company continues to have the capacity to generate cash flow of $2.00 - $2.50/share which should drive the business value, and dividend capacity, even higher.
I may have been wrong.

2 comments:

  1. On the data-mining topic, Twitter is now accepting applications from researchers for access to their data. Researchers have until March 15 to submit a proposal. This should take some of the pressure off the Library of Congress to allow access to their Twitter collection.

  2. On the DRM topic, analysis of earnings from best-selling independent vs. Big 5 publisher authors on Amazon shows that DRM roughly halves e-book sales. The entire report is worth reading.
