I'm David Rosenthal, and this is a place to discuss the work I'm doing in Digital Preservation.
Tuesday, March 15, 2011
Bleak Future of Publishing
In my JCDL2010 keynote last June I spoke about the bleak future for publishers in general, and academic journal publishers such as Elsevier in particular. As I expected, I was met with considerable skepticism. Two recent signs indicate that I was on the right track:
- A few days ago an analyst's report on Reed Elsevier pointed out, as I did, that Elsevier cannot generate organic growth from its existing market because its customers don't have the money.
- A fascinating blog interview between two self-publishing e-book authors reveals that the Kindle is providing them with a profitable business model. At the time, John Locke held the #1, #4 and #10 spots on the Amazon Top 100, with another three books in the top 40. Joe Konrath held the #35 spot. Of the top 100, 26 slots were held by independent authors. John and Joe had been charging $2.99 per download, of which Amazon gave them 70%. When they dropped the price to $0.99 per download, of which Amazon gives them only 35%, not just their sales but also their income exploded; John is making $1800/day from $0.99 downloads. Kevin Kelly predicts that in 5 years the average price of e-books will be $0.99. As he points out:
$1 is near to the royalty payment that an author will receive on, say, a paperback trade book. So in terms of sales, whether an author sells 1,000 copies themselves directly, or via a traditional publishing house, they will make the same amount of money.
If publishers were doing all the things they used to do to promote books, maybe this would not be a problem. But they aren't. Tip of the hat to Slashdot.
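As a rough back-of-the-envelope check on these numbers (the paperback royalty rate and list price below are assumptions for illustration, not figures from the interview), the per-copy income from a $0.99 self-published download is in the same ballpark as a traditional paperback royalty, and John's $1800/day implies several thousand downloads a day:

```python
# Back-of-the-envelope comparison of per-copy author income.
# The 35%/70% Amazon shares and the $1800/day figure come from the post above;
# the 8% royalty rate and $12.95 list price for a trade paperback are
# illustrative assumptions.

kindle_low  = 0.99 * 0.35    # ~$0.35 per copy at $0.99 with a 35% share
kindle_high = 2.99 * 0.70    # ~$2.09 per copy at $2.99 with a 70% share
paperback   = 12.95 * 0.08   # ~$1.04 per copy for a traditional paperback (assumed)

print(f"$0.99 Kindle:  ${kindle_low:.2f} per copy")
print(f"$2.99 Kindle:  ${kindle_high:.2f} per copy")
print(f"Paperback:     ${paperback:.2f} per copy")

# $1800/day at ~$0.35 per copy is on the order of 5,000 downloads a day.
print(f"Implied downloads/day: {1800 / kindle_low:,.0f}")
```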
How Few Copies?
I spoke at the Screening the Future 2011 conference at the Netherlands Beeld en Geluid in Hilversum, on the subject of "How Few Copies?". Below the fold is an edited text of the talk with links to the resources.
Tuesday, March 8, 2011
ACM/IEEE copyright policy
Matt Blaze is annoyed at the ACM and IEEE copyright policy. So am I. In an update to his post he reports:
If the "prominent member" wants the full details, they are available in the ACM's own Transactions on Computing Systems Vol. 23 No. 1, February 2005, pp 2-50.
A prominent member of the ACM asserted to me that copyright assignment and putting papers behind the ACM's centralized "digital library" paywall is the best way to ensure their long-term "integrity". That's certainly a novel theory; most computer scientists would say that wide replication, not centralization, is the best way to ensure availability, and that a centrally-controlled repository is more subject to tampering and other mischief than a decentralized and replicated one.
This is deeply ironic, because the ACM bestowed both a Best Paper award and an ACM Student Research award on Petros Maniatis, Mema Roussopoulos, TJ Giuli, David S.H. Rosenthal, Mary Baker and Yanto Muliadi, "Preserving Peer Replicas By Rate-Limited Sampled Voting", 19th ACM Symposium on Operating Systems Principles (SOSP), Bolton Landing, NY, October 2003, for demonstrating that the "prominent member" is wrong and Matt is right.
If the "prominent member" wants the full details, they are available in the ACM's own Transactions on Computing Systems Vol. 23 No. 1, February 2005, pp 2-50.
Deduplicating Devices Considered Harmful
In my brief report from FAST11 I mentioned that Michael Wei's presentation of his paper on erasing information from flash drives (PDF) revealed that at least one flash controller was, by default, doing block-level deduplication of data written to it. I e-mailed Michael about this, and learned that the SSD controller in question is the SandForce SF-1200. This sentence is a clue:
DuraWrite technology extends the life of the SSD over conventional controllers, by optimizing writes to the Flash memory and delivering a write amplification below 1, without complex DRAM caching requirements.
This controller is used in SSDs from, for example, Corsair, ADATA and Mushkin.
It is easy to see the attraction of this idea. Flash controllers need a block re-mapping layer, called the Flash Translation Layer (FTL) (PDF) and, by enhancing this layer to map all logical blocks written with identical data to the same underlying physical block, the number of actual writes to flash can be reduced, the life of the device improved, and the write bandwidth increased. However, it was immediately obvious to me that this posed risks for file systems. Below the fold is an explanation.
File systems write the same metadata to multiple logical blocks as a way of avoiding a single block failure causing massive, or in some cases total, loss of user data. An example is the superblock in UFS. Suppose you have one of these SSDs with a UFS file system on it. Each of the multiple alternate logical locations for the superblock will be mapped to the same underlying physical block. If any of the bits in this physical block goes bad, the same bit will go bad in every alternate logical copy of the superblock, defeating the purpose of keeping the extra copies.
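To make the failure mode concrete, here is a minimal sketch of a content-deduplicating FTL; it is not the SandForce implementation, whose details are proprietary, and all names and structures are illustrative assumptions. Logical block addresses whose data hash to the same digest share one physical block, so writing the same superblock image to several alternate logical locations consumes a single physical block, and corrupting that one physical block corrupts every logical copy.

```python
import hashlib

class DedupFTL:
    """Toy model of a content-deduplicating Flash Translation Layer.

    Logical blocks with identical contents are mapped to a single shared
    physical block. This illustrates the idea only; it is not the actual
    SandForce SF-1200 design.
    """

    def __init__(self):
        self.logical_map = {}      # logical block address -> physical block id
        self.physical = {}         # physical block id -> bytes actually stored
        self.by_digest = {}        # content digest -> physical block id
        self.physical_writes = 0   # how many blocks really hit the flash

    def write(self, lba, data: bytes):
        digest = hashlib.sha256(data).digest()
        pba = self.by_digest.get(digest)
        if pba is None:                      # new content: allocate a physical block
            pba = len(self.physical)
            self.physical[pba] = data
            self.by_digest[digest] = pba
            self.physical_writes += 1
        self.logical_map[lba] = pba          # duplicate content: just remap

    def read(self, lba) -> bytes:
        return self.physical[self.logical_map[lba]]

ftl = DedupFTL()
superblock = b"UFS-SUPERBLOCK" + bytes(498)          # pretend 512-byte block
alternate_lbas = [32, 163872, 327712, 491552]        # illustrative alternate locations

for lba in alternate_lbas:
    ftl.write(lba, superblock)

# 4 logical writes, 1 physical write: write amplification below 1 for this workload.
print("logical writes:", len(alternate_lbas), "physical writes:", ftl.physical_writes)

# A single corrupted physical block now corrupts *every* alternate superblock.
shared_pba = ftl.logical_map[alternate_lbas[0]]
corrupted = bytearray(ftl.physical[shared_pba])
corrupted[0] ^= 0x01                                 # flip one bit
ftl.physical[shared_pba] = bytes(corrupted)

print(all(ftl.read(lba) != superblock for lba in alternate_lbas))   # True: all copies bad
```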
I discussed this problem with Kirk McKusick, and he discussed it with the ZFS team. In brief, the fact that devices sometimes do this is very bad news indeed, especially for file systems such as ZFS that are intended to deliver the level of reliability that large file systems need.
Thanks to the ZFS team, here is a more detailed explanation of why this is a problem for ZFS. For critical metadata (and optionally for user data) ZFS stores up to 3 copies of each block. The checksum of each block is stored in its parent, so that ZFS can verify the integrity of its metadata before using it; if corrupt metadata is detected, it can find an alternate copy and use that. Here are the problems (a sketch after the list illustrates the checksum constraint):
- Because all the copies are deduplicated onto the same physical block, if the stored metadata gets corrupted the corruption will apply to all copies, so recovery is impossible.
- To defeat this, we would need to put a random salt into each of the copies, so that each block would be different. But the multiple copies are written by scheduling multiple writes of the same data in memory to different logical block addresses on the device. Changing this to copy the data into multiple buffers, salt them, then write each one once would be difficult and inefficient.
- Worse, it would mean that the checksum of each of the copies of the child block would be different; at present they are all the same. Retaining the identity of the copy checksums would require excluding the salt from the checksum. But ZFS computes the checksum of every block at a level in the stack where the kind of data in the block is unknown. Losing the identity of the copy checksums would require changes to the on-disk layout.
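Here is a minimal sketch of that constraint, using a deliberately simplified model of ZFS-style block pointers (the real ditto-block and block-pointer machinery is much richer; the structures and names below are assumptions for illustration). The parent holds one checksum that must match every copy of the child; salting the copies makes them distinct on the device, but then either the single parent checksum no longer matches all copies, or the salt must be excluded from the checksum, which the checksum layer cannot do without knowing each block's layout.

```python
import hashlib
import os

def checksum(data: bytes) -> bytes:
    # Stand-in for ZFS's block checksum (e.g. fletcher4 or sha256).
    return hashlib.sha256(data).digest()

# --- Today: up to 3 identical copies, one checksum in the parent block pointer ---
child = b"some critical metadata block"
copies = [child, child, child]                 # "ditto" copies, byte-identical
parent_checksum = checksum(child)              # a single checksum covers all copies

assert all(checksum(c) == parent_checksum for c in copies)
# ...but a deduplicating device collapses these identical copies onto one physical block.

# --- Proposed fix: salt each copy so the device sees three different blocks ---
salted_copies = [os.urandom(8) + child for _ in range(3)]

# Problem 1: the copies now have different checksums, so one parent checksum
# cannot validate all of them; the block pointer would need one checksum per copy.
print(len({checksum(c) for c in salted_copies}))    # 3, not 1

# Problem 2: keeping a single checksum means checksumming only the unsalted
# payload, i.e. skipping the salt. But the checksum layer sees an opaque block
# and does not know where the payload starts for each block type, so recording
# that would require an on-disk format change.
payload_checksums = {checksum(c[8:]) for c in salted_copies}
print(len(payload_checksums))                       # 1, but only because we knew the offset
```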