I really enjoy these meetings, but much of the previous ones has been a dialog of the deaf. People doing preservation said "what I care about is the cost of storing data for the long term". Vendors said "look at how fast my shiny new hardware can access your data". The vendors' mindset is based on Kryder's Law: storage gets about 40% cheaper each year, so if you can afford to keep data for a few years you can afford to keep it forever. Once you've paid for the short-term part of storage, the long-term part is free, and thus uninteresting to vendors. The interesting thing at this meeting is that even the vendors are talking about cost.
When I spoke here last year about the economic model of long-term storage we were beginning work on, I started by pointing to a paper by Mark Kryder and Chang Soo Kim assessing the prospects for solid state storage versus disk through 2020. They too assumed that Kryder's Law would continue at 40%/yr. I got quite a bit of pushback against my assertion that Kryder's Law was already starting to flatten out. But a few weeks later the floods in Thailand destroyed 40% of the world's disk manufacturing capacity. Disk prices doubled overnight and are not expected to return to pre-flood levels until 2014. There is a consensus at this meeting that the days of 40%/yr Kryder rates are over, at least for disk. To understand the impact of this, we need a model of long-term storage economics.
Bill McKibben's Rolling Stone article "Global Warming's Terrifying New Math" uses three numbers to illustrate the looming climate crisis. Here are three numbers that illustrate the looming crisis in the cost of long-term storage:
- According to IDC, the demand for storage each year grows about 60%.
- According to IHS iSuppli, the bit density on the platters of disk drives will grow no more than 20%/year for the next 5 years.
- According to computereconomics.com, IT budgets in recent years have grown between 0%/year and 2%/year.
We need to focus not on how to get at data faster, but on the triangle between the blue and green lines. Closing it requires some combination of an increase in the IT budget by orders of magnitude, a radical reduction in the rate at which we store new data, or a radical reduction in the cost of storage. I know which one preservationists would prefer.
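To see how quickly these three rates diverge, here is a back-of-the-envelope calculation. It is only a sketch: it assumes cost per byte falls in proportion to the density increase (about 20%/yr), demand grows 60%/yr, and the IT budget grows at the optimistic end, 2%/yr.

```python
# Compound the three rates from the list above to show the growing gap.
# Assumption: cost/byte falls at the same 20%/yr rate that density rises.
demand, cost_per_byte, budget = 1.0, 1.0, 1.0
for year in range(1, 6):
    demand *= 1.60          # IDC: demand grows ~60%/yr
    cost_per_byte /= 1.20   # IHS iSuppli: density up at most 20%/yr
    budget *= 1.02          # computereconomics.com: budgets up 0-2%/yr
    spend = demand * cost_per_byte      # relative cost of storing the demand
    print(year, round(spend / budget, 2))
```

With these assumptions, the cost of storing the incoming data nearly quadruples relative to the budget in just five years.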
I've spent the last year looking at the numbers behind the "affordable cloud storage" hype. We ran a LOCKSS box in Amazon's cloud and collected detailed cost numbers. I looked at the pricing history of cloud storage services. And I used our prototype economic model to compare cloud and local storage costs. We just submitted a detailed report to the Library of Congress, which funded the work.
We model a chunk of data through time as it migrates from one generation of storage media to its successors. The goal is to compute the endowment, the capital needed to fund the chunk's preservation for, in our case, 100 years. The price per byte of each media generation is set by a Kryder's Law parameter. Each technology also has running costs, and costs for moving in and moving out. Interest rates are set each year using a model based on the last 20 years of inflation-protected Treasuries. I should add the caveat that this is still a prototype, so the numbers it generates should not be relied on. But the shapes of the graphs seem highly plausible.
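The structure of the calculation can be sketched as follows. This is a minimal deterministic toy, not our Monte Carlo prototype: the interest rate is fixed rather than modeled on Treasuries, and all the parameter values are illustrative assumptions, not the model's actual inputs.

```python
# Toy endowment calculation: discounted cost of keeping one chunk of data
# for 100 years as media prices decline at the Kryder rate. All parameter
# values are illustrative assumptions.
def endowment(initial_cost, kryder_rate, interest_rate,
              media_life=4, running_frac=0.1, migration_frac=0.2, years=100):
    """Present value of `years` of storage for one chunk of data."""
    total, price = 0.0, initial_cost
    for year in range(years):
        discount = (1 + interest_rate) ** year
        if year % media_life == 0:                 # buy the next media generation
            total += price * (1 + migration_frac) / discount
        total += price * running_frac / discount   # power, space, admin
        price *= (1 - kryder_rate)                 # Kryder's Law price decline
    return total

# The endowment rises steeply as the Kryder rate falls:
for k in (0.40, 0.20, 0.10, 0.05):
    print(k, round(endowment(1000.0, k, 0.02), 2))
```

Even this toy reproduces the key behavior: the lower the Kryder rate, the larger the endowment, and the more sensitive it is to small changes in the rate.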
We need a baseline, the cost of local storage, to compare with cloud costs. It should bend over backwards to be fair to cloud storage. I don't know of a lot of good data to base this on; I use numbers from Backblaze, a PC backup service which publishes detailed build and ownership costs for their 4U 135TB storage pods. I take their 2011 build cost, and increase it to reflect the 60% increase in disk cost since the Thai floods. Based on numbers from San Diego Supercomputer Center and Google, I add running costs so that the hardware cost is only 1/3 of the total 3-year cost of ownership. Note that this is much more expensive than Backblaze's published running cost. I add move-in and move-out costs of 20% of the purchase price in each generation. Then I multiply the total by three to reflect three geographically separate replicas.
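The baseline build-up can be written out as simple arithmetic. The $7,400 figure is an approximation of Backblaze's published 2011 build cost for a 135TB pod; the other factors are the ones stated above.

```python
# Hedged arithmetic for the local-storage baseline over one 3-year
# media generation. The pod build cost is approximate.
pod_build_2011 = 7400.0         # approximate published 2011 build cost, 135TB
hw = pod_build_2011 * 1.60      # disk prices up 60% after the Thai floods
tco_3yr = hw * 3                # hardware is only 1/3 of the 3-year TCO
migration = hw * 0.20 * 2       # move-in plus move-out, 20% of purchase each
per_replica = tco_3yr + migration
total = per_replica * 3         # three geographically separate replicas
print(round(hw), round(per_replica), round(total))
```

That is roughly $120K per 3-year generation for three replicas of 135TB, before the Kryder-rate decline in later generations is applied.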
The result is this graph, plotting the endowment needed to have a 98% chance of not running out of money in 100 years against the Kryder rate. In the past, with Kryder rates in the 30-40% range, we were in the flatter part of the graph where the precise Kryder rate wasn't that important in predicting the long-term cost. As Kryder rates decrease, we move into the steep part of the graph, which has two effects:
- The cost increases sharply.
- The cost becomes harder to predict, because it depends strongly on the precise Kryder rate.
The result is this graph, showing that S3 is not competitive with local storage at any Kryder rate. But this comparison is misleading: it assumes that local storage and S3 experience the same Kryder rate.
Here is the history of the prices several major cloud storage services have charged since their launch:
- Amazon's S3 launched March '06 at $0.15/GB/mo and is now $0.125/GB/mo, a 3%/yr drop.
- Rackspace launched May '08 at $0.15/GB/mo and has not changed.
- Azure launched November '09 at $0.15/GB/mo and is now $0.14/GB/mo, a 3%/yr drop.
- Google launched October '11 at $0.13/GB/mo and has not changed.
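The annualized rates of decline implied by this history can be checked directly. The elapsed times below are approximations as of the time of writing, which is why the results come out a little under the rounded 3%/yr figures quoted above.

```python
# Annualized price decline implied by a launch price, a current price,
# and an approximate number of elapsed years.
def annual_drop(launch_price, current_price, years):
    return 1 - (current_price / launch_price) ** (1 / years)

print(round(annual_drop(0.15, 0.125, 6.5) * 100, 1))   # S3: Mar '06 onward
print(round(annual_drop(0.15, 0.14, 2.75) * 100, 1))   # Azure: Nov '09 onward
```

Rackspace and Google have not changed prices at all, so their rate is exactly 0%/yr.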
If local storage's Kryder rate matches IHS' 20% and if S3's is their historic 3% the endowment needed in S3 is more than 5 times larger than in local storage, and depends much more strongly on the Kryder rate. This graph raises two obvious questions.
First, why don't S3's prices drop as the cost of the underlying storage drops? The answer is that they don't need to. Their customers are locked-in by bandwidth charges. S3 has the bulk of the market with their current prices. Their competitors match or even exceed their prices. Why would Amazon cut prices?
Second, why is S3 so much more expensive than local storage? After all, even using S3's Reduced Redundancy Storage to store 135TB, you would pay in the first month almost enough to buy the hardware for one of Backblaze's storage pods. The answer is that, for the vast majority of S3's customers, it isn't that expensive. First, they are not in the business of long-term storage. Their data has a shelf-life much shorter than the life of the drives, so they cannot amortize across the full life of the media. Second, their demand for storage has spikes. By using S3, they avoid paying for the unused capacity to cover the spikes.
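The first-month claim above is easy to check roughly. The $0.09/GB/mo figure is my approximation of the Reduced Redundancy Storage price at the relevant volume tier, not an exact quote.

```python
# Rough check: one month of S3 Reduced Redundancy Storage for 135TB
# versus the hardware cost of one Backblaze pod. Both input prices
# are approximations.
rrs_month = 135_000 * 0.09      # 135TB for one month at ~$0.09/GB/mo (assumed)
pod_hw = 7400.0 * 1.60          # approximate pod build cost after the floods
print(round(rrs_month), round(pod_hw))
```

The two numbers come out within a few percent of each other, which is the point: one month of S3 rent is about one pod's worth of hardware.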
Long-term storage has neither of these characteristics, and this makes S3's business model inappropriate for long-term storage. Amazon recently admitted as much when they introduced Glacier, a product aimed specifically at long-term storage, with headline pricing between 5 and 12 times cheaper than S3.
To make sure that Glacier doesn't compete with S3, Amazon gave it two distinguishing characteristics. First, there is an unpredictable delay between requesting data and getting it. Amazon says this will average about 4 hours, but they don't commit to either an average or a maximum time. Second, the pricing for access to the data is designed to discourage access. There is a significant per-request charge, to motivate access in large chunks. Although you are allowed to access 5% of your data each month with no per-byte charge, the details are complex and hard to model.
As I understand it, if on any day of the month you exceed the pro-rated free allowance (i.e. about 0.17% depending on the month), you are charged as if you had sustained your peak hourly retrieval rate for the entire month. Thus, to model Glacier I had to make some fairly heroic assumptions:
- No accesses to the content other than for integrity checks.
- Accesses to the content for integrity checking are generated at a precisely uniform rate.
- Each request is for 1GB of content.
- One reserved AWS instance used for integrity checks.
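Under those assumptions the access-charge model reduces to a few lines. The prices below are my approximations of Glacier's rate card, and the free-tier test is the simplified reading described above, not the full pricing formula. Note one consequence: with 135TB stored, a 20-month integrity-check cycle reads exactly 5% of the data per month, just inside the free allowance.

```python
# Sketch of Glacier access charges under the "heroic" assumptions above.
# Prices are approximate; the free-tier check is a simplification.
GB_STORED = 135_000
CHECK_PERIOD_MONTHS = 20          # read everything once per check cycle
REQUEST_FEE = 0.05 / 1000         # assumed per-retrieval-request charge
FREE_FRACTION = 0.05              # 5% of stored data free per month

gb_per_month = GB_STORED / CHECK_PERIOD_MONTHS   # precisely uniform read rate
requests = gb_per_month                          # one request per 1GB
request_cost = requests * REQUEST_FEE
within_free = gb_per_month <= GB_STORED * FREE_FRACTION
print(gb_per_month, request_cost, within_free)
```

Any burstiness in the access pattern breaks the `within_free` condition and triggers the peak-rate charging, which is why the uniform-rate assumption above is so critical.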
But this is not an apples-to-apples comparison. Both local storage and S3 provide adequate access to the data. Glacier's long latency and severe penalties for unplanned access mean that, except for truly dark archives, it isn't feasible to use Glacier as the only repository. Even for dark archives, Glacier's access charges provide a very powerful lock-in. Getting data out of Glacier to move it to a competitor in any reasonable time-frame would be very expensive, easily as much as a year's storage.
Providing adequate access to justify preserving the content, and avoiding getting locked-in to Amazon, requires maintaining at least one copy outside Glacier. If we maintain one copy of our 135TB example in Glacier with 20-month integrity checks experiencing a 3% Kryder rate, and one copy in local storage experiencing a 20% Kryder rate (instead of the three in our earlier local storage examples), the endowment needed would be $517K. The endowment needed for three copies in local storage at a 20% Kryder rate would be $486K. Given the preliminary state of our economic model, this is not a significant difference.
Replacing two copies in local storage with one copy in Glacier would not significantly reduce costs; it might even increase them slightly. Its effect on robustness would be mixed, with 4 versus 3 total copies (effectively triplicated in Glacier, plus local storage) and greater system diversity, but at the cost of less frequent integrity checks. We conclude:
- It is pretty clear that services like S3 are simply too expensive to use for digital preservation. The reasons for this are mostly business rather than technical.
- Access and lock-in considerations make it very difficult for a digital preservation system to use Glacier as its only repository.
- Even with generous assumptions, it isn't clear that using Glacier to replace all but one local store reduces costs or enhances overall reliability. Systems that combine Glacier with local storage, or with other cloud storage systems, will need to manage accesses to the Glacier copy very carefully if they are not to run up large access costs.