Wednesday, June 27, 2012

Cloud vs. Local Storage Costs

In earlier posts I pointed out that, for long-term use, "affordable cloud storage" wasn't affordable compared to local storage.
I recently used my prototype economic model of storage to make a more detailed comparison. As usual, the necessary caveat is that this is an unvalidated, prototype model. The actual numbers should be treated with caution but the general picture it provides is plausible.

The model computes the endowment (the lump sum that, invested up front, pays for the storage) needed to store 135TB of data for 100 years with a 98% chance of not running out of money, at various Kryder's Law rates, for two cases:
  • Amazon's S3, starting with their current prices, and assuming no other costs of any kind.
  • Maintaining three copies in RAID-6 local storage, starting with BackBlaze's published hardware costs, adjusted for the 60% increase in disk prices the Thai floods have caused since publication, and following our normal assumption (based on work from the San Diego Supercomputer Center) that media costs are 1/3 of the total cost of ownership.
The graph that the model generates shows that cloud storage is competitive with local storage only if (a) its costs are dropping at least at the same rate as local storage, and (b) both costs are dropping at rates above 30%/yr. Neither is currently the case. If we use the historical 3%/yr at which S3's prices have dropped, and the current disk industry projection of 20%/yr, then the endowment needed for cloud storage is 5 times greater than that needed for local storage.
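
For intuition, here is a minimal sketch in Python of the kind of calculation involved. It is not the prototype model itself, which among other things follows data as it migrates between technologies, and every input number here is a placeholder rather than a figure behind the graph.

```python
import random

def endowment(tb=135, years=100, price_tb_yr=100.0, kryder=0.20,
              kryder_sd=0.05, interest=0.02, target=0.98, trials=10_000):
    """Monte Carlo sketch: for each trial, draw a noisy Kryder-rate path,
    discount each year's storage bill to present value, and sum. The
    endowment is the target-percentile cost across trials.

    price_tb_yr is a placeholder; for the local case the post builds the
    cost up from BackBlaze hardware prices x 1.6 (Thai floods) x 3
    (hardware is ~1/3 of TCO), for each of 3 replicas.
    """
    costs = []
    for _ in range(trials):
        price, pv = price_tb_yr, 0.0
        for year in range(years):
            pv += price * tb / (1 + interest) ** year
            # Kryder's Law: the per-TB price falls by a noisy annual rate
            price *= 1 - random.gauss(kryder, kryder_sd)
        costs.append(pv)
    costs.sort()
    return costs[int(target * trials) - 1]

# Comparing endowment(kryder=0.20) (the disk industry projection) with
# endowment(kryder=0.03) (S3's historical price drops) reproduces the
# qualitative gap described above.
```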

UPDATE 3 Sep 12: As I was working to apply our prototype model to Amazon's recently announced Glacier archival storage service, I found two bugs in the simulation that produced the graph above. Fortunately, they don't affect the point I was making, which is that S3 is too expensive for long-term storage, because both bugs tended to underestimate how expensive it was. Here is a corrected graph, which predicts that S3 would not be competitive with local storage at any Kryder rate.

7 comments:

  1. Hmmm. Cost numbers without any associated availability/integrity numbers don't seem all that useful. (After all, if I don't care about availability or data loss, why do I need RAID?) Presumably S3 offers better availability due to multiple locations with no shared SPOFs (except for software).

  2. S3 appears to maintain 3 geographically separated replicas. They say:

    "Objects are redundantly stored on multiple devices across multiple facilities in an Amazon S3 Region. To help ensure durability, Amazon S3 PUT and COPY operations synchronously store your data across multiple facilities before returning SUCCESS."

    and:

    "Designed to sustain the concurrent loss of data in two facilities."

    In order for the local storage model to be comparable with S3, the model I used includes 3 geographically separated replicas. I said:

    "Maintaining three copies in RAID-6 local storage"

    I should have added "geographically separated" but that doesn't affect the numbers.

I don't agree that S3 offers better reliability than the model I used for local storage. It is possible that it would offer better availability, as you suggest, due to better management and infrastructure. But availability, as opposed to reliability, just isn't that relevant to digital preservation.

  3. What about physical integrity and safety? I probably don't want to keep my RAID-6 copies in the closet at home and the homes of some geographically distant friends. Theft, flood, fire, earthquake, war, neglect,... Large datacenters like Amazon's have substantial physical security and safety measures. What fraction of S3's costs is due to those measures?

  4. Thanks for the comment, Fernando, but I am not assuming SOHO technology here. For the alternative to S3, I am using BackBlaze's build costs for the 4U rackmount storage servers they use in their petabyte-scale data centers. Hands up anyone who has a 4U rackmount at home.

    And I am assuming that those build costs represent 1/3 of the total cost of ownership. The other 2/3 represents the costs of operating in the San Diego Supercomputer Center's petabyte-scale data center, as reported in their paper on SDSC's storage cost history. This proportion (1/3 hardware, 2/3 data center costs) roughly matches the numbers reported by Vijay Gill for Google's data centers; see the sketch at the end of this comment. Does Google spend enough on physical security and safety for your requirements? More to the point, do they spend as much as Amazon?

    Of course, it is true that if you have much less to store than the 135TB example here, you might well decide to use SOHO technology, say putting a Drobo at a couple of your friends' houses. And they would be much less secure than in a data center. But they would also be even cheaper. You certainly wouldn't spend twice as much running them as you did buying them. So you could afford a much higher level of replication and still come out ahead of S3. That's the basic LOCKSS concept: the more copies you have, the less care you have to take with each copy.
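
    Two bits of arithmetic from this comment, as a hedged sketch (the dollar figure and loss rates are hypothetical, purely for illustration):

    ```python
    # (1) The cost build-up above: hardware is ~1/3 of total cost of ownership.
    build = 7_500                   # hypothetical server build cost, USD
    tco = build * 3                 # so TCO ~= 3x the hardware cost
    datacenter_share = tco - build  # remaining ~2/3: power, space, staff, security

    # (2) The LOCKSS trade-off, assuming independent replica failures
    # (optimistic: correlated risks like a shared software bug break this).
    def prob_all_lost(p_loss, n):
        """Chance that all n replicas are lost in the same period."""
        return p_loss ** n

    # Three careful copies at 1%/yr loss each: 0.01**3 = 1e-6
    # Five cheap copies at 5%/yr loss each:    0.05**5 ~= 3.1e-7
    # Cheaper per copy, yet lower overall risk.
    ```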

  5. 100 years is a long time; the only technology that I'd count on lasting that long is paper.

    100 years is also a long time to assume that Amazon will be in business. What if you had pinned your hopes on CompuServe or AOL?

    There's a wide range of costs depending on how much and how fast you need access to the data. I've read about an architecture called MAID (massive array of idle disks) where you spin down most of your storage and spin it back up again only when you need it; a back-of-the-envelope sketch of the savings follows at the end of this comment. Other non-RAID architectures would be worth looking at if your access times can be longer.

    Another unanticipated cost is the cost of securing this data against active attack; it's very different if you want a read-only legacy of boring data vs. a read/write collection of data of value to a motivated hacker.
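
    As a back-of-the-envelope illustration of the MAID idea (the per-disk wattages are hypothetical):

    ```python
    def maid_relative_power(active_fraction, active_w=8.0, idle_w=0.8):
        """Average draw of a MAID array relative to keeping every disk
        spinning, for a given fraction of disks active at any moment."""
        avg = active_fraction * active_w + (1 - active_fraction) * idle_w
        return avg / active_w

    # With 10% of disks spun up: ~19% of the always-on power bill.
    ```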

  6. Thank you, Edward.

    As regards my use of 100 years, this is just a way of discussing storing data "for ever" and how assumptions about the future cost of storage technologies affect the projected cost of doing so. I do not assume that any one technology (or company) will survive for more than a small fraction of that time. Here, among other places, you can find a discussion of the way my model follows a unit of data as it migrates from technology to technology or, equivalently, from provider to provider; a sketch of that migration arithmetic follows at the end of this comment.

    I'm well aware of technologies such as MAID, having co-authored a paper on data lifetimes on idle disks (PDF). They don't make a significant difference to the results I'm discussing here.

    I'm also well aware that designing systems to resist attack typically adds cost.
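
    To make the migration point concrete, here is a deterministic variant of the earlier sketch with a per-migration charge added (all figures are placeholders, not the model's inputs):

    ```python
    def pv_with_migrations(tb=135, years=100, price_tb_yr=100.0, kryder=0.20,
                           interest=0.02, migrate_every=5, migrate_cost_tb=10.0):
        """Present-value cost when the data hops to new technology every
        migrate_every years, paying a per-TB charge at each hop."""
        pv, price = 0.0, price_tb_yr
        for year in range(years):
            pv += price * tb / (1 + interest) ** year
            if year % migrate_every == migrate_every - 1:
                # a migration falls due this year; discount it like the rent
                pv += migrate_cost_tb * tb / (1 + interest) ** year
            price *= 1 - kryder  # deterministic Kryder's Law decline
        return pv
    ```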

  7. I should have found this earlier. Back in January Amar Kapadia posted a very informative, detailed analysis of OpenStack Swift vs. S3 costs.
