Tuesday, September 21, 2010

How Green Is Digital Preservation?

At iPRES 2010 I was on a panel chaired by Neil Grindley of JISC entitled "How Green is Digital Preservation?". Each of the panelists gave a very brief introduction; below the fold is an edited version of mine.


Not Green Right Now

Here are numbers from Vijay Gill, who runs Google's internal network, on the components of the 3-year cost of owning a bunch of servers in a co-location data center. Space, power and cooling (mostly power and cooling) account for 58% of the total. Even if you keep all your preservation copies on tape, simply keeping one access copy online has a significant carbon footprint.
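As a sanity check on numbers like these, here is a back-of-envelope sketch in Python. The server price, power draw, PUE, electricity rate and rack rent are all illustrative assumptions of mine, not Gill's figures, but they land in the same ballpark:

    # Back-of-envelope 3-year cost of owning one co-located server.
    # Every number here is an illustrative assumption, not Gill's data.
    server_cost = 2000.0           # $, hardware amortized over 3 years
    power_draw_kw = 0.3            # average draw at the wall
    pue = 2.0                      # cooling etc. roughly doubles IT power
    kwh_price = 0.10               # $ per kWh
    space_cost_per_year = 300.0    # $ per year for the rack slot
    hours = 3 * 365 * 24           # 26,280 hours in 3 years

    energy = power_draw_kw * hours * pue * kwh_price        # ~$1,577
    space_power_cooling = energy + 3 * space_cost_per_year  # ~$2,477
    total = server_cost + space_power_cooling               # ~$4,477
    print(f"{space_power_cooling / total:.0%} of total")    # ~55%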

One saving grace has been that disks have been exponentially increasing their capacity, at constant cost and constant power, for a long time. Thus, accumulating preserved content has not meant increasing carbon emissions. Unfortunately, it seems likely that this exponential decrease in dollars and watts per byte will stop in the near future.
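To see why constant power per drive combined with exponentially growing capacity kept the footprint flat, consider a toy calculation; the 8W drive and 40%/year growth rate are rough historical assumptions, not measurements:

    # Watts per byte falls as long as capacity grows at constant power.
    watts_per_drive = 8.0     # assumed roughly constant per generation
    capacity_bytes = 1e12     # assumed 1TB drive in year 0
    kryder_rate = 1.4         # assumed ~40%/year capacity growth
    for year in range(5):
        print(year, watts_per_drive / capacity_bytes, "W/byte")
        capacity_bytes *= kryder_rate
    # If kryder_rate falls to 1.0, power grows in step with the archive.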

But the good news is that these are big problems for the IT industry as a whole, and solving big problems for the IT industry is a way to get very, very rich. So people work at solving them.

Change Is On The Way

One of the papers that gained a well-deserved place at last year's SOSP (the ACM Symposium on Operating Systems Principles) described FAWN, the Fast Array of Wimpy Nodes: a system consisting of a large number of very cheap, low-power nodes, each containing some flash memory and the kind of system-on-a-chip found in consumer products like home routers. The paper compared a network of these nodes to the PC-and-disk based systems that companies like Google and Facebook currently use. The FAWN network could answer the same kinds of queries that current systems do, at the same speed, while consuming two orders of magnitude less power.

"FAWN couples low-power embedded CPUs to small amounts of local flash storage, and balances computation and I/O capabilities to enable efficient, massively parallel access to data. ... FAWN clusters can handle roughly 350 key-value queries per Joule of energy - two orders of magnitude more than a disk-based system"
"... small-object random-access workloads are ... ill-served by conventional clusters ... two 2GB DIMMs consume as much energy as a 1TB disk. The power draw of these clusters is ... up to 50% of the three-year total cost of owning a computer."
David Andersen et al., "FAWN: A Fast Array of Wimpy Nodes", SOSP, Oct. 2009
You very rarely see an engineering result two orders of magnitude better than the state of the art. I expect that the power reductions FAWN-like systems provide will drive their rapid adoption in data centers. After all, eliminating half the 3-year cost of ownership is a big enough deal to be disruptive.
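To put the quoted 350 key-value queries per Joule in perspective, here is the comparison as simple arithmetic; the disk-node figures are my assumptions for illustration, not measurements:

    # Queries per Joule: assumed disk-based node vs. FAWN's quoted figure.
    disk_node_watts = 250.0    # assumed whole-node power draw
    disk_node_qps = 1000.0     # assumed key-value queries per second
    disk_q_per_joule = disk_node_qps / disk_node_watts   # 4 queries/Joule

    fawn_q_per_joule = 350.0   # quoted by Andersen et al.
    print(fawn_q_per_joule / disk_q_per_joule)   # ~88x: two orders of magnitude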

In case you don't believe either me or the FAWN team, here is startup SeaMicro's recent announcement of their first product, which uses one of the four FAWN techniques and still achieves a 75% reduction in power consumption. They were written up in the New York Times in June.

Why You Get Performance And Energy Efficiency
Year    Time to read entire disk (seconds)
1990        240
2000        720
2006      6,450
2009      8,000
2013     12,800
This table helps explain what is going on; an interesting article at ACM Queue includes a much prettier graph making the same point. Both plot the time it would take to read the entire contents of a state-of-the-art disk against the year, and make it clear that, although disks have been getting bigger rapidly, they haven't been getting correspondingly faster. In effect, the stored data has been getting further and further away from the code. There's a fundamental reason for this: the data rate depends on the inverse of the diameter of a bit, but the capacity depends on the inverse of the area of a bit.
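The trend in the table is easy to reproduce. For example, the 2009 row follows from assuming a then state-of-the-art 2TB drive with a 250MB/s sustained transfer rate (both figures are my assumptions):

    # Time to read an entire disk = capacity / sustained transfer rate.
    capacity_bytes = 2e12         # assumed 2TB drive, circa 2009
    rate_bytes_per_sec = 250e6    # assumed 250MB/s sustained
    print(capacity_bytes / rate_bytes_per_sec)   # 8000 seconds

    # Shrink a bit's linear dimension by a factor k: capacity scales with
    # areal density (k squared) but data rate only with linear density (k),
    # so the time to read the whole disk grows roughly as k.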

The reason that FAWN-like systems can outperform traditional PCs with conventional hard disks is that the bandwidth between the data and the CPU is so high, and the amount of data per CPU so small, that all of it can be examined in a very short time.
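The effect of that balance is easy to quantify. With assumed, purely illustrative node sizes and bandwidths:

    # Time for one node to scan all of its local data.
    pc_bytes, pc_bw = 1e12, 100e6         # assumed 1TB disk at ~100MB/s
    wimpy_bytes, wimpy_bw = 4e9, 100e6    # assumed 4GB flash at ~100MB/s
    print("PC node:   ", pc_bytes / pc_bw, "sec")         # 10,000 sec
    print("wimpy node:", wimpy_bytes / wimpy_bw, "sec")   # 40 sec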

Architectural Implications

There has been some reluctance to switch to Hadoop and similar software infrastructures so as to take advantage of large clusters of cheap computers. Doing so is a lot of work, and it requires giving up the familiar ACID properties of database systems. But recently, a team of researchers from Yale published a blog post discussing a paper (PDF) showing how a traditional database with ACID properties could be implemented on a cluster with competitive performance. This should remove the major barrier to realizing the energy-saving potential of FAWN.

This has important implications not just for the green-ness of digital preservation, but also for the architecture of digital preservation systems. Over time, these systems have accumulated more, bigger and more comprehensive indexes alongside the actual data. There are two reasons: conventional database systems encourage this but, more importantly, as the data gets further and further away from the code, the penalty for actually having to look at the data to answer a question becomes prohibitive.

The problem with this is that, as the system gets bigger and more complex, the probability that the index exactly matches reality gets lower and lower. FAWN-architecture systems, with a better balance between processing and storage, and with the storage very close to the code, can examine the data itself to answer a question instead of depending on indexes to get the speed they need. The answer will reflect reality at the time it is given. Derived metadata such as indexes and format databases will no longer be necessary.
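As a minimal sketch, here is what answering a query by examining the data rather than an index might look like; the shards and the predicate are hypothetical placeholders, not any existing system's API:

    # Answer a query by scanning node-local data in parallel, rather than
    # consulting a possibly-stale central index.
    from concurrent.futures import ThreadPoolExecutor

    def scan_node(shard, predicate):
        # Each wimpy node examines its own small shard directly.
        return [record for record in shard if predicate(record)]

    def query(shards, predicate):
        # One scan per node; the answer reflects the data as it is now.
        with ThreadPoolExecutor() as pool:
            parts = pool.map(lambda s: scan_node(s, predicate), shards)
        return [record for part in parts for record in part]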

[Edited to reduce the width of the graph]

1 comment:

  1. I think Google has already implemented a solution for this, albeit an expensive one in the short run. They have covered the ample roof of their Mountain View office with solar PV panels. Ref:
    http://spectrum.ieee.org/energy/environment/the-greening-of-google

    Doing the same for data centers would mitigate much of the current downside of conventional energy use.

    Lucky Balaraman
