Below the fold I'd like to draw your attention to two papers and a post worth reading.
Cappello et al. have published an update to their seminal 2009 paper Towards Exascale Resilience, called Towards Exascale Resilience: 2014 Update, reviewing progress in several critical areas over the past five years. I've referred to the earlier paper as an example of both the importance and the difficulty of fault-tolerance at scale. As scale increases, faults become part of the normal state of the system; they cannot be treated as exceptions. It is nevertheless disappointing that the update, like its predecessor, deals only with exascale computation, not also with exascale long-term storage. Its discussion of storage is limited to the performance of short-term storage for checkpointing. That is a critical issue, but a complete exascale system will need a huge amount of longer-term storage, and the engineering problems in providing it should not be ignored.
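To see why faults are the normal state at scale, a back-of-the-envelope calculation is enough. This sketch uses illustrative numbers of my own, not figures from the paper; the per-node MTBF and node counts are assumptions:

```python
# Illustrative sketch (my numbers, not Cappello et al.'s): if node failures
# are independent, the system's mean time between failures is roughly the
# per-node MTBF divided by the node count.

HOURS_PER_YEAR = 24 * 365

def system_mtbf_hours(node_mtbf_years, node_count):
    """System MTBF in hours, assuming independent node failures."""
    return node_mtbf_years * HOURS_PER_YEAR / node_count

for nodes in (1_000, 100_000, 1_000_000):
    mtbf_h = system_mtbf_hours(node_mtbf_years=5, node_count=nodes)
    print(f"{nodes:>9,} nodes: one failure every {mtbf_h * 60:9.1f} minutes")

# With a (hypothetical) 5-year per-node MTBF, 1,000 nodes fail about every
# 43.8 hours, but a million nodes fail roughly every 2.6 minutes -- faults
# are routine, not exceptional, and checkpoint overhead dominates.
```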
Dave Anderson of Seagate first alerted me to the fact that, in the medium term, manufacturing economics make it impossible for flash to displace hard disk as the medium for most long-term near-line bulk storage. Fontana et al. of IBM Almaden have now produced a comprehensive paper, The Impact of Areal Density and Millions of Square Inches (MSI) of Produced Memory on Petabyte Shipments of TAPE, NAND Flash, and HDD Storage Class Memories, which uses detailed industry data on flash, disk and tape shipments, technologies and manufacturing investments from 2008 to 2012 to reinforce this message. They also estimate the scale of investment needed to increase production to meet an estimated 50%/yr growth in data. The mismatch between the estimates of data growth and the actual shipments of media on which to store it is so striking that they are forced to cast doubt on the growth estimates. It is clear from their numbers that the industry will not make the mistake of over-investing in manufacturing capacity, driving prices, and thus margins, down. This provides significant support for our argument that Storage Will Be Much Less Free Than It Used To Be.
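To get a feel for how quickly such a mismatch compounds, here is a sketch with invented round numbers, not the paper's data; the initial shipment volume and both growth rates are purely illustrative assumptions:

```python
# Illustrative sketch with made-up round numbers (not Fontana et al.'s data):
# demand compounding at 50%/yr vs. media shipments growing more slowly.

def project(initial, annual_growth, years):
    """Yearly values under compound growth, year 0 through `years`."""
    return [initial * (1 + annual_growth) ** y for y in range(years + 1)]

YEARS = 5
demand = project(initial=1000, annual_growth=0.50, years=YEARS)   # EB of new data (assumed)
shipped = project(initial=1000, annual_growth=0.25, years=YEARS)  # EB of media shipped (assumed)

for year, (d, s) in enumerate(zip(demand, shipped)):
    print(f"year {year}: demand {d:7.0f} EB, shipped {s:7.0f} EB, "
          f"shortfall {d - s:6.0f} EB")

# Under these assumptions demand is ~7.6x the base after 5 years while
# shipments are only ~3.1x: either the growth estimates are too high,
# or a lot of the data will simply not get stored.
```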
Henry Newman has a post up at Enterprise Storage entitled Ensuring the Future of Data Archiving, discussing the software architecture that future data archives will require. Although I agree almost entirely with Henry's argument, I think he doesn't go far enough. We need to fix the system, not just the software. I will present my much more radical view of future archival system architecture in a talk at the Library of Congress' Designing Storage Architectures workshop. The text will go up here in a few days.