Last month marked the 60th anniversary of plastic digital magnetic tape storage.
The IBM 726 digital tape drive was introduced in 1952 to provide larger amounts of digital storage for ... IBM’s 701 computer. As Tom Coughlin points out:
The current generation, LTO 5, has 1.5 TB of native storage capacity. According to Fujifilm and others in the industry, magnetic tape technology can eventually support storage capacities of several 10’s of TB in one cartridge. Much of these increases in storage capacity will involve the introduction of technologies pioneered in the development of magnetic disk drives.

While disk drives are coming to the end of the current generation of technology, Perpendicular Magnetic Recording (PMR), tape is still using the previous generation, Giant Magnetoresistive (GMR) heads. So tape technology is about 8 years behind disk, making this forecast quite credible. And it is possible that by about 2020 the problems of Heat Assisted Magnetic Recording (HAMR) will have been sorted out, so that tape's transition to it will be less traumatic than it is being for the disk industry.
Below the fold I look at disk.
Buzzfeed has a nice infographic showing the consolidation of disk drive manufacturing. Notice how small Toshiba's share of the market is; there are really only two companies left. I predicted a year ago that this would result in increased margins and less rapid reduction in cost per byte. I was right. The spike in disk prices caused by the Thai floods has receded, but prices are still about 60% above pre-flood levels, let alone the levels that would have been expected absent the floods. But look at what has happened to margins:
WD and Seagate both reported record profits this past quarter. In Q1 2011, Western Digital reported net profit of $146M against sales of $2.3B while Seagate recorded $2.7B in revenue and $93 million in net income. That’s a net profit margin of 6% and 3%, respectively. For this past quarter, Western Digital reported sales of $3B (thanks in part to its acquisition of Hitachi) and a net income of $483 million, while Seagate hit $4.4B in revenue and $1.1B in profits. Net margin was 16% and 25% respectively.

As Joel Hruska says:
With profit margins like this, the hard drive manufacturers are going to be loath to cut prices. After years of barely making profits, the Thailand floods are the best excuse ever to drive record income for a few quarters.
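A minimal check of the net-margin arithmetic behind the quoted figures (note that $1.1B of profit on $4.4B of revenue works out to a 25% margin):

```python
# Net margin = net income / revenue, using the figures quoted above.
def net_margin(net_income, revenue):
    """Net profit margin as a percentage."""
    return 100.0 * net_income / revenue

# Q1 2011 (pre-flood)
wd_2011 = net_margin(146e6, 2.3e9)       # ~6%
seagate_2011 = net_margin(93e6, 2.7e9)   # ~3%

# This past quarter (post-flood)
wd_now = net_margin(483e6, 3.0e9)        # ~16%
seagate_now = net_margin(1.1e9, 4.4e9)   # 25%
```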
[Graph. Source: IHS iSuppli]
Finally, Simon Sharwood at The Register has an interesting post on the problems of getting large amounts of data into the cloud. The available bandwidth means that upload rates are inadequate: optimistically, 8 hours per TB. He recounts:
The Register spoke to one cloudy migrant who (after requesting anonymity) told us they borrowed a desktop network attached storage (NAS) device from their new cloud provider, bought another, uploaded data to the devices and then despatched a staffer on a flight to the cloud facility. The NASes were carry-on luggage and the travelling staffer cradled them on their lap during the flight.

Amazon and Rackspace, among others, formalise such arrangements:
Amazon Web Services' import/export service was among the first such services and offers the chance to ingest up to 16TB of data, provided it is no more than 14 inches high by 19 inches wide by 36 inches deep (8Us in a standard 19 inch rack) and weighs less than 50 pounds.

These services might help those who are determined to throw money at the storage problem by using cloud storage. The extent to which they are throwing money can be seen from Backblaze's blog. They document their build cost, admittedly from before the floods, for a 135TB 4U storage pod at under $8K. Applying a 60% increase to the disk cost changes this to $11.2K. Storing 135TB in S3's Reduced Redundancy Storage (S3-RRS) costs more than $10K in the first month. Backblaze claims that their 3-year cost of ownership of a Petabyte is under $100K, whereas a Petabyte in S3-RRS costs almost $2.5M.
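A rough sanity check of these numbers; note the per-GB rate below is derived from the "135TB for more than $10K a month" figure cited here, not from Amazon's actual price list:

```python
# Rough comparison using the figures cited above. The implied S3-RRS
# rate is back-calculated from the post's numbers (an assumption),
# not taken from Amazon's published pricing.
GB_PER_TB = 1000  # decimal units assumed

pod_capacity_tb = 135
pod_cost_post_flood = 11_200     # $8K build cost plus 60% on the drives
s3_rrs_first_month = 10_000      # $ for 135 TB, per the post

implied_rate = s3_rrs_first_month / (pod_capacity_tb * GB_PER_TB)
# implied_rate is about $0.074 per GB-month

# Naive 3-year S3-RRS cost for one Petabyte at that flat rate:
pb_3yr_s3 = implied_rate * 1000 * GB_PER_TB * 36
# roughly $2.7M, the same ballpark as the ~$2.5M figure above
```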
But the enthusiasts for cloud storage should stop to think that shipping storage pods is not only needed to get stuff into the cloud; it is also needed to get stuff out of the cloud if, for example, you decide that you can no longer afford the rent.
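The "8 hours per TB" upload figure implies a sustained link speed well beyond what most sites could devote to a migration; a quick back-of-the-envelope check (decimal TB assumed):

```python
# What sustained link speed does "8 hours per TB" imply?
tb_bytes = 1e12            # decimal TB assumed
seconds = 8 * 3600

rate_MBps = tb_bytes / seconds / 1e6   # ~34.7 MB/s sustained
rate_Mbps = rate_MBps * 8              # ~278 Mbit/s sustained
```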
One interesting sidelight on Backblaze's numbers is that if it costs them $8K to build 135TB of storage, and their 3-year cost of ownership is $100K, then non-hardware costs such as power, cooling, space, staff and so on are about 40% of the total cost of ownership, with hardware costs about 60%.
This is pretty much the inverse of the numbers we have been using from the San Diego Supercomputer Center and Google, which say that hardware is about 1/3 of the 3-year cost of ownership, and non-hardware costs are 2/3.
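The 60/40 split can be checked directly from Backblaze's own figures:

```python
# Hardware vs non-hardware split implied by Backblaze's numbers.
pod_cost = 8_000      # $ per pod (pre-flood build cost, per the post)
pod_tb = 135          # TB per pod
pb_tb = 1000          # TB per Petabyte, decimal

pods_per_pb = pb_tb / pod_tb              # ~7.4 pods per PB
hw_cost_pb = pods_per_pb * pod_cost       # ~$59K of hardware per PB
tco_3yr_pb = 100_000                      # Backblaze's claimed 3-year cost

hw_fraction = hw_cost_pb / tco_3yr_pb     # ~0.59, i.e. hardware ~60%
```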