Comments on DSHR's Blog: Talk at Seagate

David. (2016-05-06):
The 17M racks/day the SKA would have needed in 1966 is equivalent to over 10 square kilometers per day.

Chris Rusbridge (2016-05-03):
Looking at the para beginning "Let's consider a service whose data grows at 25% per year..." (from memory, sorry). By the end of 10 years the total data is about 7.5 times the original data load. You suggest that at some point the old data won't be worth keeping; that might be true (though it would depend on the actual value of the data). The trouble is that after 10 years the original data costs only 3.5% of the total capital spent so far for its second space renewal, although it would cost 27% of the new storage bought that year (not to be sneezed at!). By year 15, that original data costs only 9% of the data-renewal cost (although 10-year-old data at that point would be 22% of the renewal cost).

The point is that when exponential growth is occurring, it's recent data that's your big problem, not old data. This point was rammed home to us by the mathematicians at the institute of technology where I worked, in about 1980, when we started a program of weeding files based on age to save on new storage costs. Needless to say, we changed tack!
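A minimal sketch of the arithmetic behind this point, assuming pure 25%-per-year compounding from an initial load of one unit and ignoring the media-cost declines and renewal schedules that the comment's specific percentages depend on. Under pure compounding the 10-year multiple works out to about 9.3x (the comment's ~7.5x figure was quoted from memory):

```python
# Why recent data dominates under exponential growth.
# Assumption: data grows 25% per year from an initial load of 1 unit;
# storage-cost declines and media-renewal cycles are ignored.

g = 1.25  # annual growth factor (25%/year)

total_year10 = g ** 10              # total data after 10 years, in units of the original load
new_in_year10 = g ** 10 - g ** 9    # data added during year 10
recent_share = new_in_year10 / total_year10  # one year's additions as a share of the total
last3_share = 1 - g ** -3           # share of the total less than 3 years old

print(f"total after 10 years: {total_year10:.2f}x the original load")
print(f"year-10 additions as share of total: {recent_share:.0%}")
print(f"data under 3 years old as share of total: {last3_share:.0%}")
```

With constant fractional growth, each year's additions are a fixed fraction of the cumulative total (1 - 1/g, here 20%), and data under three years old is roughly half of everything stored, which is why weeding the oldest files saves so little.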