Comments on DSHR's Blog: SSD vs. HDD (Updated)

At <i>The Register</i>, industry veteran <a href="https://blocksandfiles.com/2019/09/27/using-qlc-for-cold-storage-is-a-fools-errand/" rel="nofollow">Hubbert Smith makes a related point</a>:

"QLC is only 25 per cent better capacity than TLC, and with every generation the industry trades slower and slower performance with poorer write endurance. With just 25 per cent better capacity than TLC, QLC shows diminishing returns."

— David., 2019-09-30 07:35

<a href="https://arstechnica.com/gadgets/2019/09/new-intel-toshiba-ssd-technologies-squeeze-more-bits-into-each-cell/" rel="nofollow"><i>SSDs are on track to get bigger and cheaper thanks to PLC technology</i></a> reports that Intel and Toshiba are announcing 5 bits/cell NAND technology, promising a (somewhat less than) 25% improvement in density over QLC flash. The article points out one of the downsides of PLC:

"Unfortunately, while PLC SSDs will likely be bigger and cheaper, they'll probably also be slower. Modern SSDs mostly use TLC storage with a small layer of SLC write cache. As long as you don't write too much data too fast, your SSD writes will seem as blazingly fast as your reads—for example, Samsung's consumer drives are rated for up to 520MB/sec.
But that's only as long as you keep inside the relatively small SLC cache layer; once you've filled that and must write directly to the main media in real time, things slow down enormously."

Another downside is that the error rate of PLC will be worse than QLC's, necessitating more bits devoted to error correction. A third downside is that endurance will be reduced, meaning more of the write bandwidth is consumed by internal refresh cycles. So, overall, the potential improvement is likely less than 20%, and the effect on the nearline layer will thus likely be relatively small.

— David., 2019-09-28 15:32

Chris Mellor's <a href="https://blocksandfiles.com/2019/09/06/amazon-drops-infrequent-access-file-storage-prices-by-44-per-cent/" rel="nofollow"><i>Amazon drops infrequent access file storage prices</i></a> reports that:

[AWS' Steve Roberts] cited “Industry analysts such as IDC, and our own analysis of usage patterns confirms, that around 80 per cent of data is not accessed very often. The remaining 20 per cent is in active use.”

AWS cut prices on EFS IA by 44% when Lifecycle Management is used to automate moving files that haven't been accessed recently from Standard to Infrequent Access. The two tiers are transparent to applications, but:

"The data remains accessible within the same file system namespace albeit with a slightly higher latency; double digit ms vs single digit ms"

This is probably the difference between SSD and HDD. Standard costs $0.30/GB-month, and IA costs $0.025/GB-month, or 8.3% of Standard.
If this were solely due to the hardware cost difference between SSD and HDD, SSD would cost 12x HDD, so this is plausible.

— David., 2019-09-09 16:23

IOPS measure the latency of small, random transfers. SSDs are good at this, which is why they live in a tier <b>above</b> the nearline tier of the storage hierarchy. If the nearline tier of your storage hierarchy is primarily serving small, random transfers, there is something seriously wrong with your storage system's design. IOPS/$ or IOPS/TB are not useful criteria for the nearline tier, since it should not be serving lots of small random I/Os. Ideally, its workload should be mostly large, contiguous writes, making write bandwidth the important criterion.

Also, see Chris Mellor's <a href="https://www.theregister.co.uk/2019/08/05/seagate_spins_off_a_bit_of_cash_from_slowing_disk_drive_business/" rel="nofollow"><i>Seagate spins off a bit of cash from slowing disk drive business</i></a>:

"Seagate's MACH.2 dual-actuator tech will begin shipping later this calendar year, starting "around" the 20TB capacity point, the firm's CEO Dave Mosley has confirmed.

Competitor Western Digital is also developing dual-actuator technology to increase disk drive IO rates."

— David., 2019-09-08 12:48
«18 and 20TB drives»

Disk drives like that still have only one arm, so IOPS-per-TB are terrible. In practice I reckon that drives over 2TB are best thought of as sequential tapes capable of quick but infrequent random access, rather than as random access devices. Flash SSDs instead have so many IOPS that even a large number of TBs of capacity still results in a good IOPS-per-TB ratio.

— Blissex2, 2019-09-08 06:27
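The IOPS-per-TB gap can be illustrated with a rough sketch. The device figures below (a ~100-IOPS single-actuator nearline HDD and a ~500K-IOPS enterprise NVMe SSD) are assumed, order-of-magnitude values, not measured specs:

```python
def iops_per_tb(iops: float, capacity_tb: float) -> float:
    """Random-access capability per unit of stored data."""
    return iops / capacity_tb

# Illustrative (assumed) figures, not vendor specifications.
hdd = iops_per_tb(iops=100, capacity_tb=20)      # single-actuator 20TB drive
ssd = iops_per_tb(iops=500_000, capacity_tb=8)   # enterprise NVMe SSD

print(f"HDD: {hdd:.0f} IOPS/TB")     # 5 IOPS/TB
print(f"SSD: {ssd:,.0f} IOPS/TB")    # 62,500 IOPS/TB

# A dual-actuator drive (e.g. Seagate's MACH.2, mentioned above)
# roughly doubles the HDD figure, but the gap stays at four
# orders of magnitude.
print(f"Dual-actuator HDD: {iops_per_tb(200, 20):.0f} IOPS/TB")
```

Under these assumptions the SSD delivers four orders of magnitude more IOPS per stored TB, which is the sense in which large HDDs behave like "sequential tapes capable of quick but infrequent random access".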