Erasure coding is a very important technique for long-term storage, as it can significantly reduce the amount of raw storage needed to provide a given level of reliability. There were two papers on erasure coding:
- Fast Erasure Coding for Data Storage: A Comprehensive Study of the Acceleration Techniques by Zhou et al studies ways to accelerate erasure coding computations:
Various techniques have been proposed in the literature to improve erasure code computation efficiency, including optimizing bitmatrix design, optimizing computation schedule, common XOR operation reduction, caching management techniques, and vectorization techniques. These techniques were largely proposed individually previously, and in this work, we seek to use them jointly. In order to accomplish this task, these techniques need to be thoroughly evaluated individually, and their relation better understood. Building on extensive test results, we develop methods to systematically optimize the computation chain together with the underlying bitmatrix.
This is useful work, but the computational load of erasure coding isn't a factor in long-term storage (a minimal sketch of the XOR-heavy encoding these techniques accelerate follows the list).
- OpenEC: Toward Unified and Configurable Erasure Coding Management in Distributed Storage Systems by Li et al addresses how improved erasure codes can be deployed in practice:
integrating new erasure coding solutions into existing distributed storage systems is a challenging task and requires non-trivial re-engineering of the underlying storage workflows. We present OpenEC, a unified and configurable framework for readily deploying a variety of erasure coding solutions into existing distributed storage systems. OpenEC decouples erasure coding management from the storage workflows of distributed storage systems, and provides erasure coding designers with configurable controls of erasure coding operations through a directed-acyclic-graph-based programming abstraction.
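As background for the Zhou et al techniques, here is a minimal sketch in Python of the computation they accelerate: with a bitmatrix code (for example, Cauchy Reed-Solomon reduced to GF(2)), encoding a stripe is nothing but XORs of data sub-blocks selected by the rows of the bitmatrix. The bitmatrix below is a made-up toy rather than one from the paper; scheduling, common-XOR elimination, caching and vectorization all target the XOR loop in encode().

```python
# Toy bitmatrix-driven XOR encoding; the bitmatrix is illustrative,
# not a real MDS code from the paper.
import os
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte strings together, byte by byte."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

def encode(bitmatrix, data_packets):
    """Each bitmatrix row selects the data packets XORed into one parity
    packet. Schedule optimization and common-XOR elimination (d1 ^ d3 is
    shared by both rows here) reduce the XOR count of this loop."""
    return [xor_blocks([data_packets[j] for j, bit in enumerate(row) if bit])
            for row in bitmatrix]

# Toy parameters: 4 data packets, 2 parity packets, 1 KiB each.
data = [os.urandom(1024) for _ in range(4)]
bitmatrix = [
    [1, 1, 0, 1],   # parity 0 = d0 ^ d1 ^ d3
    [0, 1, 1, 1],   # parity 1 = d1 ^ d2 ^ d3
]
parity = encode(bitmatrix, data)
print(len(parity), len(parity[0]))   # 2 parity packets of 1024 bytes each
```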
The major difficulty in modeling and designing long-term storage systems is that the failures against which protection is required are often correlated in ways that are hard to discover or predict. If, as the Backblaze data show, different drive models exhibit different failure patterns, the correlation between them will be low. Kadekodi et al observe that:
Despite such differences, the degree of redundancy employed in cluster storage systems for the purpose of long term data reliability (e.g., the degree of replication or erasure code parameters) is generally configured as if all of the devices have the same reliability. Unfortunately, this approach leads to configurations that are overly resource-consuming, overly risky, or a mix of the two. For example, if the redundancy settings are configured to achieve a given data reliability target (e.g., a specific mean time to data loss (MTTDL)) based on the highest AFR of any device make/model (e.g., S-4 from Fig. 1), then too much space will be used for redundancy associated with data that is stored fully on lower AFR makes/models (e.g., H-4A). Continuing this example, our evaluations show that the overall wasted capacity can be up to 16% compared to uniform use of erasure code settings stated as being used in real large-scale storage clusters [13, 25, 26, 28] and up to 33% compared to using 3-replication for all data — the direct consequence is increased cost, as more disks are needed. If redundancy settings for all data are based on lower AFRs, on the other hand, then data stored fully on higher-AFR devices is not sufficiently protected to achieve the data reliability target.
Their HeART (Heterogeneity-Aware Redundancy Tuner) system is:
an online tool for guiding exploitation of reliability heterogeneity among disks to reduce the space overhead (and hence the cost) of data reliability. HeART uses failure data observed over time to empirically quantify each disk group’s reliability characteristics and determine minimum-capacity redundancy settings that achieve specified target data reliability levels. For the Backblaze dataset of 100,000+ HDDs over 5 years, our analysis shows that using HeART’s settings could achieve data reliability targets with 11–33% fewer HDDs, depending on the baseline one-scheme-for-all settings. Even when the baseline scheme is a 10-of-14 erasure code whose space-overhead is already low, HeART further reduces disk space used by up to 14%.
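To make the heterogeneity argument concrete, here is a toy sketch, not HeART's actual analysis (which learns each disk group's AFR online from observed failures and uses a proper reliability model): pick the lowest-overhead (k, m) Reed-Solomon setting per disk make/model subject to a crude stripe-loss estimate. The AFR values, candidate schemes, repair window and loss target below are all invented for illustration.

```python
# Toy heterogeneity-aware redundancy selection; all numbers are
# illustrative assumptions, not HeART's model or Backblaze's data.
from math import comb

REPAIR_DAYS = 3                                            # assumed repair window
CANDIDATES = [(6, 3), (10, 2), (10, 4), (12, 2), (14, 3)]  # (data, parity)

def stripe_loss_prob(afr, k, m, repair_days=REPAIR_DAYS):
    """P(more than m of the k+m disks in a stripe fail before repair),
    assuming independent failures spread evenly over the year."""
    p = afr * repair_days / 365.0        # per-disk failure prob in the window
    n = k + m
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(m + 1, n + 1))

def cheapest_scheme(afr, target=1e-12):
    """Lowest storage-overhead (m/k) candidate whose estimated
    stripe-loss probability still meets the target."""
    ok = [(m / k, (k, m)) for k, m in CANDIDATES
          if stripe_loss_prob(afr, k, m) <= target]
    return min(ok)[1] if ok else None

# Hypothetical per-make AFRs: under this toy model the low-AFR group can
# use a lower-overhead code than the high-AFR group, the kind of
# per-group tailoring behind the savings quoted above.
for make, afr in {"H-4A": 0.004, "S-4": 0.03}.items():
    print(make, cheapest_scheme(afr))
```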
Backblaze is out with their latest hard drive stats, and Chris Mellor has commentary on them.