Thursday, July 6, 2017

Archive vs. Ransomware

Archives perennially ask the question "how few copies can we get away with?"
This is a question I've blogged about in 2016, 2011 and 2010, when I concluded:
  • The number of copies needed cannot be discussed except in the context of a specific threat model.
  • The important threats are not amenable to quantitative modeling.
  • Defense against the important threats requires many more copies than against the simple threats, to allow for the "anonymity of crowds".
I've also written before about the immensely profitable business of ransomware. Recent events, such as WannaCrypt, NotPetya and the details of the NSA's ability to infect air-gapped computers, should convince anyone that ransomware is a threat to which archives are exposed. Below the fold I look into how archives should be designed to resist this credible threat.

Background

Before looking at the range of defenses an archive could deploy, some background on the ransomware threat.

What Is Ransomware?

Ransomware is a class of malware which, once it infects a system, typically behaves as follows:
  • It searches the network for other systems to which it can spread, either because they have vulnerabilities the ransomware knows how to exploit, or because credentials for those systems are available on the infected system.
  • It encrypts all data writable by the infected system with a unique key, and reports the key and the system's ID to the ransomware's headquarters.
  • It informs the user that their data has been encrypted, and that the user can obtain a key to decrypt it by paying a ransom, typically in Bitcoin.
Some ransomware operations, Cerber is an example, have a sterling reputation for customer service, and if paid are highly likely to deliver a key that will permit recovery of the data. Others are less professional and, through bugs, incompetence, or a get-rich-quick business model, may accept payment but be unable or unwilling to enable decryption. Of course, paying the ransom merely encourages the ransomware business, already worth by some estimates $75B/yr.

From the archive's point of view, ransomware is a threat similar to other forms of external or internal attack, such as a disgruntled sysadmin, or a catastrophic operator error. The consequence of infection can be the total loss of all stored data. I'm using ransomware as an example threat simply because it is timely and credible.

How Is Ransomware Delivered?

I've been asked "Archives don't have much money, so why would ransomware target one?" It is true that archives are less lucrative targets than FedEx, Maersk, SF Muni, Rosneft, WPP, the UK NHS and other recent victims of ransomware. But it is a misconception to think that ransomware is targeted at lucrative systems. For example, a nation might think that destroying the archive of another nation would be an appropriate way to express displeasure.

Like other forms of malware, ransomware is delivered not just by targeted means, such as phishing emails, but also by many different scattershot techniques including Web drive-bys, malicious advertising, compromised system updates, and in the case of WannaCry a network vulnerability. And, since recent ransomware exploits vulnerabilities from the NSA's vast hoard, it is exceptionally virulent. Once it gets a toehold in a network, it is likely to spread very rapidly.

Defenses

I now examine the various techniques for storing data to assess how well they defend against ransomware and related threats.

Single copy

Let us start by supposing that the archive has a single copy of the content in a filesystem on a disk. We don't need to invoke ransomware to know that this isn't safe. Failure of the disk will lose the entire archive; bit-rot affecting either or both of a file and its stored hash will corrupt that file.
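The limitation can be seen in a minimal fixity-check sketch (the helper name and the choice of SHA-256 are illustrative, not part of any particular archive's design): a stored hash lets a single copy detect corruption, but provides nothing to repair from.

```python
import hashlib

def fixity_check(content: bytes, stored_hash: str) -> bool:
    """Compare content against its stored SHA-256 hash.

    A mismatch detects corruption of either the content or the
    stored hash, but with a single copy nothing can be repaired.
    """
    return hashlib.sha256(content).hexdigest() == stored_hash

data = b"archived object"
stored = hashlib.sha256(data).hexdigest()
assert fixity_check(data, stored)             # intact copy passes
assert not fixity_check(b"bit-rot!", stored)  # corruption detected, not repaired
```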

Disk Mirror

One way to protect data is by directing each write to two identical disks, mirrors of each other. If one fails the data can be read from the other.

But when one fails the data is no longer protected, until the system is repaired. The safety of the data depends on the operator noticing the failure, replacing the failed disk, and copying the data from the good disk to the replacement before the good disk fails.

For well-administered systems the mean time to failure of a disk is long compared to the interval between operator checks, so if disks failed randomly and independently it would be unlikely that the good disk would fail during repair. Alas, much field evidence shows that failures are significantly correlated. For example, raised temperatures caused by a cooling system failure may cause disks to fail together.
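A back-of-envelope sketch of the independence argument, with purely illustrative numbers (the MTTF and repair window below are assumptions, not field data), shows why the naive model is so reassuring, and thus why the correlated-failure evidence matters:

```python
# Probability the surviving mirror fails during the repair window,
# under an exponential failure model with INDEPENDENT failures --
# the assumption the field evidence contradicts.
import math

mttf_hours   = 1_000_000  # assumed mean time to failure of one disk
repair_hours = 48         # assumed time to notice, replace and resync

# Exponential model: P(failure within t) = 1 - exp(-t / MTTF)
p_loss_during_repair = 1 - math.exp(-repair_hours / mttf_hours)
print(p_loss_during_repair)  # tiny -- if failures really were independent
```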

Disk mirroring doesn't protect against ransomware; the writes the malware uses to encrypt the data get to both halves of the mirror.

Filesystem Backup

Another common technique is to synchronize a master copy with a slave copy in a different filesystem on a different disk. If there is a failure in the master, the system can fail-over, promoting the slave to be the new master and having the operator (eventually) create a new slave with which it can be synchronized.

Although this technique appears to provide two filesystems, the synchronization process ensures that corruption (or encryption by ransomware) of the master is rapidly propagated to the slave. Thus it provides no protection against bit-rot or ransomware. Further, because both the master and the slave filesystems are visible to (and writable by) the same system, once it is compromised both are at risk.

Network Backup

There are two ways data can be backed up over a network to a separate system, push and pull. In push backup, data is written to a network filesystem by the system being backed up. This is equivalent to backing up to a local filesystem. Ransomware can write to, so will encrypt, the data in the network filesystem.

Pull backup is better. The remote system has read access to the system being backed up, which has no write access to the network file system. Ransomware cannot immediately encrypt the backup, but the pull synchronization process will overwrite the backup with encrypted data unless it can be disabled in time.
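One way a pull-backup script might buy that time is a sanity check before overwriting its copy; this is a sketch of the idea, with a hypothetical helper and an arbitrary threshold, not a description of any real backup tool:

```python
# Guard for a pull-backup: if an implausible fraction of files changed
# since the last pull, refuse to sync and alert an operator instead of
# propagating what may be ransomware encryption.

def safe_to_sync(old_hashes: dict, new_hashes: dict,
                 max_changed_fraction: float = 0.10) -> bool:
    """Return False if too many files changed since the last pull."""
    if not old_hashes:
        return True  # first run: nothing to compare against
    changed = sum(1 for path, digest in old_hashes.items()
                  if new_hashes.get(path) != digest)
    return changed / len(old_hashes) <= max_changed_fraction

last_run = {f"obj{i}": f"hash{i}" for i in range(10)}
one_changed = dict(last_run, obj0="hash-new")
assert safe_to_sync(last_run, one_changed)                        # 10% changed: plausible
assert not safe_to_sync(last_run, {p: "enc" for p in last_run})   # everything changed: refuse
```

The threshold trades safety against false alarms from legitimate bulk updates, which is why an operator, not the script, should make the final call.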

Both mirroring and backup have a replication factor of two; they consume twice the storage of a single copy.

RAID

Disk mirroring is technically known as RAID 1. RAID N for N > 1 is a way to protect data from disk failures using a replication factor less than two. Disk blocks are organized as stripes of S blocks. Each stripe contains D data blocks and P parity blocks, where D+P=S. The data can be recovered from any D of the blocks, so the RAID can survive P failures without losing data. For example, if D=4 and P=1, data is safe despite the loss of a single drive at a replication factor of 1.25.
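The stripe arithmetic can be written out as a one-line sketch:

```python
# Replication factor of a D-data, P-parity stripe layout:
# it stores D + P blocks per D blocks of data and survives P failures.

def raid_overhead(d: int, p: int) -> float:
    """Replication factor of a stripe with d data and p parity blocks."""
    return (d + p) / d

assert raid_overhead(4, 1) == 1.25  # the example in the text
assert raid_overhead(1, 1) == 2.0   # RAID 1 mirroring
assert raid_overhead(8, 2) == 1.25  # a RAID 6-like layout, same overhead
```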

RAID as such offers no protection against bit-rot. Some RAID systems provide the option of data scrubbing. If this is enabled, the RAID system uses a background task to identify individual bad blocks and repair them before they are detected as the result of a user read. Data scrubbing can prevent some forms of bit-rot, typically at the cost of some performance. Anecdotally it is rarely enabled.

But, since the content still appears in a single filesystem, any compromise of the system, for example by ransomware, risks total loss. Further, disk capacity has grown while transfer speed and the unrecoverable bit error rate (UBER) have failed to keep pace. The time needed to fill a replacement disk once the operator notices a failure, and the amount of data that must be read without error to do so, have grown to the point where single-parity RAID (S-D=P=1) is no longer viable.
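A rough calculation, using illustrative drive parameters rather than any vendor's specifications, shows the problem:

```python
# Why single-parity RAID stopped being viable: rebuild a failed drive
# in a D=4, P=1 stripe and estimate the chance of hitting an
# unrecoverable read error (UBER) on the surviving drives.
# All numbers below are assumptions for illustration.

capacity_bytes   = 10e12   # assumed 10 TB drive
transfer_rate    = 150e6   # assumed 150 MB/s sustained
uber             = 1e-14   # assumed unrecoverable errors per bit read
surviving_drives = 4       # the drives that must be read in full

rebuild_hours = capacity_bytes / transfer_rate / 3600
bits_read     = capacity_bytes * 8 * surviving_drives
p_read_error  = 1 - (1 - uber) ** bits_read

print(f"rebuild ~{rebuild_hours:.0f} hours")
print(f"P(unrecoverable error during rebuild) ~{p_read_error:.2f}")
```

With these numbers the rebuild takes the better part of a day, and an unrecoverable read error during it is more likely than not, hence the move to double parity and beyond.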

Erasure Coding

RAID is a form of erasure coding, but more advanced systems (such as IBM's Cleversafe) use erasure coding to spread the content across multiple systems in a network rather than multiple disks in a system. This can greatly reduce the correlation between media failures. Since the erasure-coded storage appears to applications as a filesystem, it provides no protection against ransomware or other application system compromises.

Two Independent Copies

Why is it that none of the approaches above defend against ransomware? The reason is that none provides independent replicas of the data. Each system has a single point from which the ransomware can encrypt all copies.

Suppose the archive maintains two independent copies, independent in the sense that they are separate in geographic, network and administrative terms. No-one has credentials allowing access to both copies. Although both copies may have been originally ingested from the same source, there is no place from which both copies can subsequently be written or deleted. Now the ransomware has to infect both replicas nearly simultaneously, before the operators notice and take the other replica off-line.

Three Independent Copies

By themselves, two independent copies do not protect against corruption more subtle than wholesale encryption. It is often assumed that storing hashes together with the data will permit detection of, and recovery from, corruption. But this is inadequate. As I wrote in SHA-1 is Dead:
There are two possible results from re-computing the hash of the content and comparing it with the stored hash:
  • The two hashes match, in which case either:
    • The hash and the content are unchanged, or
    • An attacker has changed both the content and the hash, or
    • An attacker has replaced the content with a collision, leaving the hash unchanged.
  • The two hashes differ, in which case:
    • The content has changed and the hash has not, or
    • The hash has changed and the content has not, or
    • Both content and hash have changed.
The stored hashes are made of exactly the same kind of bits as the content whose integrity they are to protect. The hash bits are subject to all the same threats as the content bits.
For example, if an attacker were to modify both the data and the hash on one of two replicas, the archive would be faced with two different versions each satisfying the hash check. Which is correct? With 3 independent copies, this can be decided by an election, with the replicas voting on which version is correct.
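The election can be sketched in a few lines (the helper name is illustrative; real systems such as LOCKSS vote on hashes via a protocol, not in-memory lists):

```python
# With three independent replicas, a corrupted (content, hash) pair
# that is internally consistent can still be outvoted by the majority.
from collections import Counter

def elect(replicas: list[bytes]) -> bytes:
    """Return the version held by a strict majority of replicas."""
    winner, votes = Counter(replicas).most_common(1)[0]
    if votes <= len(replicas) // 2:
        raise ValueError("no majority: cannot decide which version is correct")
    return winner

good = b"original record"
bad  = b"attacker's record"  # attacker fixed up its hash too, so a
                             # per-replica hash check passes on both
assert elect([good, good, bad]) == good
try:
    elect([good, bad])       # two replicas: a tie, no way to decide
except ValueError:
    pass
```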

Alternatively, techniques based on entangling hashes in Merkle Trees can be used to determine which hash has been modified. These are related to, but vastly cheaper than, blockchain technologies for the same purpose. The problem is that the Merkle tree becomes a critical resource which must itself be preserved with multiple independent copies (making the necessary updates tricky). If ransomware could encrypt it the system would be unable to guarantee content integrity.
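A minimal sketch of the entanglement idea, assuming SHA-256 and the common duplicate-last-node convention for odd levels: the root hash depends on every leaf, so tampering with any stored hash or content moves the root, and comparing roots across independent copies localizes the damage.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Hash leaves pairwise up to a single root."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:               # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

files = [b"file-1", b"file-2", b"file-3", b"file-4"]
root = merkle_root(files)
assert merkle_root(files) == root                         # deterministic
assert merkle_root([b"encrypted!", *files[1:]]) != root   # any change moves the root
```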

Four Independent Copies

If one of the three independent copies is unavailable, the voting process cannot proceed. Four independent copies is the minimum that ensures the system can survive an outage at one copy.

Lots of Independent Copies

Just as with disks, it turns out that outages among notionally independent copies are correlated. And that archives, unable to afford intensive staffing, are often slow to notice and respond to problems, lengthening the outages. Both make it more likely that more than one copy will be unavailable when needed to detect and recover from corruption.

Tape Backup

The traditional way to back up data was to a cycle of tapes. To over-simplify, say the cycle was weekly. Each day the data would be backed up to, and overwrite, the same day's tape from the previous week; a replication factor of 7 that was only affordable because tape was so cheap compared to disk. With this traditional approach ransomware would have to be a good deal cleverer. It would need to intercept the backups and encrypt them as they were written while delaying encryption of the disk itself for a whole backup cycle.

In practice things would be more complex. Writing to tape is slow, and tape is not that much cheaper than disk, so that complex cycles interleaving full and incremental backups are used. Generic ransomware would be unlikely to know the details, so would fail to destroy all the backups. But recovery would be a very slow and error-prone process, unlikely to recover all the data.

Write-Once Media Backup

One excellent way to defend against ransomware and many other threats (such as coronal mass ejections) is to back the data up to write-once optical media. Kestutis Patiejunas built such a system for Facebook, and it is in production use. But few if any archives operate at the scale needed to make these systems cost-effective.

How Does This Relate To LOCKSS?

Nothing in the foregoing is specific to the LOCKSS technology; it all applies to whatever technology an archive uses. The LOCKSS system was designed to cope with a broad range of threats, set out initially in a 2005 paper, and elaborated in detail for the 2014 TRAC audit of the CLOCKSS Archive. Although these threat models don't specifically call out ransomware, which wasn't much of a threat 3 years ago, they do include external attack, internal attack and operator error. All three have similar characteristics to ransomware.

Thus the LOCKSS Polling and Repair Protocol, the means by which peers in a LOCKSS network detect and repair damage such as encryption by ransomware, was designed to operate with at least 4 copies. Assuming that no copy is ever unavailable when needed is not realistic; as with any preservation technology 4 is the minimum for safety.

Our experience with operating peer-to-peer preservation networks of varying sizes in the LOCKSS Program led us to be comfortable with the ability of these networks with realistic levels of operator attention to detect damage to, and make timely repairs to, content provided they have 7 or more peers. As the number of peers decreases, the level of operator attention needed increases, so there is a trade-off between hardware and staff costs.

6 comments:

Thomas Lindgren said...


It does raise some interesting questions.

It may be difficult to eyeball that archived data have been encrypted by ransomware, since they are probably already encrypted at rest. Checksum scripts will fail, though.

The archive filesystem should be appropriately protected (mostly read-only) and the archiving code should be careful about what executables it is permitted to run. This reduces the attack surface but is of course a source of development/deployment trouble.

On the bright side, a backup system could easily detect that ransomware has been used on a client and alert admins. Two ways: file types no longer match, and sudden massive changes to client file system (huge numbers of files updated).

David. said...

Thank you, Thomas. Some good but not ultimately persuasive points.

Ransomware typically includes privilege escalation, so it runs as root. Read-only protection is not a defense, nor are checksum scripts which, besides being too late, will themselves have been encrypted along with the operating system.

If I were designing the ransomware it would preserve the file type (e.g. by not encrypting the first few bytes) and the modified time.

Huge numbers of files updated might not be ransomware, it might be a format migration.

David. said...

Russell Brandon's TWO-FACTOR AUTHENTICATION IS A MESS makes a number of important points about the effectiveness of 2FA in practice. It isn't a panacea, but it is important.

David. said...

One comment I've heard several times is "Archives don't have money, so why would they be targets for ransomware?" At The Register, John Leyden has information on the differences between targeted and untargeted ransomware.

Joel Chornik said...

Using a snapshotting filesystem such as ZFS, and keeping several local snapshots on top of remote ones, seems like a solution with fast recovery time and decent protection.

David. said...

A month later, San Francisco's public broadcaster KQED is still suffering the effects of a ransomware attack.