Friday, September 11, 2015

Prediction: "Security will be an on-going challenge"

The Library of Congress' Storage Architectures workshop gave a group of us 3 minutes each to respond to a set of predictions for 2015 and questions accumulated at previous instances of this fascinating workshop. Below the fold is the brief talk in which I addressed one of the predictions. At the last minute, we were given 2 minutes more, so I made one of my own.

One of the 2012 Predictions was "Security will be an on-going challenge".

It might seem that this prediction was about as risky as predicting "in three years time, water will still be wet". But I want to argue that "an on-going challenge" is not an adequate description of the problems we now understand that security poses for digital preservation. The 2012 meeting was about 9 months before Edward Snowden opened our eyes to how vulnerable everything connected to the Internet was to surveillance and subversion.

Events since have greatly reinforced this message. The US Office of Personnel Management is incapable of keeping the personal information of people with security clearances secure from leakage or tampering. Sony Pictures and Ashley Madison could not keep their most embarrassing secrets from leaking. Cisco and even computer security heavyweight Kaspersky could not keep the bad guys out of their networks. Just over two years before the meeting, Stuxnet showed that even systems air-gapped from the Internet were vulnerable. Much more sophisticated attacks have been discovered since, including malware hiding inside disk drive controllers.

Dan Kaminsky was interviewed in the wake of the compromise at the Bundestag:
No one should be surprised if a cyber attack succeeds somewhere. Everything can be hacked. ... All great technological developments have been unsafe in the beginning, just think of rail, automobiles and aircraft. The most important thing in the beginning is that they work, after that they get safer. We have been working on the security of the Internet and the computer systems for the last 15 years.
Yes, automobiles and aircraft are safer, but they are not safe. Cars kill 1.3M people and injure 20-50M more each year; road traffic injuries are the 9th leading cause of death. And that is before their software starts being exploited.

For a less optimistic view, read A View From The Front Lines, the 2015 report from Mandiant, a company whose job is to clean up after compromises such as the 2013 one that meant Stanford had to build a new network from scratch and abandon the old one. The sub-head of Mandiant's report is:
For years, we have argued that there is no such thing as perfect security. The events of 2014 should put any lingering doubts to rest.
The technology for making systems secure does not exist. Even if it did, it would not be feasible for organizations to deploy only secure systems. Given that system vendors bear no liability for the security of even systems intended to create security, this situation is unlikely to change in the foreseeable future. Until it is at least possible for organizations to deploy a software and hardware stack that is secure from the BIOS to the user interface, and until organizations are liable for failing to do so, we have to assume that our systems will be compromised, the only questions being when, and how badly.

Our digital preservation systems are very vulnerable, but we don't hear reports of them being compromised. There are two possibilities. Either they have been and we haven't noticed, or it hasn't yet been worth anyone's time to do it.

In this environment the key to avoiding loss of digital assets is diversity, so that a single attack can't take out all replicas. Copies must exist in diverse media, in diverse hardware running diverse software under diverse administration. But this diversity is very expensive. Research has shown that the resources we have to work with suffice to preserve less than half the material that should be preserved. Making the stuff we have preserved safer means preserving less stuff.
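The diversity requirement above can be sketched as a simple check. This is a hypothetical data model, not any real preservation system's API; the field names, system labels, and the threshold of two are illustrative assumptions:

```python
from collections import namedtuple

# Hypothetical replica record; the fields model the three diversity axes
# named above: media, hardware/software stack, and administration.
Replica = namedtuple("Replica", ["medium", "software", "admin_domain"])

def diversity_score(replicas):
    """Count distinct values along each axis. A single attack is less
    likely to take out all copies when every axis has more than one
    distinct value."""
    return {
        "media": len({r.medium for r in replicas}),
        "software": len({r.software for r in replicas}),
        "admins": len({r.admin_domain for r in replicas}),
    }

def is_diverse(replicas, minimum=2):
    # Require at least `minimum` distinct values on every axis.
    return all(v >= minimum for v in diversity_score(replicas).values())

# Example: three copies on diverse media, software, and administration.
copies = [
    Replica("disk", "system-a", "library-a"),
    Replica("tape", "system-b", "library-b"),
    Replica("cloud", "system-a", "library-c"),
]
print(is_diverse(copies))  # True: every axis has at least 2 distinct values
```

The cost argument follows directly: each extra distinct value on each axis is another medium to buy, another software stack to run, another administrator to pay, which is why this diversity competes with preserving more stuff.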

To be fair, I should make a prediction of my own. If we're currently preserving less than half of the material we should, how much will we be preserving in 2020? Two observations drive my answer. The digital objects being created now are much harder and more expensive to preserve than those created in the past. Libraries and archives are increasingly suffering budget cuts. So my prediction is:
If the experiments to measure the proportion of material being preserved are repeated in 2020, the results will show we are preserving not merely less than half, but less than a third.

2 comments:

David. said...

Startup L. Jackson sums up the situation.

David. said...

The slides from the presentations at the workshop are now up here. Of special interest is the really excellent presentation by Robert Fontana of IBM, an essential source for the future of tape, disk and NAND flash storage technologies.