Tuesday, June 30, 2015

Blaming the Victim

The Washington Post is running a series called Net of Insecurity. So far it includes:
  • A Flaw In The Design, discussing the early history of the Internet and how the difficulty of getting it to work at all and the lack of perceived threats meant inadequate security.
  • The Long Life Of A Quick 'Fix', discussing the history of BGP and the consistent failure of attempts to make it less insecure, because those who would need to take action have no incentive to do so.
  • A Disaster Foretold - And Ignored, discussing L0pht and how they warned a Senate panel 17 years ago of the dangers of Internet connectivity but were ignored.
Perhaps a future article in the series will describe how successive US administrations consistently strove to ensure that encryption wasn't used to make systems less insecure, and that the encryption that was used was as weak as possible. They prioritized their (and their opponents') ability to spy over mitigating the risks that Internet users faced, and they got what they wanted, as we see with the compromise of the Office of Personnel Management and the possibly related compromises of health insurers including Anthem. These breaches revealed the kind of information that renders everyone with a security clearance vulnerable to phishing and blackmail. Be careful what you wish for!

More below the fold.

The compromises at OPM and at Sony Pictures have revealed some truly pathetic security practices at both organizations, which certainly made the bad guys' job very easy. Better security practices would undoubtedly have made their job harder. But it is important to understand that in a world where Kaspersky and Cisco cannot keep their own systems secure, better security practices would not have made the bad guys' job impossible.

OPM and Sony deserve criticism for their lax security. But blaming the victim is not a constructive way of dealing with the situation in which organizations and individuals find themselves.

Prof. Jean Yang of CMU has a piece in MIT Technology Review entitled The Real Software Security Problem Is Us that, at first glance, appears to make a lot of sense but actually doesn't. Prof. Yang specializes in programming languages and is a "cofounder of Cybersecurity Factory, an accelerator focused on software security". She writes:
we could, in the not-so-distant future, actually live in a world where software doesn’t randomly and catastrophically fail. Our software systems could withstand attacks. Our private social media and health data could be seen only by those with permission to see it. All we need are the right fixes.
A better way would be to use languages that provide the guarantees we need. The Heartbleed vulnerability happened because someone forgot to check that a chunk of memory ended where it was supposed to. This could only happen in a programming language where the programmer is responsible for managing memory. So why not use languages that manage memory automatically? Why not make the programming languages do the heavy lifting?
Another way would be to make software easier to analyze. Facebook had so much trouble making sense of the software it used that it created Hack and Flow, annotated versions of PHP and Javascript, to make the two languages more comprehensible.
...
Change won’t happen until we demand that it happens. Our software could be as well-constructed and reliable as our buildings. To make that happen, we all need to value technical soundness over novelty. It’s up to us to make online life as safe as it is enjoyable.
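The mechanism Prof. Yang describes is worth making concrete. Below is a minimal sketch in C of the pattern behind Heartbleed; the function and names are hypothetical, not the actual OpenSSL code, but the essential mistake is the same: the length that drives the copy comes from the attacker's packet rather than from the amount of data actually received.

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical "heartbeat" handler, not the actual OpenSSL code.
     * It echoes back a payload whose length is taken from an
     * attacker-controlled field rather than from the amount of data
     * actually received, so memcpy() reads past the end of the record
     * and leaks whatever happens to be adjacent in memory. */
    unsigned char *echo_heartbeat(const unsigned char *record, size_t record_len)
    {
        /* The first two bytes of the record claim the payload length. */
        uint16_t claimed_len = (uint16_t)((record[0] << 8) | record[1]);
        const unsigned char *payload = record + 2;

        /* BUG: missing the one-line check
         *   if (record_len < 2 || claimed_len > record_len - 2) return NULL;   */
        unsigned char *reply = malloc(claimed_len);
        if (reply == NULL)
            return NULL;
        memcpy(reply, payload, claimed_len);  /* can read far beyond the record */
        return reply;
    }

In a language that manages memory for you, the out-of-bounds read would fail loudly instead of silently disclosing up to 64KB of the process's memory per request; that is the guarantee Prof. Yang is pointing at.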
It isn't clear who Prof. Yang's "we" is, end users or programmers. Suppose it is end users. Placing the onus on end users to demand more secure software built with better tools is futile. There is no way for an end user to know what tools were used to build a software product, no way to compare how secure two software products are, no credible third-party rating agency to appeal to for information. So there is no way for the market to reward good software engineering and punish bad software engineering.

Placing the onus on programmers is only marginally less futile. No-one writes a software product from scratch from the bare metal up. The choice of tools and libraries to use is often forced, and the resulting system will have many vulnerabilities that the programmer has no control over. Even if the choice is free, it is an illusion to believe that better languages are a panacea for vulnerabilities. Java was designed to eliminate many common bugs, and it manages memory. It was effective in reducing bugs, but it could never create a "world where software doesn’t randomly and catastrophically fail".

Notice that the OPM compromise used valid credentials, presumably obtained by social engineering, so it would have to be blamed on system administrators, not programmers, or rather on management's failure to mandate two-factor authentication. But equally, even good system administration couldn't make up for Cisco's decision to install default SSH keys for "support reasons".

For a more realistic view, read A View From The Front Lines, the 2015 report from Mandiant, a company whose job is to clean up after compromises such as the 2013 one at Stanford. Or Dan Kaminsky's interview with Die Zeit Online in the wake of the compromise at the Bundestag:
No one should be surprised if a cyber attack succeeds somewhere. Everything can be hacked. ... All great technological developments have been unsafe in the beginning, just think of railways, automobiles and aircraft. The most important thing in the beginning is that they work; after that they get safer. We have been working on the security of the Internet and of computer systems for the last 15 years.
Yes, automobiles and aircraft are safer but they are not safe. Cars kill 1.3M people and injure 20-50M more each year, making road traffic injuries the 9th leading cause of death. And that is before cars become part of the Internet of Things and their software starts being exploited. Clearly, some car crash victims are at fault and others aren't. Dan is optimistic about Prof. Yang's approach:
It is a new technology, it is still under development. In the end it will not only be possible to write secure software, but also to have it happen in a natural way without any special effort, and it will be cheap.
I agree that the Langsec approach and capability-based systems such as Capsicum can make systems safer. But making secure software possible is a long way from making secure software ubiquitous. Until it is at least possible for organizations to deploy a software and hardware stack that is secure from the BIOS to the user interface, and until organizations are liable for not doing so, blaming them for being insecure is beside the point.
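To give a flavor of what the capability-based approach buys, here is a minimal sketch of Capsicum's model on FreeBSD; the file path and error handling are chosen only for the example. The process opens the one file it needs, restricts the descriptor to reading, and then enters capability mode, after which it cannot open new files, create sockets or otherwise acquire resources even if it is later compromised.

    /* Minimal Capsicum sketch (FreeBSD); the path is an example only. */
    #include <sys/capsicum.h>
    #include <err.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        cap_rights_t rights;

        int fd = open("/var/log/messages", O_RDONLY);
        if (fd < 0)
            err(1, "open");

        /* The descriptor may now only be read and seeked; write, mmap,
         * ioctl and the rest are no longer possible on it. */
        cap_rights_init(&rights, CAP_READ, CAP_SEEK);
        if (cap_rights_limit(fd, &rights) < 0)
            err(1, "cap_rights_limit");

        /* From here on the process is sandboxed: it cannot open files,
         * create sockets or touch any other global namespace. */
        if (cap_enter() < 0)
            err(1, "cap_enter");

        char buf[512];
        ssize_t n;
        while ((n = read(fd, buf, sizeof(buf))) > 0)
            fwrite(buf, 1, (size_t)n, stdout);
        return 0;
    }

A sandbox like this limits what a compromised process can reach, but it does nothing about the firmware, the BIOS or stolen credentials, which is why possible is still a long way from ubiquitous.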

The sub-head of Mandiant's report is:
For years, we have argued that there is no such thing as perfect security. The events of 2014 should put any lingering doubts to rest.
It is worth reading the whole thing, but especially their Trend 4, Blurred Lines, which starts on page 20. It describes how the techniques used by criminal and by government-sponsored bad guys are becoming indistinguishable, making it difficult not merely to defend against the inevitable compromise, but to determine what the intent of the compromise was.

The technology for making systems secure does not exist. Even if it did, it would not be feasible for organizations to deploy only secure systems. Given that system vendors bear no liability for the security even of systems intended to provide security, this situation is unlikely to change in the foreseeable future.

2 comments:

David. said...

Via Bruce Schneier, Ars Technica reports that attackers are compromising the boot loader of Cisco routers. Security depends on the integrity of the entire stack, not just your program, so blaming the programmer isn't helpful in this case.

David. said...

It's not just Cisco that thinks hardwired administrative backdoors into network products are a big help for their support staff. Who could possibly have guessed that a user called "root" with password "root" could telnet into your Seagate WiFi disk?