Tuesday, February 27, 2018

"Nobody cared about security"

There's a common meme that ascribes the parlous state of security on the Internet to the fact that in the ARPAnet days "nobody cared about security". It is true that in the early days of the ARPAnet security wasn't an important issue; everybody involved knew everybody else face-to-face. But it isn't true that the decisions taken in those early days hampered the deployment of security as the Internet took the shape we know today in the late 80s and early 90s. In fact, those design decisions made the deployment of security easier. The main reason for today's security nightmares is quite different.

I know because I was there, and to a small extent involved. Follow me below the fold for the explanation.

Making the original ARPAnet work at all was a huge achievement. It was, to a large extent, made possible because its design was based on the End-to-End Principle:
it is far easier to obtain reliability beyond a certain margin by mechanisms in the end hosts of a network rather than in the intermediary nodes, especially when the latter are beyond the control of, and not accountable to, the former.
This principle allowed for the exclusion from the packet transport layer of all functions not directly related to getting the packets from their source to their destination. Simplifying the implementation by pushing functions to the end hosts made designing and debugging the network much easier. Among the functions that were pushed to the end hosts was security. Had the design of the packet transport layer included security, and thus been vastly more complex, it is unlikely that the ARPAnet would ever have worked well enough to evolve into the Internet.
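To make the principle concrete, here is a minimal sketch (an illustration, not historical code; all names are invented) of end hosts obtaining reliability for themselves over an intermediary that guarantees nothing:

    import hashlib
    import random

    def unreliable_network(packet: bytes) -> bytes:
        # An intermediary node that occasionally corrupts traffic; it
        # promises nothing beyond best-effort delivery.
        if random.random() < 0.3:
            i = random.randrange(len(packet))
            packet = packet[:i] + bytes([packet[i] ^ 0xFF]) + packet[i + 1:]
        return packet

    def send(payload: bytes) -> bytes:
        # The sending end host attaches its own integrity check; no
        # intermediary needs to know it is there.
        return hashlib.sha256(payload).digest() + payload

    def receive(packet: bytes):
        # The receiving end host verifies the check and would request a
        # retransmission on failure; reliability lives entirely at the ends.
        digest, payload = packet[:32], packet[32:]
        return payload if hashlib.sha256(payload).digest() == digest else None

    print(receive(unreliable_network(send(b"end-to-end"))))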

Thus a principle of the Internet was that security was one of the functions assigned to network services (file transfer, e-mail, Web, etc.), not to the network on which they operated.

In the long run, however, the more significant reason why the ARPAnet and early Internet lacked security was not that it wasn't needed, nor that it would have made development of the network harder; it was that implementing security, whether at the network or the application level, would have required implementing cryptography. At the time, cryptography was classified as a munition. Software containing cryptography, or even just the hooks allowing cryptography to be added, could only be exported from the US with a specific license, and obtaining a license involved case-by-case negotiation with the State Department. In effect, had security been a feature of the ARPAnet or the early Internet, the network would have had to be US-only. Note that the first international ARPAnet nodes came up in 1973, in Norway and the UK.

Sometime in the mid-80s, Unix distributions such as Berkeley Unix were changed to strip cryptography hooks and implementations from the versions exported from the US. This actually removed even the pre-existing minimal level of security from Unix systems outside the US. People outside the US noticed, which had some influence on the debates over export restrictions in the following decade.

Commercial domestic Internet service started to become available in 1989 in a few areas:
The ARPANET was decommissioned in 1990. Limited private connections to parts of the Internet by officially commercial entities emerged in several American cities by late 1989 and 1990, and the NSFNET was decommissioned in 1995, removing the last restrictions on the use of the Internet to carry commercial traffic.
It wasn't widely available until the mid-90s.

In 1991 Phil Zimmermann released PGP. The availability of PGP outside the US was considered a violation of the Arms Export Control Act. The result was:
a grand jury in San Jose, Calif., has been gathering evidence since 1993, pondering whether to indict Zimmermann for violating a federal weapons-export law--a charge that carries a presumptive three-to-five-year sentence and a maximum $1 million fine. The investigation is being led by Silicon Valley Assistant U.S. Attorney William P. Keane; a grand jury indictment must be authorized by the Justice Department in Washington.
In 1996 the investigation was dropped without filing any charges. But it meant that, in the critical early days of mass deployment of Internet services, everyone developing the Internet and its services knew they were potentially liable for severe penalties if they implemented cryptography and it appeared overseas without a license.

Getting a license got a little easier in 1992:
In 1992, a deal between NSA and the [Software Publishers Association] made 40-bit RC2 and RC4 encryption easily exportable using a Commodity Jurisdiction (which transferred control from the State Department to the Commerce Department).
Exporting encryption software still needed a license. The license was easier to get, but only for encryption that everyone understood was so weak as to be almost useless. Thus, as the Internet started taking off, US developers of Internet applications faced a choice:
  • Either eliminate cryptography from the product,
  • Or build two versions of the product: the Export version with 40-bit Potemkin encryption (just how Potemkin is sketched below), and the Domestic version with encryption that actually provided useful security.
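How weak was 40-bit encryption? The arithmetic is stark; as early as January 1997 Ian Goldberg publicly brute-forced a 40-bit challenge key in about three and a half hours. A back-of-the-envelope sketch (the trial rate is an assumed figure for illustration):

    # Back-of-the-envelope: the entire 40-bit keyspace versus an assumed
    # trial rate of ten million keys per second (modest for a mid-90s
    # distributed effort, trivial for later hardware).
    keyspace = 2 ** 40               # 1,099,511,627,776 possible keys
    rate = 10_000_000                # keys tried per second (assumption)
    hours = keyspace / rate / 3600
    print(f"{keyspace:,} keys; ~{hours:.0f} hours to exhaust the keyspace")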
Starting in 1996, the export restrictions were gradually relaxed, though they have not been eliminated. But as regards Internet applications, by 2000 it was possible to ship a single product with effective encryption.

The first spam e-mail was sent in 1978 and evoked this reaction:
ON 2 MAY 78 DIGITAL EQUIPMENT CORPORATION (DEC) SENT OUT AN ARPANET MESSAGE ADVERTISING THEIR NEW COMPUTER SYSTEMS. THIS WAS A FLAGRANT VIOLATION OF THE USE OF ARPANET AS THE NETWORK IS TO BE USED FOR OFFICIAL U.S. GOVERNMENT BUSINESS ONLY. APPROPRIATE ACTION IS BEING TAKEN TO PRECLUDE ITS OCCURRENCE AGAIN.
Which pretty much fixed the problem for the next 16 years. But in 1994 the lawyers Canter & Siegel spammed Usenet with an advertisement for their "green card" services, and that December the first commercial e-mail spam was recorded. Obviously, IMAP and SMTP needed security. That month John Gardiner Myers had published RFC 1731, specifying authentication mechanisms for IMAP, and by the following April he had published a draft of what became RFC 2554, the SMTP AUTH extension. So, very quickly after the need became apparent, a technical solution became available. Precisely because of the end-to-end principle, the solution was constrained to e-mail applications, incrementally deployable, and easily upgraded as problems or vulnerabilities were discovered.
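As an illustration of what RFC 2554's AUTH extension provides at the application layer, here is a minimal sketch using Python's standard smtplib (host, addresses and credentials are placeholders; the STARTTLS step is a modern requirement that arrived later, with RFC 3207):

    import smtplib

    # Placeholder host and credentials; smtplib negotiates the strongest
    # AUTH mechanism the server offers (e.g. CRAM-MD5, PLAIN, LOGIN).
    with smtplib.SMTP("mail.example.com", 587) as smtp:
        smtp.starttls()                   # transport encryption, RFC 3207
        smtp.login("user", "password")    # the AUTH command of RFC 2554
        smtp.sendmail("user@example.com", ["rcpt@example.com"],
                      "Subject: hello\r\n\r\nAn authenticated message.\r\n")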

Mosaic, the browser that popularized the Web, was first released in January 1993. It clearly needed secure communication. In SSL and TLS: Theory and Practice Rolf Oppliger writes:
Eight months later, in the middle of 1994, Netscape Communications already completed the design for SSL version 1 (SSL 1.0). This version circulated only internally (i.e., inside Netscape Communications), since it had several shortcomings and flaws. For example, it didn't provide data integrity protection. ... This and a few other problems had to be resolved, and at the end of 1994 Netscape Communications came up with SSL version 2 (SSL 2.0).
SSL stands for the Secure Sockets Layer. SSL 3.0, the final version, was published in November 1996. So, very quickly after the need became evident, Netscape built Export versions of SSL and used them to build Export versions of Mosaic. SSL could clearly be used to protect e-mail as well as Web traffic, but doing so involved exporting cryptography.
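The name explains why: SSL (and its successor TLS) sits at the sockets layer, between any TCP application and the network, so the same machinery can protect HTTP, SMTP, IMAP or anything else. A minimal sketch in modern Python terms (the host is a placeholder):

    import socket
    import ssl

    # Wrap an ordinary TCP connection in TLS, the successor to SSL; the
    # application protocol spoken over the top is unchanged.
    context = ssl.create_default_context()
    with socket.create_connection(("example.com", 443)) as raw:
        with context.wrap_socket(raw, server_hostname="example.com") as tls:
            print(tls.version())          # e.g. "TLSv1.3"
            tls.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
            print(tls.recv(200))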

In order to be usable outside the US, both Web browsers and Web servers had to at least support 40-bit SSL. The whole two-version development, testing and distribution process was a huge hassle, not to mention that you still needed to go through the export licensing process. So many smaller companies and open source developers took the "no-crypto" option, as Unix had in the 80s. Among them was sendmail, the dominant SMTP software on the Internet. Others took different approaches: OpenBSD arranged for all crypto-related work to be done in Canada, so that crypto was imported rather than exported at the US border.

Thus, for the whole of the period during which the Internet was evolving from an academic network into the world's information infrastructure, it was impossible, at least for US developers, to deploy comprehensive security for legal reasons. It wasn't that people didn't care about security; it was that they cared about staying out of jail.

3 comments:

David. said...

I just discovered that Jim Gettys reinforced this post two months later with Mythology about security…. He recounts trying to get encryption into the X Window System:

"We asked MIT whether we could incorporate Kerberos (and other encryption) into the X Window System. According to the advice at the time (and MIT’s lawyers were expert in export control, and later involved in PGP), if we had even incorporated strong crypto for authentication into our sources, this would have put the distribution under export control, and that that would have defeated X’s easy distribution. The best we could do was to leave enough hooks into the wire protocol that kerberos support could be added as a source level “patch” (even calls to functions to use strong authentication/encryption by providing an external library would have made it covered under export control)."

David. said...

Misleading headline alert! In Bombshell Report Finds Phone Network Encryption Was Deliberately Weakened, Lorenzo Franceschi-Bicchierai reports:

"A weakness in the algorithm used to encrypt cellphone data in the 1990s and 2000s allowed hackers to spy on some internet traffic, according to a new research paper.

The paper has sent shockwaves through the encryption community because of what it implies: The researchers believe that the mathematical probability of the weakness being introduced on accident is extremely low. Thus, they speculate that a weakness was intentionally put into the algorithm. After the paper was published, the group that designed the algorithm confirmed this was the case."

Give me a break! The paper's abstract concludes:

"This unusual pattern indicates that the weakness is intentionally hidden to limit the security level to 40 bit by design."

Anyone who knows the history of encryption technology is definitely not suffering "shockwaves" from this "bombshell". Unless the algorithm had been limited to 40-bit security, it would have been effectively impossible to deploy, for the reasons explained in this post.

David. said...

In Researchers: 2G Connection Encryption Deliberately Weakened To Comply With Cryptowar Export Restrictions, Tim Cushing gets the story right and adds:

"But even though 2G networks haven't been in common use since the early 2000's, this weakness (which still exists) still has relevance. One of the features of Stingray devices and other cell site simulators is the ability to force all connecting phones to utilize a 2G connection.

Handsets operating on 2G will readily accept communication from another device purporting to be a valid cell tower, like a stingray. So the stingray takes advantage of this feature by jamming the 3G and 4G signals, forcing the phone to use a 2G signal.

This means anyone using a cell site simulator can break the weakened encryption and intercept communications or force connecting devices to cough up precise location data."