The fact that software vendors use licensing to disclaim liability for the functioning of their products is at the root of the lack of security in systems. These proposals are plausible, but I believe they would either be ineffective or, more likely, actively harmful. There is so much to write about them that they deserve an entire post to themselves.
Below the fold is the post they deserve. The recommendation in question states:
The US Congress should extend final goods assembler liability to operators of major open-source repositories, container managers, and app stores. These entities play a critical security governance role in administering large collections of open-source code, including packages, libraries, containers, and images. Governance of a repository like GitHub or an app hub like the PlayStore should include enforcing baseline life cycle security practices in line with the NIST Overlay, providing resources for developers to securely sign, distribute, and alert users for updates to their software. This recommendation would create a limited private right of action for entities controlling app stores, hubs, and repositories above a certain size to be determined. The right would provide victims of attacks caused by code, which failed to meet these baseline security practices, a means to pursue damages against repository and hub owners. Damages should be capped at $100,000 per instance and covered entities should include, at minimum, GitHub, Bitbucket, GitLab, and SourceForge, as well as those organizations legally responsible for maintaining container registries and associated hubs, including Docker, OpenShift, Rancher, and Kubernetes.
The recommendation links to two posts:
- A report by the Paul Weiss law firm entitled The Cyberspace Solarium Commission’s Final Report and Recommendations Could Have Implications for Business:
Charged with developing a comprehensive and strategic approach to defending the United States in cyberspace, the Commission is co-chaired by Sen. Angus King (I-Maine) and Rep. Mike Gallagher (R-Wisconsin) and has 14 commissioners, including four U.S. legislators, four senior executive agency leaders, and six nationally recognized experts from outside of government.
and the relevant recommendation in their March 11, 2020 final report is:
The recommended legislation would hold final goods assemblers (“FGAs”) of software, hardware, and firmware liable for damages from incidents that exploit vulnerabilities that were known at the time of shipment or discovered and not fixed within a reasonable amount of time. An FGA is any entity that enters into an end user license agreement with the user of the product or service and is most responsible for the placement of a product or service into the stream of commerce. The legislation would direct the Federal Trade Commission (“FTC”) to promulgate regulations subjecting FGAs to transparency requirements, such as disclosing known, unpatched vulnerabilities in a good or service at the time of sale.
- Trey Herr's Software Liability Is Just a Starting Point:
To make meaningful change in the software ecosystem, a liability regime must also:
- Apply to the whole software industry, including cloud service providers and operational technology firms such as manufacturing and automotive companies. These firms are important links in the software supply chain.
- Produce a clear standard for the “duty of care” that assemblers must exercise—the security practices and policies that software providers must adopt to produce code with few, and quickly patched, defects.
- Connect directly to incentives for organizations to apply patches in a timely fashion.
What Exactly Are The Proposals?
Because they aren't the same, let's distinguish between the three proposals: the Atlantic Council's (AC), the Cyberspace Solarium Commission's (CS), and Trey Herr's (TH).
The AC proposal:
- Who is liable? Major distributors of open source software.
- What are they liable for? Enforcing good security practices on the open source developers using their services.
- What can they do to avoid liability? Not clear, since it isn't obvious what enforcement mechanisms repositories can employ against their developers.
- Who imposes the liability? Victims of negligence by the open source developers (not of negligence by the repository on which the liability is imposed), or more likely their class action lawyers. But the $100K-per-instance cap would discourage class actions.
The CS proposal:
- Who is liable? Vendors of products including software requiring users to agree to an "end user license".
- What are they liable for? "damages from incidents that exploit vulnerabilities that were known at the time of shipment or discovered and not fixed within a reasonable amount of time".
- What can they do to avoid liability? Not clear, though it would include disclosing known vulnerabilities at the "time of sale".
- Who imposes the liability? Victims of covered incidents, or more likely their class action lawyers. Note the lack of any limit to liability.
Note the distinction between "time of shipment" and "time of sale". Many FGAs of physical products including software are in China, and the time between them shipping the product and it percolating through the retail distribution chain may be many months. The FGA has no way to identify the eventual purchaser to notify them of vulnerabilities discovered in that time. Imported cars contain much software but also have months between shipment and sale, although in this case the dealer network allows for the customer to be identified.
The TH proposal:
- Who is liable? The entire software industry, including those using software, such as cloud providers and manufacturers of products including software.
- What are they liable for? Not clear.
- What can they do to avoid liability? FGAs must observe a "duty of care". How non-FGA participants in the supply chain avoid liability isn't clear.
- Who imposes the liability? Presumably, the class action lawyers for customers who believe that the "duty of care" was not observed.
Specifying The Problem
The goal of these proposals, only vaguely specified, is presumably to reduce the average number of vulnerabilities per installed system. This depends upon a number of factors; decreasing any of them would in theory move the world closer to the goal:
- The average rate of newly created vulnerabilities times the average number of systems to which they are deployed.
- The average time between the creation of a vulnerability and its detection.
- The average time between detection and development of a patch.
- The average time between development of a patch and its deployment to a vulnerable system.
How would the proposals affect each factor?
- Better practices should decrease the average rate of newly created vulnerabilities.
- Liability for software providers will tend to increase the average time between creation and detection. Because providers aren't liable for vulnerabilities they don't know about, they will be motivated to prevent security researchers from inspecting their systems, perhaps via the Computer Fraud and Abuse Act. It will also decrease the proportion of the software base that is open source, and thus open to inspection.
- Better practices will probably increase the average time for patches to be developed, as they will impose extra overhead on the development process.
- Liability for software vendors, as opposed to users, is likely to have no effect on the average rate of patching, since once a patch is available the vendor is off the hook.
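The four factors above can be combined into a toy back-of-the-envelope model. Every number below is an invented assumption for illustration, not a measurement:

```python
# Toy model of the factors above; all parameters are hypothetical.
# A vulnerability is "open" on a system from its creation until a
# patch is deployed there, so the average exposure per system scales
# with the sum of the detection, patching, and deployment delays.

vulns_per_year_per_system = 5.0   # assumed creation rate (factor 1)
days_to_detect = 180.0            # creation -> detection (factor 2)
days_to_patch = 30.0              # detection -> patch available (factor 3)
days_to_deploy = 90.0             # patch available -> deployed (factor 4)

exposure_days = days_to_detect + days_to_patch + days_to_deploy
avg_open_vulns = vulns_per_year_per_system * exposure_days / 365.0
print(f"average open vulnerabilities per system: {avg_open_vulns:.2f}")
```

The model just makes the leverage points explicit: the creation rate multiplies everything, while the three delays add, so shortening any one of them helps in proportion to its share of the total exposure window.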
The greatest leverage would come from somehow increasing the rate at which patches are installed in vulnerable systems. Slow patching has been responsible for many of the worst data breaches, including the massive Equifax disaster:
Although a patch for the code-execution flaw was available during the first week of March, Equifax administrators didn't apply it until July 29.
My post Not Whether But When has a sampling of similar incidents, to reinforce this point:
You may be thinking Equifax is unusually incompetent. But this is what CEO Smith got right. It isn't possible for an organization to restrict security-relevant operations to security gurus who never make mistakes; there aren't enough security gurus to go around, and even security gurus make mistakes.
However, it must be noted that an effort to increase the rate of patching is a double-edged sword. Organizations need to test patches before deploying them, because a patch in one sub-system can impact the functions of other sub-systems, including security-related functions. Rushing the process, especially the security-related testing, will lead security gurus to make mistakes.
Even the most enthusiastic proponents of imposing liability would admit that doing so would only reduce, not eliminate, the incidence of newly created vulnerabilities. So, as I argued in Not Whether But When, we must plan for a continuing flow of vulnerabilities. Worse, a really important 2010 paper by Sandy Clarke, Matt Blaze, Stefan Frei and Jonathan Smith entitled Familiarity Breeds Contempt: The Honeymoon Effect and the Role of Legacy Code in Zero-Day Vulnerabilities showed that the rate of discovery of vulnerabilities in a code base increases with time. As I wrote in Familiarity Breeds Contempt:
Clarke et al analyze databases of vulnerabilities to show that the factors influencing the rate of discovery of vulnerabilities are quite different from those influencing the rate of discovery of bugs. They summarize their findings thus:
We show that the length of the period after the release of a software product (or version) and before the discovery of the first vulnerability (the ’Honeymoon’ period) is primarily a function of familiarity with the system. In addition, we demonstrate that legacy code resulting from code re-use is a major contributor to both the rate of vulnerability discovery and the numbers of vulnerabilities found; this has significant implications for software engineering principles and practice.
The Internet of Things
In 2018's The Island of Misfit Toys I critiqued Jonathan Zittrain's New York Times op-ed entitled From Westworld to Best World for the Internet of Things which, among other things, proposed a "network safety bond" to be cashed in if the vendor abandoned maintenance for a product, or folded entirely:
Insurers can price bonds according to companies' security practices. There's an example of such a system for coal mining, to provide for reclamation and cleanup should the mining company leave behind a wasteland.
The whole point of the "Internet of Things" is that most of the Things in the Internet are cheap enough that there are lots of them. So either the IoT isn't going to happen, or it is going to be rife with unfixed vulnerabilities. The economic pressure for it to happen is immense.
The picture shows the problem with this, and with every proposal to impose regulation on the Things in the Internet. It is a screengrab from Amazon showing today's cheapest home router, a TRENDnet TEW-731BR for $15.99:
$15.99 home router
Anyone who has read Bunnie Huang's book The Hardware Hacker will understand that TRENDnet operates in the "gongkai" ecosystem; it assembles the router from parts including the software, the chips and their drivers from other companies. Given that after assembly, shipping and Amazon's margin, TRENDnet's suppliers probably receive only a few dollars, the idea that they could be found, let alone bonded, is implausible. If they were, the price would need to be increased significantly. So on Amazon's low-to-high price display the un-bonded routers would show up on the first page and the bonded ones would never be seen.
- There's no room in a $15.99 home router's bill of materials for a "network safety bond" that bears any relation to the cost of reclamation after TRENDnet abandons it.
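To see why a bond doesn't fit the bill of materials, here is a toy calculation; every figure except the $15.99 retail price is an invented assumption:

```python
# Toy arithmetic; all figures except the retail price are hypothetical.
# The point: a meaningful reclamation bond dwarfs the margins in the
# gongkai supply chain and pushes the bonded router far down a
# low-to-high price sort.
retail = 15.99
marketplace_fee = retail * 0.15       # assumed Amazon cut
assembly_and_shipping = 6.00          # assumed
vendor_share = retail - marketplace_fee - assembly_and_shipping
bond = 10.00                          # assumed per-unit bond escrow
bonded_retail = retail + bond

print(f"vendor share: ${vendor_share:.2f}")
print(f"bonded price: ${bonded_retail:.2f} (+{bond / retail:.0%})")
```

Even under these generous assumptions, the bond exceeds everything the vendor and its suppliers share, so only the un-bonded competition would appear on the first page of results.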
Karl Bode's House Passes Bill To Address The Internet Of Broken Things reports on an effort in Congress to address the problem:
the House this week finally passed the Internet of Things Cybersecurity Improvement Act, which should finally bring some meaningful privacy and security standards to the internet of things (IOT). Cory Gardner, Mark Warner, and other lawmakers note the bill creates some baseline standards for security and privacy that must be consistently updated (what a novel idea), while prohibiting government agencies from using gear that doesn't pass muster. It also includes some transparency requirements mandating that any vulnerabilities in IOT hardware are disseminated among agencies and the public quickly:
"Securing the Internet of Things is a key vulnerability Congress must address. While IoT devices improve and enhance nearly every aspect of our society, economy and everyday lives, these devices must be secure in order to protect Americans' personal data. The IoT Cybersecurity Improvement Act would ensure that taxpayer dollars are only being used to purchase IoT devices that meet basic, minimum security requirements. This would ensure that we adequately mitigate vulnerabilities these devices might create on federal networks."
Setting standards enforced by Federal purchasing power is a positive approach. But the bill seems unlikely to pass the Senate. Even if we could wave a magic wand and force all IoT vendors to conform to these standards and support their future products with prompt patches for the whole of their working life, it wouldn't address the problem that the IoT is already populated with huge numbers of un-patched things like the $250 coffee maker whose firmware can be replaced by a war-driver over WiFi. Worse, many of them are home WiFi routers, with their hands on all the home's traffic. Even worse, lots of them are appliances such as refrigerators, whose working lives are 10-20 years. How much 20-year-old hardware do you know of that is still supported?
App Stores
Mobile phone operators have somewhat more control over the devices that connect to their networks, and to defend those networks they need the devices to be less insecure. So they introduced the "walled gardens" called App Stores. The idea, as with the idea of imposing liability on software providers, was that apps in the store would have been carefully vetted, and that insecure apps would have been excluded. Fundamentally, this is the same idea as "content moderation" on platforms such as Facebook and Twitter: the idea that humans can review content and classify it as acceptable or unacceptable.
Recent experience with moderation of misinformation on social media platforms bears out Mike Masnick's Masnick's Impossibility Theorem: Content Moderation At Scale Is Impossible To Do Well. His third point is:
people truly underestimate the impact that "scale" has on this equation. Getting 99.9% of content moderation decisions at an "acceptable" level probably works fine for situations when you're dealing with 1,000 moderation decisions per day, but large platforms are dealing with way more than that. If you assume that there are 1 million decisions made every day, even with 99.9% "accuracy" (and, remember, there's no such thing, given the points above), you're still going to "miss" 1,000 calls. But 1 million is nothing. On Facebook alone a recent report noted that there are 350 million photos uploaded every single day. And that's just photos. If there's a 99.9% accuracy rate, it's still going to make "mistakes" on 350,000 images. Every. Single. Day. So, add another 350,000 mistakes the next day. And the next. And the next. And so on.
The vetting problem facing app stores is very similar, in that the combination of scale and human fallibility leads to too many errors. The catalog of malware detected in the Apple and Android app stores testifies to this. So, just like the IoT, the app store ecosystem will be rife with vulnerabilities.
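Masnick's arithmetic is easy to reproduce; the figures below come straight from the quoted passage:

```python
# The scale arithmetic from Masnick's example: even a 99.9%-accurate
# moderation (or app-vetting) process makes enormous numbers of
# mistakes at platform scale.
error_rate = 0.001                     # 99.9% "accuracy"

decisions_per_day = 1_000_000
print(decisions_per_day * error_rate)  # 1,000 missed calls per day

photos_per_day = 350_000_000           # Facebook uploads, per the quote
print(photos_per_day * error_rate)     # 350,000 mistakes, every day
```

App-store vetting faces the same equation: the error rate would have to fall as fast as submission volume grows just to keep the absolute number of malicious apps admitted constant.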
Code Repositories
The idea of holding code repositories liable for the software they host has the same vetting problem as app stores, except worse. App stores are funded by taking 30% of the gross. Code repositories have no such income stream, and no way of charging the users who download code from them. The code is open source, so the repository has no way to impose a paywall to fund the vetting process. Nor can they charge contributors; if they did, contributors would switch to a free competitor. So, just like the IoT and the app store ecosystem, code repositories will be rife with vulnerabilities.
Downsides of Liability
It looks as though imposing liability on software vendors wouldn't be effective. But it is likely to be worse than that:
- Without the explicit disclaimer of liability in open source licenses the unpaid individual contributors would be foolish to take part in the ecosystem. Only contributors backed by major corporations and their lawyers would be left, and while they are important they aren't enough to support a viable ecosystem. Killing off open source, at least in the US, would not improve US national security or the US economy.
- The proposals to impose liability assume that the enforcement mechanism would be class action lawsuits. Class action lawyers take a large slice of any penalties in software cases, leaving peanuts for the victims. In half a century of using software, I have never received a penny from a software-related class action settlement. I believe there were a couple in which, after filing difficult-to-retrieve documentation, I might have been lucky enough to be rewarded with a coupon for a few dollars. Not enough to be worth the trouble. Further, in order that the large slice be enough, class action lawyers only target deep pockets. The deep pockets in the software business are not, in general, the major source of the problems liability is supposed to address.
- Software is a global business, one which is currently dominated by US multinational companies. They are multinational because the skilled workforce they need is global. If the US imposes harsher conditions on the software business than competing countries, the business will migrate overseas. While physical goods can be controlled by Customs at the limited ports of entry, there are no such choke points on the Internet to prevent US customers acquiring foreign software. Driving the software industry overseas would not improve US national security or the US economy.
How To Fix The Problem
If imposing liability on software providers is likely to be neither effective nor advantageous, what could we do instead? Let's start by stipulating that products with embedded software that lack the physical means to connect to the Internet cannot pose a threat via the Internet to other devices or to their users' security, and so can be excluded; they are adequately covered by current product liability laws.
Thus the liability regime we are discussing is for software in devices connected to the Internet. In the good old days of The Phone Company, connection to the network was regulated; only approved devices could connect. This walled garden was breached by the Carterfone decision, which sparked an outburst of innovation in telephony and greatly benefited both consumers and the economy. It is clear that the Internet could not have existed without the freedom to connect provided by the Carterfone decision. Even were legislation enforcing a "permission to connect" regime for the Internet passed in some countries, it would be impossible to enforce. The Internet Protocols specify how to interconnect diverse networks. They permit situations such as the typical home network, in which a multitude of disparate devices are connected to the Internet via a gateway router performing Network Address Translation and thus rendering the devices invisible to the ISP.
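The invisibility NAT provides can be sketched with a toy port-mapping table; the addresses and ports below are invented for illustration:

```python
# Minimal sketch of the NAT table a home gateway keeps: many internal
# devices share one public IP, so the ISP sees only the gateway.
PUBLIC_IP = "203.0.113.7"        # hypothetical public address

nat_table = {}                   # (internal_ip, internal_port) -> external port
next_port = 40000                # next external port to hand out

def outbound(internal_ip, internal_port):
    """Allocate (or reuse) an external port for an internal flow."""
    global next_port
    key = (internal_ip, internal_port)
    if key not in nat_table:
        nat_table[key] = next_port
        next_port += 1
    return PUBLIC_IP, nat_table[key]

# Three devices behind the router all appear as PUBLIC_IP outside.
a = outbound("192.168.1.10", 51000)   # laptop
b = outbound("192.168.1.11", 51000)   # thermostat
c = outbound("192.168.1.12", 44321)   # camera
assert {a[0], b[0], c[0]} == {PUBLIC_IP}
assert len({a[1], b[1], c[1]}) == 3   # distinct external ports
```

From outside, all three flows carry the same source address, which is why an IP address alone cannot identify the responsible device, let alone the responsible individual.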
We need to increase the speed of patching vulnerabilities in devices connected to the Internet. We cannot issue devices "passports" permitting them to connect. We cannot impose liability on individual users who do not patch promptly for the same reason that copyright trolls often fail. All we have is an IP address, which is not an identification of the responsible individual.
The alternative is to encourage vendors to support automatic updates. But this also is a double-edged sword. If everything goes right, it ensures that devices are patched soon after the patch becomes available. But if not, it ensures that a supply chain attack compromises the entire set of vulnerable devices at once. As discussed in Securing The Software Supply Chain, current techniques for securing automatic updates are vulnerable, for example to compromise of the vendor's signing key, or malfeasance of the certificate authority. Use of Certificate Transparency can avert these threats, but as far as I know no software vendor is yet using it to secure their updates.
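As a sketch of the verification step an auto-updater performs before installing anything, here is a toy example. It uses an HMAC as a runnable standard-library stand-in for the public-key signature (e.g. Ed25519) a real update system would use, and all keys and payloads are invented:

```python
import hmac
import hashlib

# Toy sketch: check an update blob against a detached authentication
# tag before installing it. Real auto-update systems use public-key
# signatures plus (ideally) a transparency log; HMAC is only a
# stand-in here so the example runs with the standard library alone.

def verify_update(blob: bytes, tag: bytes, key: bytes) -> bool:
    expected = hmac.new(key, blob, hashlib.sha256).digest()
    # constant-time comparison avoids timing side channels
    return hmac.compare_digest(expected, tag)

key = b"vendor-signing-key"            # hypothetical shared secret
update = b"firmware v1.2.3"            # hypothetical update payload
good_tag = hmac.new(key, update, hashlib.sha256).digest()

assert verify_update(update, good_tag, key)          # genuine update
assert not verify_update(update + b"!", good_tag, key)  # tampered blob
```

Note what the sketch does not protect against: if the key itself leaks (the signing-key compromise discussed above), the attacker can mint valid tags for malicious updates, which is exactly the gap transparency logs are meant to close.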
A "UL-style" label certifying that the device would be automatically updated with cryptographically secured patches for at least 5 years would be a simple, easily-understood product feature encouraging adoption. It would have two advantages, making clear that:
- the software provider assumed responsibility for providing updates to fix known problems, and protecting the certificates that secured the updates.
- the customer who disabled the automatic patch mechanism assumed responsibility for promptly installing available patches.
Conclusion
The idea of imposing liability on software providers is seductive but misplaced. It would likely be both ineffective and destructive. Ineffective in that it assumes a business model that does not apply to the vast majority of devices connected to the Internet. Destructive in that it would be a massive disincentive to open source contributors, exacerbating the problem I discussed in Open Source Saturation. The focus should primarily be on ensuring that available patches are applied promptly. Automating the installation of patches has risks, but they seem worth accepting. Labeling products that provide a 5-year automated patch system would both motivate adoption, and clarify responsibilities.
Liability up the chain might increase Mean Time Between Failures, but it is Mean Time To Repair that is the real problem. Fixing that with liability in the current state of the chain is setting users up to fail.
Dan Goodin's “Joker”—the malware that signs you up for pricey services—floods Android markets explains why app stores such as Google Play can't be depended upon to eliminate malware:
"researchers from security firm Zscaler said they found a new batch comprising 17 Joker-tainted apps with 120,000 downloads. The apps were uploaded to Play gradually over the course of September.
The apps are knockoffs of legitimate apps and, when downloaded from Play or a different market, contain no malicious code other than a “dropper.” After a delay of hours or even days, the dropper, which is heavily obfuscated and contains just a few lines of code, downloads a malicious component and drops it into the app."
Catalin Cimpanu reports that Play Store identified as main distribution vector for most Android malware:
"The official Google Play Store has been identified as the primary source of malware installs on Android devices in a recent academic study — considered the largest one of its kind carried out to date.
Using telemetry data provided by NortonLifeLock (formerly Symantec), researchers analyzed the origin of app installations on more than 12 million Android devices for a four-month period between June and September 2019.
In total, researchers looked at more than 34 million APK (Android application) installs for 7.9 million unique apps.
Researchers said that depending on different classifications of Android malware, between 10% and 24% of the apps they analyzed could be described as malicious or unwanted applications."
The paper is How Did That Get In My Phone? Unwanted App Distribution on Android Devices by Platon Kotzias, Juan Caballero & Leyla Bilge.