Thursday, September 29, 2022

Responsible Disclosure Policies

Recently, Uber was completely pwned, apparently by an 18-year-old. Simon Sharwood's Uber reels from 'security incident' in which cloud systems seemingly hijacked provides some initial details:
Judging from screenshots leaked onto Twitter, though, an intruder has compromised Uber's AWS cloud account and its resources at the administrative level; gained admin control over the corporate Slack workspace as well as its Google G Suite account that has over 1PB of storage in use; has control over Uber's VMware vSphere deployment and virtual machines; access to internal finance data, such as corporate expenses; and more.
And in particular:
Even the US giant's HackerOne bug bounty account was seemingly compromised, and we note is now closed.

According to the malware librarians at VX Underground, the intruder was using the hijacked H1 account to post updates on bounty submissions to brag about the degree of their pwnage, claiming they have all kinds of superuser access within the ride-hailing app biz.

It also means the intruder has access to, and is said to have downloaded, Uber's security vulnerability reports.
Thus one result of the incident is the "irresponsible disclosure" of the set of vulnerabilities Uber knows about and, presumably, would eventually have fixed. "Responsible disclosure" policies have driven significant improvements in overall cybersecurity in recent years, but developing and deploying fixes takes time. For responsible disclosure to be effective, the vulnerabilities must be kept secret while this happens.

Stewart Baker points out in Rethinking Responsible Disclosure for Cryptocurrency Security that these policies are hard to apply to cryptocurrency systems. Below the fold I discuss the details.

Baker summarizes "responsible disclosure":
There was a time when software producers treated independent security research as immoral and maybe illegal. But those days are mostly gone, thanks to rough agreement between the producers and the researchers on the rules of “responsible disclosure.” Under those rules, researchers disclose the bugs they find “responsibly”—that is, only to the company, and in time for it to quietly develop a patch before black hat hackers find and exploit the flaw. Responsible disclosure and patching greatly improves the security of computer systems, which is why most software companies now offer large “bounties” to researchers who find and report security flaws in their products.

That hasn’t exactly brought about a golden age of cybersecurity, but we’d be in much worse shape without the continuous improvements made possible by responsible disclosure.
Baker identifies two fundamental problems for cryptocurrencies:
First, many customers don’t have an ongoing relationship with the hardware and software providers that protect their funds—nor do they have an incentive to update security on a regular basis. Turning to a new security provider or using updated software creates risks; leaving everything the way it was feels safer. So users won’t be rushing to pay for and install new security patches.
Users have also been deluged with accounts of phishing and other scams involving updating or installing software, so are justifiably skeptical of "patch now" messages. In fact, most users don't even try to use cryptocurrency directly, but depend on exchanges. Thus their security depends upon that of their exchange. Exchanges have a long history of miserable security, stretching back eight years to Mt. Gox and beyond. The brave souls who do use cryptocurrency directly depend on the security of their wallet software, which again has a long history of vulnerabilities.

Next, Baker points to the ideology of decentralization as a problem:
That means that the company responsible for hardware or software security may have no way to identify who used its product, or to get the patch to those users. It also means that many wallets with security flaws will be publicly accessible, protected only by an elaborate password. Once word of the flaw leaks, the password can be reverse engineered by anyone, and the legitimate owners are likely to find themselves in a race to move their assets before the thieves do.
Molly White documents a recent example of both problems in Vulnerability discovered in vanity wallet generator puts millions of dollars at risk:
The 1inch Network disclosed a vulnerability that some of their contributors had found in Profanity, a tool used to create "vanity" wallet addresses by Ethereum users. Although most wallet addresses are fairly random-looking, some people use vanity address generators to land on a wallet address like 0xdeadbeef52aa79d383fd61266eaa68609b39038e (beginning with deadbeef), ... However, because of the way the Profanity tool generated addresses, researchers discovered that it was fairly easy to reverse the brute force method used to find the keys, allowing hackers to discover the private key for a wallet created with this method.

Attackers have already been exploiting the vulnerability, with one emptying $3.3 million from various vanity addresses. 1inch wrote in their blog post that "It’s not a simple task, but at this point it looks like tens of millions of dollars in cryptocurrency could be stolen, if not hundreds of millions."

The maintainer of the Profanity tool removed the code from Github as a result of the vulnerability. Someone had raised a concern about the potential for such an exploit in January, but it had gone unaddressed as the tool was not being actively maintained.
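The core of the Profanity flaw was insufficient entropy: keys were derived from a seed space small enough to enumerate. The toy sketch below (simplified hashes standing in for secp256k1 and keccak256, which real Ethereum addresses use; the names and the 24-bit seed space are illustrative assumptions, not Profanity's actual algorithm) shows why a key generator seeded with too few random bits lets an attacker recover the private key from the public address alone:

```python
import hashlib
from typing import Optional

def toy_keypair(seed: int):
    """Derive a deterministic 'private key' and 'address' from a small seed.
    Toy stand-in for a generator with only a few bits of entropy; real
    Ethereum key generation uses secp256k1 and keccak256, omitted here."""
    priv = hashlib.sha256(seed.to_bytes(4, "big")).hexdigest()
    addr = hashlib.sha256(bytes.fromhex(priv)).hexdigest()[:40]
    return priv, addr

def recover_private_key(target_addr: str, seed_bits: int = 24) -> Optional[str]:
    """Brute-force the whole seed space until the derived address matches.
    With a 24-bit seed space this is ~16M hashes; trivial for an attacker."""
    for seed in range(1 << seed_bits):
        priv, addr = toy_keypair(seed)
        if addr == target_addr:
            return priv
    return None

# A victim generates a wallet from some seed; the attacker, knowing only
# the public address, enumerates all seeds and recovers the key.
victim_priv, victim_addr = toy_keypair(123456)
assert recover_private_key(victim_addr) == victim_priv
```

The same arithmetic scales up: a properly random 256-bit key is unsearchable, but constraining key generation to a small or predictable seed space, as the Profanity researchers found, collapses the search to something commodity GPUs can finish quickly.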
It is actually remarkable that it took seven months from the revelation of the potential vulnerability to its exploitation. And the exploits continue, as White reports in Wintermute hacked for $160 million:
Wintermute hasn't disclosed more about the attack, but it's possible that the hacker may have exploited the vulnerability in the vanity wallet address generator Profanity, which was disclosed five days prior. The crypto asset vault admin had a wallet address prefixed with 0x0000000, a vanity address that would have been susceptible to attack if it was created using the Profanity tool.
But everything is fine because the CEO says the company is "solvent with twice over that amount in equity left". Apparently losing one-third of your equity to a thief is no big deal in the cryptosphere.

Baker describes rapid exploitation of such vulnerabilities as "nearly guaranteed" because of the immediate financial reward, and provides two more examples from last month:
In one, hackers took nearly $200 million from Nomad, a blockchain “bridge” for converting and transferring cryptocurrencies. One user began exploiting a flaw in Nomad’s smart contract code. That tipped others to the exploit. Soon, a feeding frenzy broke out, quickly draining the bridge of all available funds. In the other incident, Solana, a cryptocurrency platform, saw hackers drain several million dollars from nearly 8,000 wallets, probably by compromising the security of their seed phrases, thus gaining control of the wallets.
Baker summarizes:
Together, these problems make responsible disclosure largely unworkable. It’s rarely possible to fill a security hole quietly. Rather, any patch is likely to be reverse engineered when it’s released and exploited in a frenzy of looting before it can be widely deployed. (This is not a new observation; the problem was pointed out in a 2020 ACM article that deserves more attention.)

If I’m right, this is a fundamental flaw in cryptocurrency security. It means that hacks and mass theft will be endemic, no matter how hard the industry works on security, because the responsible disclosure model for curing new security flaws simply won’t work.
[Figure: Böhme et al, Fig. 2]
The 2020 paper Baker cites is Responsible Vulnerability Disclosure in Cryptocurrencies by Rainer Böhme, Lisa Eckey, Tyler Moore, Neha Narula, Tim Ruffing and Aviv Zohar. The authors describe the prevalence of vulnerabilities thus:
The cryptocurrency realm itself is a virtual "wild west," giving rise to myriad protocols each facing a high risk of bugs. Projects rely on complex distributed systems with deep cryptographic tools, often adopting protocols from the research frontier that have not been widely vetted. They are developed by individuals with varying level of competence (from enthusiastic amateurs to credentialed experts), some of whom have not developed or managed production-quality software before. Fierce competition between projects and companies in this area spurs rapid development, which often pushes developers to skip important steps necessary to secure their codebase. Applications are complex as they require the interaction between multiple software components (for example, wallets, exchanges, mining pools). The high prevalence of bugs is exacerbated by them being so readily monetizable. With market capitalizations often measured in the billions of dollars, exploits that steal coins are simultaneously lucrative to cybercriminals and damaging to users and other stakeholders. Another dimension of importance in cryptocurrencies is the privacy of users, whose transaction data is potentially viewable on shared ledgers in the blockchain systems on which they transact. Some cryptocurrencies employ advanced cryptographic techniques to protect user privacy, but their added complexity often introduces new flaws that threaten such protections.
Böhme et al describe two fundamental differences between the disclosure and patching process in normal software and cryptocurrencies. First, coordination:
the decentralized nature of cryptocurrencies, which must continuously reach system-wide consensus on a single history of valid transactions, demands coordination among a large majority of the ecosystem. While an individual can unilaterally decide whether and how to apply patches to her client software, the safe activation of a patch that changes the rules for validating transactions requires the participation of a large majority of system clients. Absent coordination, users who apply patches risk having their transactions ignored by the unpatched majority.

Consequently, design decisions such as which protocol to implement or how to fix a vulnerability must get support from most stakeholders to take effect. Yet no developer or maintainer naturally holds the role of coordinating bug fixing, let alone commands the authority to roll out updates against the will of other participants. Instead, loosely defined groups of maintainers usually assume this role informally.

This coordination challenge is aggravated by the fact that unlike "creative" competition often observed in the open source community (for example, Emacs versus vi), competition between cryptocurrency projects is often hostile. Presumably, this can be explained by the direct and measurable connection to the supporters' financial wealth and the often minor technical differences between coins. The latter is a result of widespread code reuse, which puts disclosers into the delicate position of deciding which among many competing projects to inform responsibly. Due to the lack of formally defined roles and responsibilities, it is moreover often difficult to identify who to notify within each project. Furthermore, even once a disclosure is made, one cannot assume the receiving side will act responsibly: information about vulnerabilities has reportedly been used to attack competing projects, influence investors, and can even be used by maintainers against their own users.
The second is controversy, which:
emerges from the widespread design goal of "code is law," that is, making code the final authority over the shared system state in order to avoid (presumably fallible) human intervention. To proponents, this approach should eliminate ambiguity about intention, but it inherently assumes bug-free code. When bugs are inevitably found, fixing them (or not) almost guarantees at least someone will be unhappy with the resolution. ... Moreover, situations may arise where it is impossible to fix a bug without losing system state, possibly resulting in the loss of users' account balances and consequently their coins. For example, if a weakness is discovered that allows anybody to efficiently compute private keys from data published on the blockchain, recovery becomes a race to move to new keys because the system can no longer tell authorized users and attackers apart. This is a particularly harmful consequence of building a system on cryptography without any safety net. The safer approach, taken by most commercial applications of cryptography but rejected in cryptocurrencies, places a third party in charge of resetting credentials or suspending the use of known weak credentials.
I discussed the forthcoming ability to "efficiently compute private keys" in The $65B Prize.

Böhme et al go on to detail seven episodes in which cryptocurrencies' vulnerabilities were exploited. In some cases disclosure was public and exploitation was rapid, in other cases the developers were informed privately. A pair of vulnerabilities in Bitcoin provides an example:
a developer from Bitcoin Cash disclosed a bug to Bitcoin (and other projects) anonymously. Prior to the Bitcoin Cash schism, an efficiency optimization in the Bitcoin codebase mistakenly dropped a necessary check. There were actually two issues: a denial-of-service bug and potential money creation. It was propagated into numerous cryptocurrencies and resided there for almost two years but was never exploited in Bitcoin. ... The Bitcoin developers notified the miners controlling the majority of Bitcoin's hashrate of the denial-of-service bug first, making sure they had upgraded so that neither bug could be exploited before making the disclosure public on the bitcoin-dev mailing list. They did not notify anyone of the inflation bug until the network had been upgraded.
The authors conclude with a set of worthy recommendations for improving the response to vulnerabilities, as Baker does also. But they all depend upon the existence of trusted parties to whom the vulnerability can be disclosed, and who are in a position to respond appropriately. In a truly decentralized, trustless system such parties cannot exist. None of the recommendations addresses the fundamental problem, which, as I see it, is this:
  • Cryptocurrencies are supposed to be decentralized and trustless.
  • Their implementations will, like all software, have vulnerabilities.
  • There will be a delay between discovery of a vulnerability and the deployment of a fix to the majority of the network nodes.
  • If, during this delay, a bad actor finds out about the vulnerability, it will be exploited.
  • Thus, if the vulnerability is not to be exploited, knowledge of it must be restricted to trusted developers who are able to ensure upgrades without revealing their true purpose (i.e. the vulnerability). This violates the goals of trustlessness and decentralization.
This problem is particularly severe in the case of upgradeable "smart contracts" with governance tokens. In order to patch a vulnerability, the holders of governance tokens must vote. This process:
  • Requires public disclosure of the reason for the patch.
  • Cannot be instantaneous.
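The bind can be sketched with a toy model of token-holder governance (the class, method names, and periods below are hypothetical illustrations, not any real governance framework):

```python
class GovernedContract:
    """Toy model of an upgradeable 'smart contract' gated by a governance
    vote. It illustrates why a security patch can be neither secret nor
    instantaneous: the proposal is public on-chain, and the voting period
    plus timelock impose a delay during which the flaw is exploitable."""
    VOTING_PERIOD = 3   # blocks during which token holders vote (illustrative)
    TIMELOCK = 2        # additional delay before an approved patch executes

    def __init__(self):
        self.block = 0       # current block height
        self.proposals = []  # every proposal is publicly visible on-chain

    def propose(self, description: str) -> int:
        # The description -- e.g. "fix reentrancy in withdraw()" -- is
        # published on-chain, disclosing the vulnerability to attackers.
        self.proposals.append({"desc": description, "start": self.block})
        return len(self.proposals) - 1

    def earliest_execution(self, pid: int) -> int:
        # The patch cannot take effect before voting and timelock elapse.
        p = self.proposals[pid]
        return p["start"] + self.VOTING_PERIOD + self.TIMELOCK

pool = GovernedContract()
pid = pool.propose("fix reentrancy in withdraw()")
# Five blocks must pass before the publicly disclosed fix is live.
assert pool.earliest_execution(pid) == 5
```

However short the voting period and timelock are set, the window between public proposal and execution is exactly the race the attacker needs to win.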
If cryptocurrencies are not decentralized and trustless, what is their point? Users have simply switched from trusting visible, regulated, accountable institutions backed by the legal system, to invisible, unregulated, unaccountable parties effectively at war with the legal system. Why is this an improvement?
