Tuesday, March 7, 2023

On Trusting Trustlessness

Nearly five years ago some bad guys used "administrative backdoors" in a "smart contract" to steal $23.5M from Bancor. In response I wrote DINO and IINO, pointing out a fundamental problem with "smart contracts" built on blockchains. The technology was sold as "trustless":
A major misconception about blockchains is that they provide a basis of trust. A better perspective is that blockchains eliminate the need for trust.
But the "smart contracts" could either be:
  • immutable, implying that you are trusting the developers to write perfect code, which frequently turns out to be a mistake,
  • or upgradable, implying that you are trusting those with the keys to the contract, which frequently turns out to be a mistake.
The "smart contract" either is or is not mutable after deployment, there is no third possibility. Both cases require trust.

Now, in response to some good guys using an "unknown vulnerability" in a smart contract to recover $140M in coins looted in the Wormhole exploit, Molly White wrote The Oasis "counter-hack" and the centralization of defi on the same topic. Below the fold, I comment on her much better, much more detailed discussion of the implications of "smart contracts" that can be arbitrarily changed by their owners.

The basic problem here is that an "upgradable smart contract" isn't worth the bits it is printed on, because the terms of the "contract" can be changed after the parties have agreed to them, either by the owner of the "contract" or by the owners of any "upgradable smart contracts" it depends upon, who may or may not be among the parties.

Level K's explainer, Flexible Upgradability for Smart Contracts, has a list of the disadvantages (great except for the recursion):
  • The upgrade “owner” has full control, meaning full trust. In order to design a truly trustless contract that is also upgradable, the “owner” must itself be a trustless contract.
  • Syntax for interacting with key/value storage is more verbose than standard solidity state variable operations.
  • A flaw in a standardized and shared contract could lead to widespread damage across all dapps that consume the contract.
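The proxy pattern behind this kind of upgradability can be sketched in Python as pseudocode; real implementations are Solidity contracts using delegatecall, and all names here (ImplementationV1, "oazo", "mallory") are hypothetical stand-ins, not the actual Oasis code:

```python
class ImplementationV1:
    """Initial logic: only a vault's owner may withdraw from it."""
    def withdraw(self, vault, requester):
        return vault["balance"] if requester == vault["owner"] else 0

class ImplementationV2:
    """'Upgraded' logic: withdrawals are redirected to a new beneficiary."""
    def __init__(self, beneficiary):
        self.beneficiary = beneficiary
    def withdraw(self, vault, requester):
        return vault["balance"] if requester == self.beneficiary else 0

class Proxy:
    """Stable 'address' that forwards every call to whichever
    implementation the upgrade owner has most recently installed."""
    def __init__(self, upgrade_owner, impl):
        self.upgrade_owner = upgrade_owner
        self.impl = impl

    def upgrade(self, caller, new_impl):
        # The only safeguard is this ownership check: full control, full trust.
        if caller != self.upgrade_owner:
            raise PermissionError("not the upgrade owner")
        self.impl = new_impl

    def withdraw(self, vault, requester):
        return self.impl.withdraw(vault, requester)

vault = {"owner": "alice", "balance": 120}
proxy = Proxy(upgrade_owner="oazo", impl=ImplementationV1())
assert proxy.withdraw(vault, "alice") == 120   # the terms the parties agreed to
assert proxy.withdraw(vault, "mallory") == 0

# The upgrade owner swaps in new terms after the fact; no other party consented.
proxy.upgrade("oazo", ImplementationV2(beneficiary="mallory"))
assert proxy.withdraw(vault, "alice") == 0     # the original terms are gone
assert proxy.withdraw(vault, "mallory") == 120
```

The sketch makes Level K's first bullet concrete: users interact with the proxy's stable address, so whoever passes the ownership check can replace the "contract" wholesale.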
You'll need to read White's post for the details of how Jump Crypto used a court order to force Oasis:
“to take all necessary steps that would result in the retrieval of certain assets involved with the wallet address associated with the Wormhole Exploit”
The "necessary steps" involved updating an updateable multisig "smart contract":
the long and short of it is that a new wallet — almost certainly controlled by Jump — was added as a signer to the Oasis multisig. After that point, they upgraded the automation smart contract (enabled by the Wormhole exploiter for stop-loss protection) to a new proxy that allowed them to effectively reassign control of the vault to themselves.
A multisig is a group of N keys, M ≤ N of which must cooperate to create a valid signature. The idea of assigning control of an upgradable "smart contract" to a multisig is that the group is more trustworthy than an individual. This, of course, depends upon who the group members are, as White points out:
If Oasis wanted to upgrade a contract, four of its twelve multisig members needed to approve the decision. Sometimes multisig contracts are controlled by people from disparate organizations with relatively independent interests. Sometimes they’re all controlled by employees of a single entity. Sometimes multisig members are anonymous — leaving them ostensibly less vulnerable to attack or coercion, but also less subject to scrutiny.

Because Oasis is controlled by a company, Oazo, the key holders are likely just a group of Oazo employees, and the multisig is more of a safeguard against, say, a rogue employee than any sort of attempt at real decentralization of power.
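The quorum rule White describes, four of Oasis's twelve signers, is a simple M-of-N threshold. A minimal sketch (signer names hypothetical):

```python
def quorum_met(approvals, signers, m):
    """True when at least m distinct recognized signers have approved."""
    return len(set(approvals) & set(signers)) >= m

# 4-of-12, as White describes for Oasis (names are hypothetical)
signers = {f"oazo_employee_{i}" for i in range(12)}
proposal = ["oazo_employee_0", "oazo_employee_1", "oazo_employee_2"]
assert not quorum_met(proposal, signers, 4)   # three approvals: blocked

# Once a court order (or a wrench) persuades one more key holder...
proposal.append("oazo_employee_3")
assert quorum_met(proposal, signers, 4)       # ...the upgrade goes through
```

Note that the threshold only measures how many keys agree, not how independent their holders are; twelve keys held by employees of one company are, in practice, one key.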
White explains why upgradability has become pervasive:
This has become extremely commonplace in crypto, where project teams have realized that always writing perfectly bug-free code is a pipe dream, and that being able to patch that code is sometimes quite desirable. Upgradability is also used for other purposes besides bugfixing, including adding functionality without requiring users to migrate to a completely new contract.

Upgradeable smart contracts are slightly controversial in crypto, though not as much as I would expect. In fact, they seem to be becoming accepted as the norm. They solve real problems — fixing bugs is good! Adding new features is good! It’s annoying and expensive to migrate between contracts!
And why they are "slightly controversial":
crypto is big on “trustlessness” — that is, the idea that you shouldn’t have to trust any person or organization, you can simply trust the code. The thinking goes: if you can audit the smart contract code and if that code can never be changed, you don’t have to trust the people who wrote it.

With upgradable smart contracts, trustlessness is no longer a given. Anytime someone uses a project with an upgradable smart contract, or a project that depends on some other project with an upgradable smart contract (and so on up the dependency graph), they have to decide either that they trust whoever is in charge of deciding when and how to change the code, or that changes to the upgradable code couldn’t negatively impact them.
White is focused on upgradability, but I think she should have noted that the idea behind immutability, that:
if you can audit the smart contract code and if that code can never be changed, you don’t have to trust the people who wrote it
sounds great but is impractical for at least four reasons:
  • It may or may not be the case that the "smart contract" has source code available on GitHub or elsewhere. Even if it does, the source isn't what is immutably stored on the public blockchain; that is the byte code allegedly resulting from compiling the source. To be meaningful, an audit must de-compile the byte code from the blockchain and audit the result.
  • Very few users of "smart contracts" do or even could de-compile the byte code and audit it. They depend on assertions by the "smart contract" owner that it has been audited by some expert. These assertions are frequently false or exaggerated, because the contract owners aren't highly motivated to be honest.
  • Code audits by experts definitely reduce the incidence of vulnerabilities, but they don't eliminate them. Remember that The DAO, the first major "smart contract", was written by Gavin Wood, the expert creator of Solidity, the language it was written in. Even so, it contained a bug that destroyed $150M in notional value.
  • DataFinnovation's The Compliance-Innovation Trade-off uses fundamental computer science results to show that:
    the properties of the finance system cannot be checked automatically. The code can be reviewed, but if it calls external functions nothing can be proven.
    Thus there is no alternative to error-prone human expert audits. I couldn't find on their website any indication that the Oasis code had been the subject of a third-party audit.
White is more polite than I was when I made the same point in Sybil Defense, that there are huge costs to decentralization:
In order to create a trustless, censorship-resistant system, blockchain developers had to make a lot of tradeoffs. Broadly speaking, blockchains are slow. They don’t scale well. They’re expensive. There is no “undo” button. Blockchains are not this way because they are just poorly coded or because their developers don’t wish for them to be fast and inexpensive — there are certainly plenty of excellent developers working on blockchains. Blockchains are this way because they have to be in order to try to achieve that trustless, censorship-resistant ideological goal.
But, just as in the real world the techniques of decentralization are self-defeating, so the same techniques aimed at trustlessness turn out to be self-defeating:
But now we’ve just seen a very clear illustration of how multisig-controlled upgradable contracts, which have proliferated throughout crypto, undermine trustlessness and censorship-resistance. If a crypto project has accepted the huge costs that come with building on a blockchain as worthwhile tradeoffs to achieve trustlessness and censorship-resistance, but then undermines its own trustlessness and censorship-resistance by using multisig-controlled upgradable contracts, well, what was even the point?
I answered White's question in Economic Incentives:
Systems at least 1,000 times slower and 10,000 times more expensive than potential competitors that nevertheless succeed imply that they provide a return on their additional investment sufficiently greater than the potential competitor can. The permissioned system would accept the same inputs and generate the same outputs as the permissionless system, so the return cannot come from the performance of the system. The alternative explanation is that the permissioned system would be subject to regulation, preventing it from performing hugely profitable illegal transactions.
Or as David Gerard puts it, "The secret ingredient is still crime".

White makes an important point about the centralization of power in cryptocurrencies:
In crypto, it’s sort of accepted that people take on the risk of irreversible hacks and theft in the pursuit of trustlessness and censorship-resistance. But now we see that when it comes to projects like Oasis, most users are up a creek if their assets are stolen, but entities that are wealthy and powerful enough to coerce the multisig (in this case via a court), play under a different set of rules entirely.
States, those like Jump Crypto with the resources to persuade states, and criminals have ways to get contract owners to cooperate:
xkcd 538
When push comes to shove, and multisig members actually find themselves facing life-changing financial consequences, threats to their freedom, punishment that could have knock-on effects on their families or other loved ones, or even more serious threats, will they still feel that way? Or will a sufficient number of them just agree to comply? Oasis didn’t even push back on the court order, from the looks of it, much less defy it.

In this case, we’ve seen a state actor (the High Court of England and Wales) apply pressure to a multisig to reverse a fairly clear-cut theft. But there are certainly more scenarios in which state powers could apply pressure to projects to take far more controversial actions. Furthermore, the classic “wrench attack” is just as applicable to multisig members, and there are plenty of non-state powers, or more authoritarian states, who might seek to pressure multisig members in ways that go far beyond court orders.
White debunks another unrealistic argument:
The argument is often made that, sure, there are small groups that hold massive sway in these projects, but if they were to release a change that completely contradicted the crypto ethos, people could just not use the updated version, or they could fork. This, however, naively interprets these decisions as happening in a vacuum. In reality, large financial players using these projects also have major sway in what version of a project is broadly adopted.
We saw this when the big mining pools vetoed the increase in Bitcoin's block size, and White recounts the same effect in Ethereum's Merge:
when major stablecoin issuers like USDC and Tether announced support for the Ethereum proof-of-stake chain (and only that chain), people stopped wondering. No one really wanted to use the chain where stablecoins were worthless, particularly the projects that heavily rely on them, and people wanted to go where their assets had value and where their favorite projects were. Today, ETHPoW is, predictably, a ghost town.
People should have taken the lesson from the Bitcoin Cash fork in 2017 — BTC's "market cap" is now more than 180 times BCH's.

White concludes:
As it turns out, true trustlessness, decentralization, and censorship-resistance is hard. Many so-called defi projects sacrifice these ideals to varying degrees in exchange for ease of development and other benefits. However, a lot of people simply aren’t aware that these tradeoffs are being made, and are not cognizant of their resultant risk exposure — particularly when it comes in the form of counterparty risk that is a degree or two removed.
Obviously, those who "simply aren't aware" are not happy that their lack of awareness is being exposed:
While many celebrated the recovery, some were concerned about the precedent of a so-called defi platform changing a smart contract to remove funds from a wallet at the direction of a court. Some described the upgradability as a "backdoor". "If they'd do it for Jump, what does that say about possible coercion via state actors?" wrote one trader on Twitter.
White doesn't discuss a set of related problems I covered in Responsible Disclosure Policies, problems that increase with the size and independence of the multisig membership. If a "smart contract" needs to be upgraded to patch a bug or vulnerability, or to recover stolen funds, the multisig members need to (a) be told about it, (b) be given time to vote, and (c) keep the reason secret in the meantime, because anyone who knows it can exploit it before the fix takes effect. Benjamin Franklin wrote “Three may keep a secret, if two of them are dead.” This was illustrated by the $162M Compound fiasco:
"There are a few proposals to fix the bug, but Compound’s governance model is such that any changes to the protocol require a multiday voting window, and Gupta said it takes another week for the successful proposal to be executed."
Compound built a system where, if an exploit was ever discovered, the bad guys would have ~10 days to work with before it could be fixed. This issue is all the more important in an era of flash loan attacks when exploits can be instantaneous.
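The exposure window follows directly from the quoted governance delays; the three-day voting figure is illustrative of the "multiday voting window", while the week of execution delay is from the quote:

```python
from datetime import timedelta

voting_window = timedelta(days=3)    # "multiday voting window" (illustrative)
execution_delay = timedelta(days=7)  # "another week" before execution
exposure = voting_window + execution_delay
print(exposure.days)  # ~10 days during which a known bug stays exploitable
```

Against a flash loan attack that completes in a single transaction, a ten-day patch latency is effectively infinite.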

Further reading on this topic from Matt Levine and Emily Nicolle.

5 comments:

  1. Molly White spots yet another instance of a supposedly "DeFi" protocol being turned off by the key holders in Hedera Network halts access after exploit:

    "The Hedera network turned off access to the Hedera mainnet on March 9 after observing "smart contract irregularities". They subsequently confirmed that the Hedera smart contract service had been attacked by exploiters who were able to transfer individual users' tokens to their own accounts.
    ...
    Some balked at Hedera's ability to simply turn off user access to the network, despite claiming to be a decentralized project."

  2. This was slick. Molly White reports that PeopleDAO loses $120,000 after payment spreadsheet is shared publicly:

    "When the accounting lead for PeopleDAO accidentally shared an editable accounting spreadsheet link in a public Discord channel, an enterprising member of the Discord decided to take advantage. They inserted a row with their own wallet address for a 76 ETH (~$120,000) payment, then hid the row so it wouldn't display to the other viewers.

    When team leads reviewed the spreadsheet to sign off on the payments, they didn't see the row, and there was no rollup showing total payments or anything else that would've helped them catch the malicious activity. The transactions were uploaded to a tool allowing asset transfers via CSV, and the required six out of nine multisig members approved the transaction."

  3. As regards the efficacy of code audits, Molly White reports that Euler Finance exploited for almost $200 million:

    "The attacker stole $8.7 million in the Dai stablecoin, $18.5 million in wrapped Bitcoin, $135.8 million in Lido staked Ethereum (stETH), and $33.8 million in the USDC stablecoin. Although Euler was well known for its many code audits, the project had later added a vulnerable function that had not been as heavily audited."

    Thanks, Euler Finance, for the timely example!

  4. Of course, the security of your DeFi protocol depends upon the security of all the other DeFi protocols with which yours interoperates, as Molly White reports in Over $35 million lost as contagion from Euler hack spreads throughout defi:

    "Contagion from the massive exploit of the Euler project has spread to around a dozen defi projects, including Balancer, Angle Protocol, Yearn Finance, InverseFinance, and others. Some are still evaluating if and how they may be affected, and how much they've lost.
    Around $11.9 million of tokens were sent from the Balancer defi liquidity project to Euler during the attack, prompting Balancer to pause the project.

    The Angle Protocol decentralized stablecoin project also disclosed that almost half of the total value locked in the project — around $17.6 million in the USDC stablecoin — was sent to Euler during the hack."

  5. Molly White provides an example of the effectiveness of "smart contract" audits in Exactly Protocol hacked for at least $12 million:

    "Exactly writes on their website that they had been audited by four different firms: Chainsafe, Coinspect, ABDK, and Cryptecon."
