The blockchain trilemma
Much of the innovation in blockchain technology has been aimed at wresting power from centralised authorities or monopolies. Unfortunately, the blockchain community’s utopian vision of a decentralised world is not without substantial costs. In recent research, we point out a ‘blockchain trilemma’ – it is impossible for any ledger to fully satisfy the three properties shown in Figure 1 simultaneously (Abadi and Brunnermeier 2018). In particular, decentralisation has three main costs: waste of resources, scalability problems, and network externality inefficiencies.

Below the fold, some commentary.
BA's conclusion (reformatted):
The blockchain trilemma highlights the key economic trade-offs in designing a ledger. Traditional ledgers, managed by a single entity, forgo the desired feature of decentralisation. A centralised ledger writer is incentivised to report honestly because he does not wish to jeopardise his future profits and franchise value. Blockchains can eliminate the rents extracted by centralised intermediaries through two types of competition: free entry of writers and fork competition. Decentralisation comes at the cost of efficiency, however.

In their paper Blockchain Economics, Abadi and Brunnermeier write:
Furthermore, the ideal of decentralisation may be unattainable when substantial legal enforcement by the government is necessary ... for the blockchain to function properly.
- Free entry completely erodes writers' future profits and franchise values. The ledger's correctness must rely on the purely static incentives provided by proof of work.
- Fork competition facilitates competition between ledgers but can lead to instability and produce too many ledgers.
Finally, we informally make the important point that while blockchains guarantee transfers of ownership, some sort of enforcement is required to ensure transfers of possession.

When banks were fraudulently transferring ownership by robosigning foreclosure documents, they had to get the assistance of sheriffs to obtain possession. The problem with MERS, the record-keeping system that enabled the frauds, was not that it was centralized, but that it was subject to "garbage in, garbage out".
The first thing to note is that, as Eric Budish has shown in The Economic Limits of Bitcoin and the Blockchain, the high cost of adding a block to a chain is not an unfortunate side-effect; it is essential to maintaining the correctness of the chain, and it limits the value of the largest transaction it is safe to allow into the chain. This supports BA's analysis that if the system is decentralized and correct it must be inefficient.
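Budish's argument can be sketched in a few lines. In a highly simplified model (my gloss, not Budish's exact formulation): with free entry, miners' cost per block is driven up to roughly the block reward, and an attacker who can rent hashpower need only out-mine the network for the confirmation window, so the largest transaction it is "safe" to accept is bounded by the honest mining cost over that window. All the numbers below are illustrative assumptions.

```python
# Toy sketch of the economic-limit bound: attack cost ~ honest mining
# spend over the confirmation window, so any transaction worth more than
# that is a profitable double-spend target.

def max_safe_transaction(block_reward_usd: float,
                         confirmations: int,
                         attack_premium: float = 1.0) -> float:
    """Rough upper bound on a safe transaction value.

    Free entry pushes per-block mining cost up to the block reward, so
    out-mining the network for `confirmations` blocks costs about
    confirmations * block_reward_usd * attack_premium.
    """
    return confirmations * block_reward_usd * attack_premium

# Illustrative (assumed) numbers: 6.25 BTC reward at $10,000/BTC,
# 6 confirmations, no premium over honest miners' costs.
limit = max_safe_transaction(block_reward_usd=6.25 * 10_000, confirmations=6)
print(f"Largest 'safe' transaction under these assumptions: ${limit:,.0f}")
```

Note the direction of the trade-off: making the chain cheaper to run (more "efficient") directly lowers this bound, which is the inefficiency BA's trilemma predicts.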
Second, BA's analysis is of Platonic ideal blockchains, for which the assumption holds that miners are abundant and independent, and decentralization is an achievable goal:
- Miners of successful blockchains in the real world, such as Bitcoin's and Ethereum's, are not independent; they collude in a few large mining pools to achieve reasonably smooth income. The two largest Bitcoin pools are apparently both controlled by Bitmain. They only have to collude with one other pool to mount a 51% attack.
- Miners of less successful blockchains may be independent but they are not abundant. The availability of mining-as-a-service means 51% attacks on these chains are becoming endemic.
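The coalition arithmetic behind both bullets is simple enough to sketch. Given a table of pool hashrate shares (the shares below are hypothetical, not measurements), a greedy pass from the largest pool down finds the smallest coalition that crosses the 51% threshold:

```python
# Hedged sketch: how few pools must collude to control a majority of
# hashrate, given (hypothetical, illustrative) hashrate shares.

def smallest_majority_coalition(shares: dict) -> list:
    """Greedily accumulate pools, largest share first, until the
    coalition's combined share exceeds 50%. Returns all pools if no
    majority is reachable."""
    coalition, total = [], 0.0
    for pool, share in sorted(shares.items(), key=lambda kv: -kv[1]):
        coalition.append(pool)
        total += share
        if total > 0.5:
            return coalition
    return coalition

# Illustrative distribution only -- two large pools suffice here.
shares = {"PoolA": 0.30, "PoolB": 0.22, "PoolC": 0.15,
          "PoolD": 0.10, "PoolE": 0.08}
print(smallest_majority_coalition(shares))
```

With real-world distributions the answer for large chains has repeatedly been "two or three pools"; for small chains the question is moot, since a single renter of mining-as-a-service capacity can exceed the chain's entire hashrate.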
Mine the Gap: Bitcoin and the Maintenance of Trustlessness by Gili Vidan and Vili Lehdonvirta (VL) examines how the belief in the trustless nature of blockchains is maintained despite the fact that they aren't decentralized or, in practice, immutable:
The division between tangible, knowable core code, considered as internal to the Bitcoin ecosystem, and the disruptive actions of actors outside of what constitutes the core allows for the network to maintain its narrative of algorithmic decentralization when facing contradictory evidence. Why do users continue to trust in code, especially in this particular code, in the face of such breakdowns?

From VL's abstract:
In contrast to the discourse, we find that power is concentrated to critical sites and individuals who manage the system through ad hoc negotiations, and who users must therefore implicitly trust—a contrast we call Bitcoin’s “promissory gap.” But even in the face of such contradictions between premise and reality, the discourse is maintained. We identify four authorizing strategies used in this work.

The first of the strategies is:
the collapse of users and their representations on the network into the aggregation of CPUs that power the network. This ambiguity with regards to the identity of the Bitcoin community — individual human actors or their dedicated machines — allows the network to be portrayed as a self-regulating system not susceptible to human foibles, and simultaneously as an enabler of direct action. Under the edict of ‘one-CPU-one-vote,’ any incident within the Bitcoin network is at once the expected result of running the protocol and the enforcement of an expressed consensus of its users. The Bitcoin protocol reimagines its constituency as a mass of CPUs.

The second is market liberalism ideology:
the assumption of rational, self-interested agents. When CPUs as stand-ins for a mass of individual users appeared to have been accumulated at the hands of a single actor such as GHash.IO, both the pool operators and the core developers issued statements to reassure users that it would not be in the pool’s rational self-interest to undermine the network or to go over the 51 percent threshold. ... Developers did not hesitate to predict what the pools would or would not do based on rational choice, even though the original developer had failed to predict the (rational) emergence of pools in the first place.

The third is trust in experts(!):
The belief that cryptographic know-how should grant particular actors in the network governing power enables the simultaneous elevation of Nakamoto’s paper as the ultimate authority of keeping the network within the bounds of its intended purpose, and the acceptance of the Core Development Team as the legitimate body to carry out updates of the code. A commitment to technocratic order first enrolls users in the network through the promise of a decentralized system that ensures the need to trust no-one, and then, when the system’s unsettled and unpredictable nature becomes visible, the technocratic order privileges certain actors as legitimate holders of centralized power until the infrastructure can be stabilized again. Collapsing the difference between users and CPUs further facilitates this technocratic structure, because if the Bitcoin network is composed of machines, then who better to rule it than engineers.

The final strategy is "it's just a bug":
The fourth and final discursive strategy is casting problems as temporary bugs that will not be present in the final, ideal version of the code. Instead of critically reflecting on the shortcomings of the ‘trust in code’ narrative, participants are asked to ignore contradictions as limitations of a particular implementation of the code. Sites of centralized power are cast not as inherent consequences of the architecture, as features of it, but rather as temporary shortcomings to be overcome in later iterations of the code, as bugs to be patched. Bitcoin’s and blockchain’s initial appeal comes from the promise of a one-time buy-in into infallible code, which will be from that moment on fixed, knowable, and autonomous. Once issues of centralization emerge, however, the code then becomes a malleable experiment, subject to iterations and improvements to address the temporary aberration. If anything is maintained as fixed, it is the belief that trustlessness can be engineered, the belief that Nakamoto’s elegant vision is almost within reach through minor technical adjustments. Participants are asked to trust if not this version of the code, then the next one, in perpetuity. It is of no consequence whether solutions are in sight today, because the peer production model will keep iterating until they are.

Thus as problems appear, such as the uselessness of Bitcoin for actual transactions, fixes such as the Lightning Network are layered on top of the inadequate underlying technology. And when problems are found in the Lightning Network, such as the difficulty of routing leading to the emergence of centralized "banks", another layer will be created. Layering can be an independent activity, whereas fixing the underlying technology involves obtaining consensus from its governance structure. The history of Bitcoin's blocksize shows how difficult this can be.
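The routing-centralization dynamic can be illustrated with a toy model (an assumed topology, not real Lightning data, and a much simpler pathfinding rule than any actual implementation uses): a payment can only traverse channels whose capacity covers the amount, so as payment sizes grow, viable routes collapse onto a few well-funded hub nodes.

```python
from collections import deque

# Toy sketch: payment routing on a channel graph. A route is feasible
# only if every channel on it has capacity >= the payment amount.

def find_route(channels, src, dst, amount):
    """Breadth-first search for any feasible path. `channels` maps
    (node_a, node_b) -> capacity; channels are treated as bidirectional
    and capacities as symmetric (a simplification)."""
    graph = {}
    for (a, b), cap in channels.items():
        graph.setdefault(a, []).append((b, cap))
        graph.setdefault(b, []).append((a, cap))
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt, cap in graph.get(path[-1], []):
            if nxt not in seen and cap >= amount:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no route can carry this amount

# Hypothetical network: small peer-to-peer channels plus one rich hub.
channels = {("alice", "bob"): 50, ("bob", "carol"): 50,
            ("alice", "hub"): 1000, ("hub", "carol"): 1000}
print(find_route(channels, "alice", "carol", 30))   # peer route suffices
print(find_route(channels, "alice", "carol", 500))  # only the hub works
```

Even in this four-node toy, every payment larger than the typical peer channel must transit the hub, which is exactly how channel economics push a nominally peer-to-peer layer toward centralized "banks".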