Decentralization, what is it good for?

Decentralization is a means to an end. In the case of blockchains, decentralization typically achieves two ends: double spending prevention and censorship resistance — these map, respectively, to the properties of safety and liveness of a consensus algorithm. Consensus algorithms do not require decentralization, but they do require an honest majority. Decentralization helps maintain an honest majority insofar as it forces smaller bad actors to collude with each other or corrupt many honest participants to attack the network.

These are relevant functions of decentralization. The goal is not to create a “participative system” for the sake of it but to create a safe and resilient system. Unfortunately, decentralization is a pretty poor tool for the job:

  • decentralization implies redundancy and thus higher costs
  • economic efficiency pushes the system towards centralization, and decentralization generally only survives due to friction
  • relying on parties not colluding is not great as far as security assumptions go. I trust the difficulty of the discrete logarithm problem in elliptic-curve groups far more than I trust a bunch of random people not to be tricked or coerced into attacking the network

And yet, and yet… decentralization is all we have. It would be fantastic to obtain safety and liveness properties from mere cryptographic assumptions, but that is not the case. Mathematical primitives live in the abstract world of statelessness; ledgers and blockchains live in the concrete world of statefulness, hence the omnipresence of the human element in their security models.

Some good news: double-spending attacks by validators are a limited concern. These attacks must generally target exchanges or bridges because these are fast and automated. It’s tough to commit other types of fraud by corrupting the chain to cause a double spend. Exchanges have means of protecting themselves, especially if the chain offers quick finality, which only leaves cross-chain bridges as sitting ducks.

To be sure, if merchants couldn’t rely on transactions being final, the use of these systems would be moot, but the vast majority of transactions are too small to be worth launching a double-spend attack on the entire chain. Moreover, in a world where most people are connected most of the time, the presence of forked blocks more than a few hours in the past confuses essentially no one. To be clear, double-spending attacks are a limited concern only because the protocol’s security properties make them hard: the chain would be useless if double spending were easy, costless, and could happen willy-nilly. In fact, if you do not have a double-spending problem, you generally do not need a blockchain at all.

Moreover, these attacks, if and when they happen, can be quickly detected and mitigated. In proof-of-stake systems, the perpetrators can be eliminated from the pool of participants, they can be severely punished, and the entire economy can benefit due to the deflationary pressure of the slashing event. From that standpoint, the security properties of proof-of-stake are far more desirable than those of proof-of-work, despite the popular wish to see tradeoffs in everything.

Realistically, complete liveness attacks that result in halting the chain aren’t a major concern either. They, too, can be remedied with slashing and restarting. The main arguments against this are aesthetic, not pragmatic.

The big, thorny issue is the selective censorship of transactions. That’s the one to worry about, the property most crucially dependent on decentralization. More on this in a little bit.

These observations lead to a few conclusions:

  • Cross-chain bridges are brittle things, even when they use light-client security
  • It’s best not to depend on decentralization if you can at all avoid it
  • Not depending on decentralization for censorship resistance would be great
  • Turning safety attacks and censorship attacks into full-blown liveness faults is a good thing, actually

The first point is an argument for rollups, which let multiple independent ledgers bridge with each other without any loss of security (no honest-majority assumption is required, unlike light-client bridges in, say, Cosmos).

The second point is an argument against depending on decentralization to preserve the ledger’s integrity. Optimistic rollups rely on including challenges in the chain for their security, which means validators can collude to censor those challenges and steal assets from rollups. Fortunately, such a censorship attack can be readily observed. If the finalization period for rollups is long enough, and the rollups are enshrined in the protocol, it becomes possible to hard fork out of an unlikely concerted attack. Again, arguments against such approaches are primarily aesthetic, not pragmatic. Polkadot parachains, similar to rollups in many ways, can transfer assets to each other very quickly, but this comes at the cost of trusting the validators not to collude to steal assets, because there is no period in which censorship can be mitigated.

The third point is an interesting open research problem. Can we stop censorship on a ledger controlled by bad actors? Perhaps. As mentioned above, censorship can be readily observed. A user posting a transaction can make it quite clear to everyone running a node that they have indeed signed this transaction, and if it fails to be included in a block, observers can infer that censorship took place. This works because, in practice, honest actors have synchronous channels over which to coordinate, given sufficiently large synchronization times. Detecting censorship is one thing, but can it actually be stopped? One scheme to convert a censorship attack into a liveness attack takes the following form:

Assume a transaction is emitted at time \(t\) and seen by all nodes by time \(t + dt\), where \(dt\) is suitably large that we need not worry about network latency, partitions, etc. Node \(i\) sees the transaction at time \(t_i \in [t, t + dt]\). If the transaction has not been included by time \(t_i + T\), for some grace period \(T\), then for each subsequent block, nodes probabilistically reject blocks that do not contain the censored transaction. They do so based on a common draw, meaning they will all reject at once or all accept, so long as they agree that time \(T\) has passed.

If the probability of rejection is high, there is a risk that nodes will reject a block at a time when there is disagreement on whether \(T\) has elapsed. If the probability is low, then the risk of disagreement is lower, but it will take a long time before the chain halts for lack of inclusion. Typically, the probability should be inversely proportional to \(dt\). To be sure, this is not a very good solution. It’s slow and cumbersome, and it would likely only make sense to apply it to fraud proofs or similar critical operations. However, it is a nice proof of existence that there indeed are schemes that can turn censorship by block producers into liveness faults.
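The scheme above can be sketched in code. This is a minimal simulation, not a protocol specification: it assumes nodes count the grace period in blocks rather than wall-clock time, `GRACE_PERIOD` and `REJECT_PROB` are illustrative values, and the common draw is stood in for by hashing the block height against a seed all honest nodes are assumed to share (e.g., from a randomness beacon).

```python
import hashlib

GRACE_PERIOD = 10   # T, in blocks (illustrative)
REJECT_PROB = 0.2   # per-block rejection probability; would scale ~1/dt in practice

def common_draw(block_height: int, tx_id: str, seed: bytes = b"shared-beacon") -> float:
    """A deterministic 'coin flip' in [0, 1) that every honest node can
    compute identically, so they all reject at once or all accept."""
    h = hashlib.sha256(seed + tx_id.encode() + block_height.to_bytes(8, "big")).digest()
    return int.from_bytes(h[:8], "big") / 2**64

def should_reject(block_height: int, txs_in_block: set, censored_tx: str,
                  first_seen_height: int) -> bool:
    """A node's rule: once the grace period has elapsed without inclusion,
    probabilistically reject blocks missing the censored transaction."""
    if censored_tx in txs_in_block:
        return False  # transaction included: nothing to enforce
    if block_height < first_seen_height + GRACE_PERIOD:
        return False  # still within the grace period T
    return common_draw(block_height, censored_tx) < REJECT_PROB
```

Because `common_draw` is deterministic, every node that agrees the grace period has elapsed rejects or accepts the same blocks in lockstep, which is what turns sustained censorship into a liveness fault; the disagreement risk discussed above arises only at the boundary where nodes differ on whether \(T\) has passed.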

In summary, despite its cost, decentralization remains an essential tool for blockchains, particularly for ensuring censorship resistance. While maintaining or improving decentralization is a great thing, there are promising research avenues for minimizing the reliance on decentralization, and both should be conducted in parallel.
