It doesn’t take much to bring out tribalism when comparing blockchain technology. I have been working on blockchain tech since 2009, and one thing I find helpful is to consider all of the design tradeoffs people can make. It isn’t as simple as “fastest”, “most scalable”, “most decentralized”, or “best governance”. This post will look into some of the less frequently considered issues when picking which blockchain technology is best for your application.
Trusted vs Untrusted Governance
There are many different kinds of governance systems in play on various blockchains, and not all of them are suitable for building trust. For example, Delegated Proof of Work (e.g. Bitcoin & Ethereum) is a voting system whereby mining pools determine which subset of valid transactions is included in blocks. The nodes producing the blocks are not presumed to have any “higher level of trust” than anyone else, and therefore the protocol is designed so that they can “do no harm” simply by producing a block.
Under Delegated Proof of Stake, block producers are elected by token-holder voting. There is a presumption that these elected nodes can be trusted with certain things that could bring down the network if that trust were violated. Two big examples from EOS include:
1. Acting as an oracle for computational runtime (a single bad actor could stall the network by under-billing or by falsely attesting that code contains no infinite loop)
2. Deploying system contract updates (⅔+ required to harm the network)
In a network where anyone can propose a block, all validators must share an objective measure of CPU billing. This is how Ethereum works, but it comes at a cost: slower performance, and attack vectors wherever the simulated objective billing diverges from real-world CPU time.
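The idea of objective billing can be sketched in a few lines. The opcode names and costs below are purely illustrative, not any real chain’s gas schedule; the point is that every node charges the same deterministic price per operation, so a runaway computation halts at the same instruction on every honest node regardless of how fast their hardware is:

```python
# Illustrative objective ("gas") metering. Costs and opcodes are made up;
# what matters is that billing is deterministic, never wall-clock based.
GAS_COSTS = {"ADD": 3, "MUL": 5, "STORE": 100}

def run_metered(program, gas_limit):
    """Execute a list of ops, halting deterministically when gas runs out."""
    gas_used = 0
    for op in program:
        cost = GAS_COSTS[op]
        if gas_used + cost > gas_limit:
            # Every honest node halts at exactly the same operation,
            # so an infinite loop cannot stall the network.
            raise RuntimeError("out of gas")
        gas_used += cost
    return gas_used
```

The tradeoff the paragraph above describes is visible here: the deterministic cost table must be conservative enough to cover the slowest realistic hardware, which is exactly the mismatch between simulated billing and real CPU time.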
Other proof of stake systems, such as Ouroboros, allow any account to produce blocks by simulating mining with staking. This fundamentally limits their smart contract systems to Ethereum-style objective resource counting. When you have an open set of producers with no “trust gate”, your code must make performance compromises that a “trust but verify” system like DPOS can avoid.
Your choice of consensus algorithm impacts a lot more than just how you reach consensus.
A challenge faced by all blockchains is how a user can ensure that their “valid transaction” actually gets recorded on chain without interference from others. In principle, the more “independent” and “non-collusive” the entities producing and confirming blocks, the greater the chance that one of them will include your transaction. Worst case, you may have to produce your own block.
The story of censorship resistance doesn’t end with the hand-waving answer of “you cannot be censored if you are willing to buy mining hardware”. The only sure way to avoid being censored is to have 51% of mining power. Without 51% of mining power, the mining pool operators can simply ignore any block that contains a transaction they want to censor. This means that there is inherent “trust” created by Bitcoin governance, whereby hardware owners (aka voters) pick mining pools that are less likely to censor.
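A rough back-of-the-envelope model shows why pool concentration matters. The sketch below assumes each block is won independently in proportion to hash power, which ignores reorgs and more sophisticated censorship strategies (such as orphaning non-censoring blocks), so it understates the risk:

```python
def censorship_hold_probability(censoring_share, n_blocks):
    """Probability that censoring pools win ALL of the next n blocks,
    assuming each block is won in proportion to hash power."""
    return censoring_share ** n_blocks

def expected_blocks_until_included(censoring_share):
    """Expected wait, in blocks, until a non-censoring pool wins a block."""
    return 1.0 / (1.0 - censoring_share)
```

Even under this optimistic model, if pools controlling 75% of hash power censor you, you wait four blocks on average just for a chance at inclusion; and none of this helps if the censoring pools are also willing to orphan the block that includes you.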
In this respect, Delegated Proof of Work and Delegated Proof of Stake both implement a form of “trusted governance” to the extent that hash power and tokens are widely distributed among a large enough number of independent voters. Once the voters vote, however, it only takes 3–4 representatives (pools) to censor a Bitcoin or Ethereum transaction, and it takes 8 or more to halt a DPOS chain in protest of a specific valid transaction.
Objective vs Subjective Finality
Proof of work blockchains, as well as some proof of stake chains, lack objective finality. Instead they present a “high probability of finality” that grows over time. We could say that the following chains have subjective finality:

- Bitcoin / Ethereum (Delegated Proof of Work)
- Bitshares / Steem (Delegated Proof of Stake)
- Cardano (Ouroboros)
The following chains have objective finality:

- EOSIO (BFT DPOS and BOS)
- Certain Hyperledger frameworks
- Hashgraph
- XRP
Byzantine fault tolerant algorithms require a closed set of known validators to reach finality; as a consequence, if ⅓ of this known set is shut down, the rest cannot reach finality. With subjective finality, there is always the potential for someone to produce evidence of a better chain that will cause you to abandon your current chain.
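The ⅓ threshold falls out of standard BFT arithmetic: with n known validators, safety holds with up to f = ⌊(n − 1)/3⌋ faulty validators, and finality requires n − f (i.e. 2f + 1 when n = 3f + 1) signatures. A minimal sketch of that arithmetic:

```python
def max_faulty(n):
    """Largest number of Byzantine validators tolerated among n."""
    return (n - 1) // 3

def quorum(n):
    """Signatures required for finality: n - f, i.e. 2f + 1 when n = 3f + 1."""
    return n - max_faulty(n)

def is_final(signature_count, n):
    """A block is objectively final once a quorum has signed it."""
    return signature_count >= quorum(n)
```

With 21 elected producers, f = 6 and the quorum is 15; the flip side, as noted above, is that shutting down roughly a third of the known set stops finality entirely.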
Open-entry systems tend to lack both finality and any kind of “earned trust”, so they are limited in performance, governance, and latency.
Ease of Inter-blockchain Communication (IBC)
Your choice of blockchain technology and consensus algorithm can impact what kind of IBC is possible and the speed of that IBC. To see this in action, consider attempting to write a smart contract on EOSIO to process Bitcoin headers and validate Bitcoin transactions. At what point can your smart contract consider the Bitcoin transaction final? There are any number of cases where even after 100 blocks the chain could reorganize. Any number of confirmations you pick carries the risk that the transaction could be undone.
Now suppose you take an immutable action on a chain that has finality, based upon IBC from a chain without finality. In practice, IBC with chains that lack objective finality must wait a very long time to reduce the risk of a chain reorganization invalidating its assumptions. Either that, or your Bitcoin deposit smart contract must have some way of mitigating the damage if a deposit is undone after 6+ confirmations.
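A deposit contract along these lines might track each deposit as provisional until some confirmation threshold and be prepared to reverse it on a reorg. This is a hypothetical sketch, not any real bridge contract; the threshold of 100 is arbitrary, and as noted above, no threshold eliminates the risk entirely:

```python
# Hypothetical deposit tracker on a finality chain accepting deposits from
# a chain without objective finality. All names/thresholds are illustrative.
CONFIRMATION_THRESHOLD = 100  # arbitrary; any value still carries reorg risk

class Deposit:
    def __init__(self, tx_block_height):
        self.tx_block_height = tx_block_height
        self.state = "pending"

    def on_new_tip(self, tip_height, tx_still_in_chain):
        """Update deposit state as the source chain's tip advances."""
        if not tx_still_in_chain:
            self.state = "reversed"  # a reorg dropped the deposit transaction
        elif tip_height - self.tx_block_height + 1 >= CONFIRMATION_THRESHOLD:
            self.state = "credited"
        return self.state
```

The awkward part is the “reversed” branch: once the deposit has been spent downstream on the finality chain, that immutable action cannot be unwound, which is exactly why the threshold must be set so conservatively.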
IBC among chains with subjective finality may be possible, but God help you if the communication is two-way. Two subjective-finality chains talking to each other require latencies similar to communicating with deep-space probes, with round-trip times measured in hours or days.
IBC among chains with objective finality can be performed in seconds.
Lastly, just because it is theoretically possible for two chains to communicate doesn’t mean it is easy. The ease of communication depends in part on how easy it is to build a light client to another chain as a smart contract. This in turn depends upon the complexity and volume of the headers and Merkle proofs, as well as the robustness and performance of the smart contract language. Too much overhead or too little power in the smart contract can kill the potential for IBC.
For an example of this, consider how much easier it is for EOS to emulate Ethereum than for Ethereum to emulate EOSIO!
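The Merkle-proof half of a light client is the conceptually simple part; the real cost lies in header validation and proof volume. For concreteness, here is a minimal Bitcoin-style proof check (double SHA-256 over concatenated 32-byte hashes; real implementations must also handle endianness and header parsing, which are omitted here):

```python
import hashlib

def h(data):
    """Bitcoin-style double SHA-256 (illustrative)."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def verify_merkle_proof(tx_hash, proof, merkle_root):
    """Recompute the root from a leaf and its sibling path.
    `proof` is a list of (sibling_hash, sibling_is_right) pairs."""
    node = tx_hash
    for sibling, sibling_is_right in proof:
        node = h(node + sibling) if sibling_is_right else h(sibling + node)
    return node == merkle_root
```

A few hashes per transaction is cheap; the expense a smart-contract light client actually fights is storing and validating the stream of block headers that make the `merkle_root` trustworthy in the first place.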
As the debate over consensus algorithms and decentralization rages on, it is imperative that intelligent observers demand that the full price of all technological tradeoffs is accounted for. What good is a “more decentralized open-entry consensus algorithm” if it means you have a blockchain that has subjective finality and high latency inter-blockchain communication and no ability to leverage “trust but verify” optimizations in the governance layer?
On the other hand, there are risks to algorithms that offer finality as well.
Remember: “All Blockchain-Magic comes with a price.” Be sure you read the fine print before you commit your organization to any particular smart contract platform.