Analyzing Vitalik Buterin’s Statement About Cryptos that are “Centralized Piles of Trash”



Ethereum co-founder Vitalik Buterin spoke at the Blockchain Connect Conference in San Francisco in January about Casper CBC and Ethereum 2.0, and also criticized blockchains that boast about their speed, calling them “centralized piles of trash.”

An audience member asked Buterin whether CBC Casper was being designed with any transaction throughput goals in mind, alluding to proof-of-work’s notoriously slow transaction speeds.

The conference was aimed at academics, boasting that it’s a gathering of the most “authoritative blockchain professors around the world.”

In response, the Ethereum co-founder took aim at other blockchain projects that he labeled as “bad” because of their focus on speed rather than safety, stating outright that such projects are centralized.

“There are honestly a lot of bad crypto projects that are trying to claim, ‘oh, because we use fancy BFT [Byzantine Fault Tolerance], we can have 5,000 transactions a second and proof-of-work can only do 15,’” Buterin said.

This implicitly calls attention to blockchains such as EOS, TRON, and NEO, which use either delegated proof-of-stake (DPoS) or delegated Byzantine Fault Tolerance (dBFT). These projects run on fewer than 30 nodes, with NEO having the fewest at seven consensus nodes.

“There are a lot of horrible misconceptions inside of that because the purpose of a consensus algorithm is not to make a blockchain fast. The purpose of a consensus algorithm is to keep a blockchain safe.”

Because of how proof-of-work works, even if the number of nodes is increased, the speed at which transactions are processed tends to stay the same, or even decrease. This is because miners search for the solution to the next block in parallel rather than dividing the work among themselves. When network propagation time, the time it takes for a message to travel through the network, is added, confirmation takes even longer.

Consequently, only a tiny fraction of the computing power spent on mining goes toward actually processing transactions. Together, these bottlenecks make it difficult to increase throughput in a proof-of-work model.
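The reason added hash power does not raise throughput can be sketched numerically. In proof-of-work, difficulty adjustment keeps the average block interval pinned to a target, so transactions per second depend only on block capacity and that interval. The figures below are hypothetical illustrations, not numbers from the talk:

```python
def expected_interval(difficulty, total_hashrate):
    # Mining is a memoryless lottery: expected time to find a block
    # grows with difficulty and shrinks with total hash power.
    return difficulty / total_hashrate

def retargeted_difficulty(total_hashrate, target_interval):
    # Difficulty adjustment rescales difficulty so the average block
    # interval returns to the target, however much hash power joins.
    return total_hashrate * target_interval

TARGET_INTERVAL = 15  # seconds per block (assumed, roughly Ethereum-like)
TXS_PER_BLOCK = 225   # hypothetical block capacity, giving ~15 TPS

for hashrate in (1_000, 2_000, 10_000):  # hash power grows 10x
    difficulty = retargeted_difficulty(hashrate, TARGET_INTERVAL)
    interval = expected_interval(difficulty, hashrate)
    tps = TXS_PER_BLOCK / interval
    print(hashrate, interval, tps)  # interval and TPS stay constant
```

However much hash power joins, the retarget cancels it out, which is why throughput in this model is a property of block size and interval, not of the number of miners.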

“A lot of the time, when a blockchain project claims: ‘we can do 3,500 TPS (transactions per second) because we have a different algorithm,’ what we really mean is we are a centralized pile of trash that only works because we have seven nodes running the entire thing.”

It is a strong statement from Buterin, who is taking aim at some of Ethereum’s competitors that do not use proof-of-work consensus. These projects are able to achieve impressive-sounding transaction throughput because, in Buterin’s view, their networks are centralized.

Scaling Without Centralization

Instead, Buterin believes the way to increase speed is via layer-one sharding and layer-two scaling options. According to him, this retains the security that comes from consensus while increasing throughput:

“There are good ways of making a blockchain fast, and I would argue that the main good candidates are basically layer one scalability through sharding and layer two scalability through channels, plasma, and lightning network and ZK rollup,” Buterin added.
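The layer-two approaches Buterin lists share one idea: execute transactions off-chain and post only a compact commitment per batch on layer one, so base-layer consensus (and its security) is untouched while effective throughput scales with batch size. As a back-of-the-envelope sketch with assumed figures, not numbers from the article:

```python
def effective_tps(base_layer_tps, txs_per_batch):
    # In a rollup-style scheme, each on-chain item represents a whole
    # batch of off-chain transactions, so effective throughput is the
    # base layer's rate multiplied by the batch size.
    return base_layer_tps * txs_per_batch

print(effective_tps(15, 1))    # no batching: 15 TPS
print(effective_tps(15, 100))  # 100-tx batches: 1,500 TPS
```

The same arithmetic is why Buterin frames these as “good ways of making a blockchain fast”: the speedup comes from compressing work, not from shrinking the validator set.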

At the moment, Ethereum is in the process of readying itself for a move from its proof-of-work algorithm, Ethash, to a proof-of-stake one with its Casper upgrade. However, Casper is seen as somewhat different from other proof-of-stake algorithms because it includes mechanisms to punish malicious actors in a decentralized way.

It will be interesting to see whether Ethereum will be able to exceed its limit of roughly 15 transactions per second once Casper and proof-of-stake are implemented.
