Bitcoin : Thoughts on scaling with larger blocks and universal blockchain pruning

While Bitcoin Core is experimenting with payment channels that use the blockchain as a trusted intermediary to open and close channels, with throughput theoretically limited only by latency (if it works at all), the scaling solution chosen by the Bitcoin Cash community is increasingly larger blocks that track demand and scale as the network grows. That, however, creates new problems.

**The first problem is block propagation**. Every time a node wins the proof-of-work race, it has to broadcast its block for validation and acceptance by the other nodes, which in practical terms means uploading the block through the p2p network. Larger blocks therefore mean larger uploads, and the practical block size limit is set by the mean/median network bandwidth.

If I remember correctly, the “compact blocks” implementation used by both Bitcoin Core and Bitcoin Cash can achieve up to 98% compression, which is great and allows for a tremendous increase in block size without fear of clogging up the network. But a scalable, future-proof payment system must be able to handle millions of transactions per second, which even at that compression ratio would require bandwidth measured in Gbps.

The solution to this problem, instead of hoping that Gbps connections will be commonplace by then, is to improve the compression algorithms. One such improvement is Graphene, which can achieve up to 99.8% compression, requiring bandwidth measured in Mbps for a throughput in the millions of tx/s.
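The bandwidth claims above are easy to sanity-check. The sketch below is a back-of-envelope estimate (the 250-byte transaction size and 10-minute block interval are my assumptions, not from any implementation), comparing raw block relay against a Graphene-style 99.8% compression ratio:

```python
# Back-of-envelope: average upload rate needed to relay one block per
# block interval at a given throughput, with and without compression.
# Assumptions for illustration only: 250 B/tx, 10-minute blocks.

TX_SIZE_BYTES = 250
BLOCK_INTERVAL_S = 600

def propagation_bandwidth_mbps(tx_per_second, compression=0.0):
    """Average upload rate (Mbit/s) to relay one block per interval."""
    block_bytes = tx_per_second * BLOCK_INTERVAL_S * TX_SIZE_BYTES
    wire_bytes = block_bytes * (1.0 - compression)
    return wire_bytes * 8 / BLOCK_INTERVAL_S / 1e6

raw = propagation_bandwidth_mbps(1_000_000)              # uncompressed
graphene = propagation_bandwidth_mbps(1_000_000, 0.998)  # ~99.8% compression
print(f"raw: {raw:.0f} Mbps, graphene: {graphene:.0f} Mbps")
```

At one million tx/s this gives roughly 2 Gbps uncompressed versus about 4 Mbps with 99.8% compression, which matches the Gbps-vs-Mbps contrast drawn above.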

**The second problem, and perhaps the most discussed one regarding future centralization of Bitcoin Cash, is the storage size of the blockchain**. While it sits at around 160GB right now, increased demand and larger blocks bring an enormous increase in the blockchain database size.

One million tx/s (i.e. 600 million transactions per 10-minute block), averaging 250 bytes per transaction, would result in 150GB blocks and an increase of around 7800TB in required storage per year. This is hardly feasible for most nodes, even with the expected Moore’s Law growth in consumer-grade SSDs, unless we experience some technological breakthrough allowing seemingly endless storage capacity. Once again, this is not something we should count on; we should instead work around it to preserve the decentralization of full nodes on the network.
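The arithmetic behind those figures can be reproduced directly (same assumed parameters as in the text: 1M tx/s, 250 B/tx, 10-minute blocks):

```python
# Reproduce the storage figures from the text.
TX_PER_SECOND = 1_000_000
TX_SIZE_BYTES = 250
BLOCK_INTERVAL_S = 600

txs_per_block = TX_PER_SECOND * BLOCK_INTERVAL_S       # 600 million tx/block
block_gb = txs_per_block * TX_SIZE_BYTES / 1e9         # 150 GB per block
blocks_per_year = 365 * 24 * 6                         # 52,560 blocks
yearly_tb = block_gb * blocks_per_year / 1000          # ~7,884 TB per year
print(txs_per_block, block_gb, round(yearly_tb))
```

So “around 7800TB per year” is, if anything, slightly conservative: the exact figure at these parameters is about 7,884 TB.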

One solution to this problem is “pruning” the blockchain, as a means to ditch spent outputs and only keep the unspent ones. ~~However, this is only a solution for lightweight/pruned nodes (which don’t relay previous blocks or Merkle paths), since it’s impossible to actually prune the blockchain: any change in past blocks requires a change in the subsequent blocks, and the new pruned blockchain would only be valid if we employed a massive amount of computational power to re-solve the proof-of-work puzzles of every individual block since Genesis. Hardly an easy solution.~~

~~A work-around to re-mining the pruned blocks would be to hard fork the network to allow for a completely invalid blockchain up to the “pruning block”, from where valid blocks would continue to be built on top. This has numerous problems, but the biggest might just be that the community will never allow nearly 10 years’ worth of blocks to suddenly go invalid for the sake of scaling, and thus the hard fork would never gain traction.~~

EDIT: In reality, nodes running with a pruned blockchain keep track of the UTXO set (i.e. every unspent transaction output and its associated public key) and erase every transaction up to a certain number of blocks, keeping only the block headers (including the Merkle root) and the Merkle branches. This allows any participant to request the Merkle path to a certain transaction and locally verify its authenticity without downloading the actual blocks. However, there are situations where one must validate the blocks themselves instead of trusting the nodes one is connected to, and since each block references a previous block which you can’t expect to be valid without checking, one ends up needing to validate every previous block until all coinbase transactions are accounted for, or even all the way back to the Genesis block. So it’s very important that there exist full nodes storing the blockchain in its entirety.
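The header-plus-Merkle-path verification described above can be sketched in a few lines. This is a simplified illustration, not Bitcoin’s actual wire format: it uses a single SHA-256 (Bitcoin uses double SHA-256 with specific byte ordering), but the inclusion-proof logic, including Bitcoin’s duplication of the last hash on odd levels, is the same idea:

```python
import hashlib

# Simplified SPV-style Merkle verification: a pruned node keeps only the
# Merkle root (from the block header) and checks a transaction's inclusion
# by hashing it together with the supplied branch of sibling hashes.

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # odd level: duplicate last hash
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling hashes from leaf to root, each tagged left/right."""
    level = [h(x) for x in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = index ^ 1                    # sibling is the pair partner
        proof.append((level[sib], sib < index))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, proof, root):
    acc = h(leaf)
    for sibling, sibling_is_left in proof:
        acc = h(sibling + acc) if sibling_is_left else h(acc + sibling)
    return acc == root

txs = [b"tx-a", b"tx-b", b"tx-c", b"tx-d", b"tx-e"]
root = merkle_root(txs)                    # all a pruned node needs to keep
print(verify(txs[2], merkle_proof(txs, 2), root))  # True
```

Note that the proof size grows only logarithmically with the number of transactions in the block, which is why this works even for the enormous blocks discussed above.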

**I propose a solution to this problem with a “net settlement hard fork”** that would consolidate the complete UTXO set in the blockchain and render the previous blocks unnecessary to ensure the security of the system.

On a predetermined block, similarly to how the reward halvings are scheduled, a special block would be mined, which I’ll refer to as Exodus (actually, Exodus #1, as more of these special blocks would have to be mined from time to time). This block would contain “net settlement” transactions of all unspent outputs back to their rightful public keys. For example, the wallet that received the first coinbase transaction ever, likely belonging to Satoshi Nakamoto (1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa), has 1297 unspent outputs as of now, most of which are donations intended for Satoshi. In the Exodus block, all these outputs would be combined and sent back to that wallet as one net-settled transaction of 66.87633136 BTC. Since every valid transaction requires a signature from the private key holder, these transactions would be considered invalid in any block except Exodus, the special block where the community has agreed to overlook the missing signatures.
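The consolidation step itself is simple to express: group every unspent output by its locking public key and emit one combined output per key. The sketch below is hypothetical (the keys, amounts, and the `net_settle` helper are invented for illustration; a real implementation would group by locking script, not a bare key string):

```python
from collections import defaultdict

# Hypothetical sketch of building the Exodus "net settlement" set:
# one consolidated output per public key, summing its unspent outputs.
# Amounts are in satoshis; the UTXO data is made up.

def net_settle(utxos):
    """utxos: iterable of (pubkey, amount) pairs -> {pubkey: total}."""
    totals = defaultdict(int)
    for pubkey, amount in utxos:
        totals[pubkey] += amount
    return dict(totals)

utxos = [("keyA", 50_000), ("keyA", 25_000), ("keyB", 10_000)]
print(net_settle(utxos))  # {'keyA': 75000, 'keyB': 10000}
```

Because every node derives the settlement set deterministically from the same UTXO set, every honest miner arrives at an identical Exodus block, which is what makes the no-signature exception auditable.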

In order to prevent malicious actors from trying to steal outputs from other people, or even create new outputs and send them to themselves, the hard fork would be programmed far in advance, so that all miners can construct their own [identical] Exodus block independently. They would be incentivized to do so by the expected regular block reward (`block_reward`) plus an Exodus block reward, calculated by adding every individual transaction size (`tx_size_i`) and multiplying by some agreed-upon maximum transaction fee (`max_tx_fee`). Therefore, the total coinbase reward for mining that block would be `block_reward + max_tx_fee * Σtx_size_i`.
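The reward formula is straightforward; a worked example with illustrative numbers (the 12.5 BTC subsidy and 1 satoshi/byte fee cap are placeholders, not proposed values):

```python
# Exodus coinbase reward from the text: block_reward + max_tx_fee * sum(tx_size_i).
# All amounts in satoshis, sizes in bytes; values are illustrative only.

def exodus_reward(block_reward, max_tx_fee, tx_sizes):
    return block_reward + max_tx_fee * sum(tx_sizes)

# e.g. 12.5 BTC subsidy, 1 sat/byte fee cap, three settlement transactions
print(exodus_reward(12.5 * 10**8, 1, [250, 300, 180]))  # 1250000730.0 satoshis
```

In practice the settlement set would contain many millions of transactions, so `max_tx_fee` effectively decides how large a one-time subsidy the community is willing to mint for the miner of Exodus.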

This block would have an unlimited size, and define the end of the “first blockchain era”, from Genesis to Exodus, which can be kept by anyone who chooses to do so. Since Exodus contains all the net settled transactions from the historical blockchain, full nodes aren’t required to keep any previous blocks in their entirety to ensure transaction validity, effectively ditching several GB/TB of required storage.

Every new transaction referencing the old blockchain would need to be rejected until every wallet is fully synced and starts to reference the Exodus transactions instead, which shouldn’t take long, though I might be understating how disruptive this could be. Subsequent blocks could have their maximum block sizes automatically adjusted, similarly to how mining difficulty is adjusted now, without worrying about the database size. New “net settlement hard forks” would have to be performed periodically to keep the blockchain size bounded, similarly to how Monero ensures periodic upgrades to its protocol (it hard forks every 6 months).
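A difficulty-style block size adjustment could look something like the following. This is entirely hypothetical (the 75% target fill and the 2x-per-window clamp are my invented parameters, chosen by analogy with difficulty retargeting, not anything specified above):

```python
# Hypothetical difficulty-style retarget for the maximum block size:
# every adjustment window, scale the cap so the median recent block is
# ~target_fill full, clamped to at most a 2x change per window.

def adjust_max_block_size(current_max, recent_block_sizes, target_fill=0.75):
    """Return the new cap in bytes, given recent block sizes in bytes."""
    median = sorted(recent_block_sizes)[len(recent_block_sizes) // 2]
    factor = (median / current_max) / target_fill
    factor = max(0.5, min(2.0, factor))   # clamp, like difficulty retargeting
    return round(current_max * factor)

# blocks consistently ~90% full -> the cap grows by 20%
print(adjust_max_block_size(1_000_000, [900_000] * 11))  # 1200000
```

Using the median rather than the mean makes the rule harder to game with a few artificially stuffed or empty blocks, the same reasoning behind median-time-past in block timestamp validation.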

This would ensure that throughput can increase as needed, and the database would be regularly pruned to ensure proper decentralization of full nodes.

Feel free to criticize the idea, and perhaps point me to better scaling solutions that I might be overlooking.




Author: ViaLogica

Score: 9

