General Protocols: Opinion on BCH Maxblocksize Scaling


Background

The primary motivation for BCH's forking event in 2017 was an impasse over increasing the blocksize maximum, so the relevance of further blocksize increases to accommodate transaction volume needs no introduction to the BCH community, a community focused on getting to global usage on the L1 blockchain. For the rest of this writeup, "maxblocksize" refers to the maximum size of a mined block, beyond which it will be rejected by the majority of the network, both by hashrate and by economic weight. Note that miners also self-impose an independent, non-consensus "soft limit" strictly below maxblocksize; except where noted, it is not the subject of this discussion.

We have had two one-time increases to this number in the past:

  1. From 1MB to 8MB at the initial fork in 2017, and

  2. From 8MB to 32MB in 2018.

The 32MB limit has not moved since 2018, and demand has not been high due to slow growth in usage. While short "stress test" bursts designed explicitly to challenge the limit have been conducted from time to time, the long-term average blocksize has remained well below 500kB. It is worth noting that in 2021 the default non-consensus "soft limit" shipped with BCHN was increased from 2MB to 8MB, which has proven useful in accommodating some burst scenarios.

Problem statement

While average usage today is well over two orders of magnitude away from challenging the current 32MB maxblocksize limit, two factors make it desirable to address the limit today:

  1. One-time increases in maxblocksize are an ongoing and unpredictable effort. While the CHIP process offers some stability and transparency to the effort, it nevertheless subjects the network to regular episodes of uncertainty regarding what some would consider its raison d'être. Putting a predictable, sane plan into action reduces that uncertainty and increases confidence for all parties - users, businesses, infrastructure providers and developers.

  2. In the event of rapid adoption, the social makeup of BCH's community can inflate and diversify rapidly, destabilizing efforts to address the problem, possibly resulting in a chaotic split as witnessed with BTC in the past. A plan adopted right now will carry with it the inertia necessary to combat such destabilizing tendencies.

Considerations

Some crypto enthusiasts, citing a Satoshi quote, correctly note the mechanical ease of changing maxblocksize in the code while missing important impacts beyond changing a single number:

  1. On the low side, a small maxblocksize, even when blocks are not congested, may deter commercial usage and development activity. This is because business and development investments are long-commitment activities that often span months or even years. If entrepreneurs and developers cannot be confident that the capacity will be there when they need it, they are less likely to invest their precious time and money.

  2. On the high side, a maxblocksize that is too large for current activity invites adverse, unpredictable conditions, typically short bursts of noncommercial traffic that push the limits. The network impact of these activities is more subtle: they generate additional, volatile costs for infrastructure and service providers that may be difficult to justify. Contrary to intuition, most of the cost to operators comes from human operation and development complications, followed by processing power that scales with the size of individual blocks, with storage and bandwidth costs a distant last. We have observed this phenomenon in certain other cryptocurrencies, where very high throughput that did not come from commercial activity ultimately resulted in businesses ceasing to operate on their chains, reducing the network's overall value. To be clear, we do not view all existing operators' continued existence as sacred; rather, we take the reasoned view that increased investment in infrastructure should be justified by corresponding commercial, value-generating activities.

  3. Historically, changing the maxblocksize has come with a heavy social cost each time it happens, with the risk of community and network fracture. Satoshi's quote made sense back when he made the decisions by himself; it is less applicable today, when the majority of the network needs to come to consensus. A longer-lasting plan set up front, minimizing these potentially centralizing decision points, can make the network more robust.

In short, a good maxblocksize adjustment scheme should offer the maximum amount of *predictability* to all parties: users who want steady fees, developers who want stable experiences, entrepreneurs who want to reduce uncertainty in growth, and service providers who want to minimize cost while accommodating usage.

Alternatives

With the criteria stated above, let's examine some alternatives:

  1. Outright removal of the consensus blocksize limit: The purist argument is that miners would resolve any disagreements on their own without a software-imposed limit. In reality, without an effective way to coordinate an agreement, each node can have vastly different capabilities and opinions on what sizes are tolerable. The result is therefore either network destabilization and a split without coordination, or opaque, centralizing coordination outside the protocol. Neither scenario is likely to offer confidence or stability.

  2. One-time increases to maxblocksize: While extremely simple to execute in the BCH context, as described above this subjects the network to regular episodes of uncertainty and social cost, and is thus less than ideal for long-term growth. At every manual increase, the concerns of all parties have to be reconsidered, sometimes under adverse social conditions without the benefit of inertia.

  3. Fixed schedule: Have the maxblocksize increase on a rigid schedule, such as BIP101 or BIP103. Also simple in execution, these schemes additionally offer a possible scenario where if demand roughly stays in line with the schedule, no manual adjustment is needed. It is impossible to perfectly predict the future though, and such schemes will inevitably diverge from real world usage and cost, requiring frequent revisits to their parameters. Each revision can incur larger social costs than even one-time increases due to the complexity of schedules as opposed to just sizes.

  4. Algorithmic adjustment based on miner voting: Used by Ethereum for its gas limit, this scheme has miners (and pools, by proxy) vote on the maximum block capacity, with the votes tallied by a fixed algorithm that then adjusts maxblocksize up or down over time. While this scheme can work well with a well-informed and proactive population of pools, our current observation is that no such population exists for BCH - miners and pools typically only intervene when a crisis happens, which may not be ideal for user confidence. BCH is additionally a minority chain within its mining algorithm, which may complicate incentives when it comes time to adjust maxblocksize.

  5. Algorithmic adjustment based on usage: Multiple attempts exist, including an older dual-median approach and a newer, more sophisticated WTEMA-based algorithm. These schemes generally aim to algorithmically adjust maxblocksize based on a fixed interpretation of past usage in terms of block content; a minimal sketch of the general idea follows this list. While far from perfect, we see these schemes as our best path forward to achieve reasonable stability, responsiveness, and minimization of social cost for future adjustments.
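To make the usage-based approach concrete, here is a minimal sketch of a WTEMA-style (exponentially weighted moving average) control loop. All constants and names (ALPHA, HEADROOM, MIN_LIMIT, next_state) are hypothetical and chosen only for readability; they are not taken from the dual-median proposal, the WTEMA-based CHIP, or any node implementation.

```python
# Illustrative sketch only: a WTEMA-style (exponentially weighted moving
# average) blocksize limit. All constants below are hypothetical and are
# not taken from any specific CHIP or node implementation.

ALPHA = 1 / (144 * 60)   # smoothing factor: ~two-month memory at 144 blocks/day
HEADROOM = 10            # keep the limit this multiple above typical usage
MIN_LIMIT = 32_000_000   # never drop below the current 32MB floor

def next_state(ewma_size: float, mined_block_size: int) -> tuple[float, int]:
    """Fold one mined block into the usage average and derive the next limit."""
    # Exponentially weighted moving average of actual block sizes:
    # recent blocks nudge the estimate, but no single block moves it much.
    ewma_size += ALPHA * (mined_block_size - ewma_size)
    # Keep the consensus limit a fixed headroom above typical usage,
    # but never below the established floor.
    limit = max(MIN_LIMIT, int(HEADROOM * ewma_size))
    return ewma_size, limit

# Example: ~7 months of sustained 4MB blocks lift the limit from the 32MB
# floor to roughly 39MB; a single outsized block barely moves the average.
ewma, limit = 500_000.0, MIN_LIMIT
for _ in range(30_000):
    ewma, limit = next_state(ewma, 4_000_000)
print(f"average ~ {ewma:,.0f} bytes, next limit ~ {limit:,} bytes")
```

The design intent illustrated here is that the limit tracks sustained usage with generous headroom, while any single outlier block moves the average only negligibly.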

Criteria of a good algorithm

In our opinion, a good maxblocksize adjustment algorithm must address the following concerns:

  1. For predictability and stability for service operators, any increase must happen over a long window. We have observed some adjustment algorithms where it is possible to double maxblocksize over a matter of hours or days - the volatility they allow reduces the utility of an algorithmic approach (a short illustrative calculation follows this list).

  2. The algorithm should aim to accommodate commercial bursts such as holidays, conventions, and token sales, such that user experience is not impacted by fee increases the vast majority of the time. Note that while a rapid-increase algorithm can satisfy this for users, it conflicts with #1 above in that it does not offer a predictable, stable course for operators - it is therefore likely preferable to simply keep a healthy maxblocksize with a large buffer well above average usage.

  3. The algorithm should aim to reduce costs for operators in times of commercial downturn. It is inevitable over BCH's coming years and decades of operation that it will see ups and downs, and it is important that higher operating costs justified during boom times do not unreasonably burden services during the bust years. During a long downturn, a reasonable limit that defends well against unpredictably high bursts of costs (see "Considerations" above) can mean the difference between keeping and losing services. Such adjustments can happen slowly, but should not be removed altogether.

  4. The algorithm should be well-tested against edge cases that may cause undesirable volatility. This is especially important considering the history of BCH's difficulty adjustment algorithm, which was plagued by instability for years, both in the Emergency Difficulty Adjustment era of 2017 and in the fixed-window era that followed until 2020. Blocksize algorithms must learn from this experience and aim to minimize potential vectors of trouble.
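As a back-of-the-envelope illustration of point 1, a cap on per-block relative growth translates directly into a worst-case annual bound. The cap used below is a hypothetical number chosen for illustration, not a parameter from any specific proposal.

```python
# Back-of-the-envelope check for point 1: how fast can a rate-limited
# algorithm grow the limit in the worst case? The per-block cap below is
# hypothetical, not a parameter from any specific proposal.

BLOCKS_PER_YEAR = 144 * 365      # ~52,560 blocks at 10-minute spacing
MAX_RELATIVE_STEP = 2 ** -17     # hypothetical: each block may raise the
                                 # limit by at most ~0.00076%

worst_case = (1 + MAX_RELATIVE_STEP) ** BLOCKS_PER_YEAR
print(f"worst-case growth per year: x{worst_case:.2f}")  # -> x1.49
# Even under a year of permanently full blocks the limit cannot double,
# which keeps operator planning horizons predictable.
```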

Additional notes on miner control

Some may say that usage-based algorithms take control out of the hands of miners; in our opinion this is not true. Miners today have an additional control vector in the form of a "soft cap" that lets them specify a maximum size *for the blocks they themselves mine* below the network-wide maxblocksize. Adjusting this cap gives them an input into any usage-based algorithm, since such algorithms depend on the sizes of the blocks actually mined.
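A minimal sketch of that feedback loop follows; the names are hypothetical (the soft cap corresponds to a node's block-template size setting, which in BCHN is a configuration option, though exact option names vary by implementation and version).

```python
# Illustrative sketch only: how a miner's self-imposed soft cap feeds back
# into a usage-based limit algorithm. Function and variable names are
# hypothetical, not taken from any node implementation.

def template_size(mempool_bytes: int, soft_cap: int, consensus_limit: int) -> int:
    """Size of the block this miner will actually produce."""
    # A miner never exceeds its own soft cap, and the soft cap itself
    # can never exceed the network-wide consensus limit.
    return min(mempool_bytes, soft_cap, consensus_limit)

# Because a usage-based algorithm only ever observes mined block sizes,
# miners collectively steer it through their soft caps: if most hashrate
# caps templates at 8MB, the usage average - and therefore the adjusted
# limit - cannot grow past what an 8MB ceiling allows, however high
# demand gets.
print(template_size(mempool_bytes=12_000_000, soft_cap=8_000_000,
                    consensus_limit=32_000_000))  # -> 8000000
```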

It is also important to stress that while the quality of any adopted algorithm must be very high, it does not need to be perfect. A large part of the algorithm's value is that it relieves social costs going forward. In the case where an algorithm is found to need adjustment, or is even determined to be inadequate, it is certainly possible for the ecosystem to change it - through a CHIP or other possible systems - just like any other consensus rule.

Author: imaginary_username

General Protocols Blog

This article forms part of the General Protocols Blog, a collection of cross-platform links showcasing our team's community activity, Bitcoin Cash projects, UTXO development, and general crypto musings.


Comments

Such a well-written article. The best I have read on the topic.
Very well-explained. I understood most of it despite my limited technical background.
Thank you imaginary_username and GP! One thing I didn't understand: in "considerations", you mention that "most of the cost to operators come from human operation and development complications".
Isn't development for 1000 transactions/day the same as for 1 million? Or am I totally missing the point here? :)


Operators face a different class of challenges as the scale increases. For example...

The BCH network's pre-eminent open indexer today is Fulcrum. That has not always been the case - we used to run straight Electrumx, which was itself a fork of Electrum-server. Electrum-server was a sluggish-as-hell Python server, but it fulfilled its purpose back in the day.

Then came the big blockers, and it was anticipated that Electrum-server would no longer work for 8MB+ blocks. So significant effort went into optimizing it into Electrumx - effort that would not have been needed if the scale had stayed the same. But it was worth it!

Then came a time when buildup over the years, as well as 32MB blocks, made Electrumx itself less and less adequate. We could make Electrumx multithreaded, or move to a more easily scalable language than Python, but someone had to do the job! Several intense months later, involving some of the best talent in BCH, Fulcrum was born - it was sleek, it was robust, everyone loved it. It would probably work all the way up to 1GB+ blocks on a good SSD.

But beyond that? Who knows! And we're just talking about a straight-up indexer here. What about the custom use cases? Wallets that need to be managed? Privacy solutions? Chain monitoring services? (BitPay runs one in-house to deter double-spending.) Things get complex quickly, and complex things demand more effort when scaled, and they break more easily - and broken complex things are harder to fix when big than when small.

You get the idea.


I will add - when a string of stress-test blocks at an untested 1GB breaks a pool's software for the third time this month, and the engineer, called in yet again at 2am to fix it, says "I'm not coming in next time"... that's how you lose critical pieces of your ecosystem.


Got it! Thank you so much for the detailed answer.
