Mythbusting: big blocks => zero fees => infinite spam

Written by btcfork


The myth of infinite demand for free blockspace

"The demand for externalized-cost highly replicated external storage at price zero is effectively infinite." - gmaxwell, #bitcoin-wizards IRC channel, 16 January 2016 18:05

This is a very pernicious myth that has since been exposed in various ways. However, it was very successful in damaging the case for increasing the Bitcoin block size, because it worked quite well psychologically - it appealed to fear of an unknown future. Fear of everyone's disks filling up, fear of nodes crashing, an apocalyptic end of the Bitcoin network at the hands of an Internet that couldn't get enough of ever more data.

Propaganda [1] needs to at least sound true on its face, and the quote by Greg Maxwell certainly rings true - except it rests on one little falsehood - at best a wrong assumption - that makes it collapse into untruth.

That fallacy is that if we were to substantially increase the maximum block size, fees on the network would necessarily fall until they were effectively zero - or at least so infinitesimally low that storing large amounts of data on Bitcoin's distributed network of nodes would become the most cost-effective way of storing data.

Storing large amounts of non-transactional data has - traditionally - been a use case discouraged on Bitcoin. There are good reasons for that, and if it hasn't already been done, then at some point someone will have to write a thorough article to explain why it is a bad idea and show that the designer of the system (Satoshi Nakamoto) did not encourage it.

To explore the veracity of the present myth, let's look at what we have actually seen happen in the real world, historically and measurably on the Bitcoin-derived blockchains.

For this article I will take a look at the main ones: BTC, BCH and BSV.

But I encourage you to take a look at some of the other forks of BTC or BCH that retained at least part of their parent's blockchain data - forks such as Bitcoin Gold, Bitcoin Diamond, Bitcoin Candy and BTCC. We can reflect on whether their blockchain histories support the myth - or not. The same technique may be applied to the swathe of altcoins that have very low transaction fees.

Busted by reality - the history of block sizes and fees on Bitcoin forks

Bitcoin (BTC):

Let's begin with the granddaddy. It's still alive after 10+ years! Must have been doing something right, or not? What can we learn from it?

A historical prelude about how the blocksize limit came about on BTC

Most of us are aware that BTC has for a long time had a very restrictive upper limit of 1MB per block in place. There was a great "war" in the Bitcoin community about whether to raise this limit, how, and by what measure. For those who didn't know, that "Great Debate" [2] is the context in which the myth of this article, among many others, originated.

The 1MB limit

Satoshi introduced a 1MB limit on Bitcoin in 2010 Q2/Q3 to protect against a DoS (Denial of Service) attack through spamming the chain with transactions.

Before that, there was no codified blocksize limit in BTC. However, there were other technical limits in the construction of the software which would have acted to prevent overly large blocks. With the exception of the network message size limit of ~32MB, the others were not really intentional, so the "original" Bitcoin could not be said to have been intentionally limited to blocks smaller than that.

As Bitcoin's price around 2010 was negligible, it practically cost a sender nothing to issue transactions and have them mined. "Spam" would have been essentially free.

Satoshi foresaw this possible problem and decided to put a damper on it in two separate commits: one where he introduced a maximum block size, and another where he added it as a consensus rule check. He introduced it a bit surreptitiously, but shortly afterwards told others that the limit could be removed in future, demonstrating that he believed it might become unnecessary.

The best explanation offered since for why exactly he added the limit is possibly the one given by Mike Hearn in his post "The Capacity Cliff" [3]. I quote the relevant extract:

Source : "The Capacity Cliff", by Mike Hearn

It's worth considering that 1MB was a size already far below what could technically be considered "large" in data storage terms in 2009/2010. I don't remember exactly when I stopped using 1.44MB floppy disks, but it was a long time before that, and my PC's hard drive at the time could already store a few hundred thousand of such 1MB "floppy units". It would have had ample space to store the Bitcoin blockchain even with larger blocks, and I would've been able to upgrade to the terabyte-sized disks that came out a few years later just fine. Unfortunately I didn't get into Bitcoin early enough ;-)

Today we understand that the more hazardous aspect of large blocks was not storage, but processing: bigger blocks could require disproportionately more processing time due to inadequacies of the early Bitcoin software.

The processing time required to validate a block could, in certain cases, increase more than linearly with its size. Certain kinds of malicious transactions could make this problem even worse, potentially causing a single block to take more than 10 minutes to validate even on decent hardware! If this were to happen it could severely disrupt the network, since Bitcoin operates on the assumption that block processing takes far less than the average 10 minutes between blocks. Those 10 minutes are the so-called "target time" which the system tries to steer towards using difficulty adjustments.

Imagine if your blocks started taking longer than 10 minutes to process on average, and the system lowered the difficulty in response. This could be a runaway vicious cycle of dropping difficulty!
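
To make the "more than linearly" point concrete, here is a rough, purely illustrative sketch of the legacy signature-hashing behaviour (before the later fixes), in which verifying each input re-hashes roughly the whole transaction. The byte counts and hashing speed below are assumptions picked for illustration, not measurements:

```python
# Rough, illustrative sketch of why legacy signature hashing scales
# quadratically: every input re-hashes (roughly) the whole transaction.
# The constants below are assumptions for illustration, not measurements.

BYTES_PER_INPUT = 180          # assumed size of one legacy input
HASH_RATE = 200 * 1024 * 1024  # assumed bytes hashed per second

def validation_seconds(tx_size_bytes: int) -> float:
    """Approximate time to verify one pathological transaction whose
    inputs each require hashing nearly the full transaction."""
    n_inputs = tx_size_bytes // BYTES_PER_INPUT
    bytes_hashed = n_inputs * tx_size_bytes   # O(n^2) total work
    return bytes_hashed / HASH_RATE

for size_mb in (0.1, 1, 8, 32):
    size = int(size_mb * 1_000_000)
    print(f"{size_mb:>4} MB tx -> ~{validation_seconds(size):10.1f} s of hashing")
```

Even under generous assumptions the hashing work grows with the square of the transaction size, which is why a single crafted multi-megabyte transaction could stall a node for far longer than the 10-minute target.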

Software fixes were eventually constructed to correct these scalability gremlins, but at the time Satoshi knew that the software was not perfect, and that downloading the whole blockchain would keep getting slower and remain a burden until SPV clients were developed. All of that would take time to improve, so the 1MB limit was introduced as a stop-gap. Today there is Bitcoin software which has addressed most or all of these issues, safely allowing block sizes much bigger than 1MB.

Segregated Witness

BTC's base capacity of 1MB was augmented in late 2017 with a soft-forked extension-block feature called SegWit (Segregated Witness). This raised BTC's effective capacity by a small amount, although nothing to really get excited about.

BTC's effective capacity improvement is somewhere between 2x and 3x, depending on the mix of transactions. It is certainly less than the hard theoretical upper bound of 4MB for the combined size of base block plus witness data.
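
For the curious, the "2x to 3x" figure follows from the block-weight rule that SegWit introduced (weight = 3 × base size + total size, capped at 4,000,000 weight units). Below is a minimal sketch of that arithmetic; the witness fractions fed in are assumptions about the transaction mix, not measured values:

```python
# Minimal sketch of the SegWit block-weight rule (BIP 141):
#   weight = 3 * base_size + total_size, capped at 4,000,000 weight units.
# Witness bytes are discounted, so effective capacity depends on how much
# of each block is witness data. The witness fractions are assumptions.

MAX_WEIGHT = 4_000_000

def effective_block_bytes(witness_fraction: float) -> float:
    """Largest total block size (bytes) fitting in MAX_WEIGHT when
    `witness_fraction` of the serialized bytes are witness data."""
    # total = base + witness, so with witness = w * total:
    #   weight = 3 * (1 - w) * total + total = (4 - 3 * w) * total
    return MAX_WEIGHT / (4 - 3 * witness_fraction)

for w in (0.0, 0.3, 0.5, 0.75, 1.0):
    print(f"witness fraction {w:.2f} -> max block ~{effective_block_bytes(w) / 1e6:.2f} MB")
```

Only a block made up almost entirely of witness data could approach the 4MB bound; realistic transaction mixes land well below it.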

Most of the extra capacity offered by SegWit isn't used in practice yet, since it depends on the percentage of transactions that use SegWit, and that adoption hasn't been quick or very extensive. But SegWit was not introduced primarily to relieve the blocksize limitation. Instead, its main goal was to fix malleability so that building the Lightning Network (LN) would be simpler.

Built on SegWit's features, LN was to be the El Dorado of scalability in BTC, promising near-infinite transactional capacity without quite managing to explain how that would be technically feasible at all on a small-block substrate. Critical observers noted that the original LN whitepaper cited 133MB Bitcoin blocks as a guide for onboarding a substantial fraction of the world's population. This figure was later removed from the official Lightning documents.

So what can we infer from BTC blockchain history?

Figure 1 - Source: https://bitinfocharts.com/comparison/size-btc.html

From 2009 to about mid-2012, blocks on Bitcoin were very small. Word about Bitcoin had been spreading, and while the price was still low (a few USD), it started to attract interest from more early enthusiasts. The network was growing freely, as capacity very much exceeded the demand for block space.

The number of users and the demand for block space increased steadily from 2012 onwards. Around 2015-2016, some of the Bitcoin Core developers took their concerns about hitting the block size limit to the public [3, 4, 5].

The development community had already split over whether the limit should be increased, as Satoshi had suggested, or not.

Let's have a look at historical fee rates on BTC. The chart below shows the average fee per transaction, in USD.

Figure 2 - Source: https://bitinfocharts.com/comparison/bitcoin-transactionfees.html

Do you see what I'm seeing? You may need to look at the source graph in more detail to verify this, but transaction fees on BTC were extremely low (on the order of a few cents) up to about mid-2016. They started to rise slowly during Q2 2016 and then much more strongly in 2017, peaking around the end of 2017 / beginning of 2018.

Here is another graph of the historical fee rate, for comparison. This one covers the last 5 years (starting from 2015-01-01) and shows fees in satoshi/byte, so the fluctuations are not as pronounced as in the previous graph, which also reflects the sharply rising USD price of Bitcoin in 2017. Nevertheless, a similar pattern of escalation towards the end of 2017 is visible. The period of turbulence in 2017 Q2-Q4 is at least partly due to the creation of Bitcoin Cash (BCH) - a rival fork which presented a major alternative to BTC, and certainly drained some demand and hashrate away from it, temporarily driving fee spikes on BTC. BCH initially also caused oscillations of hashpower and demand on both networks, which were largely stabilized by an upgrade on 15 November 2017.

Figure 3 - Source: https://statoshi.info/dashboard/db/transactions?panelId=3&fullscreen&from=1420151148217&to=1575585127124
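
As a side note, the difference between the USD and satoshi/byte views is easy to reproduce: the USD figure is simply the fee rate multiplied by the transaction size and by the (volatile) BTC price. Here is a tiny sketch with made-up example numbers - the 250-byte transaction size and both prices are assumptions, not historical data:

```python
# Illustrative sketch: the same fee rate in satoshi/byte maps to very
# different USD fees depending on the BTC exchange rate, which is why a
# USD-per-transaction chart swings much harder than a sat/byte chart.
# All numbers below are made-up examples, not historical data.

SATS_PER_BTC = 100_000_000
TX_SIZE_BYTES = 250  # assumed size of a typical simple transaction

def fee_usd(fee_sat_per_byte: float, btc_price_usd: float) -> float:
    fee_sats = fee_sat_per_byte * TX_SIZE_BYTES
    return fee_sats / SATS_PER_BTC * btc_price_usd

print(fee_usd(50, 1_000))    # same fee rate, low BTC price  -> ~$0.13
print(fee_usd(50, 15_000))   # same fee rate, high BTC price -> ~$1.88
```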

Let's get back to the myth though.

Looking back on the history of BTC up to 2016-2017, and thinking about the suggestion that extremely low fees would invite infinite external demand for block space... Where did that happen? It didn't!

BTC blocks didn't fill up "overnight", even though there was plenty of available blockspace (at least up to the soft limits of ~750kB used by miners, later raised to the hard limit of 1MB). Instead, the growth in demand was gradual, over a number of years, even while BTC had very low value compared to 2016-2017.

If we look further at the history of the BTC blockchain, this growth was mostly due to financial transactions, not to people storing vast amounts of arbitrary data on the blockchain. Is this because BTC developers made it more difficult to store arbitrary data on the chain?

Perhaps that was a contributing factor, but we can look at the case of the Bitcoin Cash blockchain for more clues.

Bitcoin Cash (BCH):

BCH raised the maximum size of blocks, first to 8MB (Aug 2017) and then to 32MB (May 2018). It also raised the amount of arbitrary data that can be stored alongside a single transaction, from 80 bytes to ~220 bytes [6], making it much easier to store more data.
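
For context, the usual way to attach such arbitrary data is a provably unspendable OP_RETURN output. Here is a minimal sketch of what such an output's locking script looks like; the 220-byte constant mirrors the relay-policy limit mentioned above (a node policy default, not a consensus rule), and the helper function is purely illustrative:

```python
# Minimal sketch: build a raw OP_RETURN scriptPubKey carrying arbitrary data.
# The 220-byte cap mirrors the relay-policy limit described in the text;
# it is a node policy default, not a consensus rule.

OP_RETURN = 0x6a      # opcode marking the output as an unspendable data carrier
OP_PUSHDATA1 = 0x4c   # push opcode used for payloads of 76 bytes or more
MAX_DATA_CARRIER = 220

def op_return_script(payload: bytes) -> bytes:
    """Return a scriptPubKey that embeds `payload` as unspendable data."""
    if len(payload) > MAX_DATA_CARRIER:
        raise ValueError("payload exceeds the assumed data-carrier limit")
    if len(payload) < OP_PUSHDATA1:          # short payloads use a direct push
        push = bytes([len(payload)])
    else:                                    # longer ones need OP_PUSHDATA1
        push = bytes([OP_PUSHDATA1, len(payload)])
    return bytes([OP_RETURN]) + push + payload

print(op_return_script(b"hello, blockchain").hex())
```

Because such an output is unspendable, the data never enters the UTXO set; it only takes up space in blocks and on the chain.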

So of course the chain was immediately spammed to death, right? Wrong again!
The chart below shows the daily average block sizes on BCH (green) and BTC (orange):

Figure 4 - Source: https://cash.coin.dance/blocks/size - (switch to linear)

As we can see, on average over the longer term, the blocks on BCH have not been as big as those on BTC - yet. This might have something to do with BCH having to rebuild much of its adoption from scratch, having started as a minority fork and, through various phases, lost a good deal of hashpower and price relative to BTC. BCH itself also split in Nov 2018, when Bitcoin SV forked off to create BSV. This damaging split further reduced the network effect that BCH had gradually been building, with some users and businesses going to BSV even though the majority remained with BCH.

More about BSV later, let us first have a look at another BCH chart, which shows how its transaction fees compare to those on BTC:

Figure 5 - Source: https://cash.coin.dance/blocks/fees

We can see that the relative fees on BCH were lower than those of BTC by a factor of up to several thousand. At the time of writing, BCH fees are several hundred times lower than BTC's:

Figure 6 - Source: https://bitcoinfees.cash

In fact, here is the historical chart of BCH tx fees, in USD:

Figure 7 - Source: https://statocashi.info/d/000000008/transactions?orgId=1&from=1498922663073&to=1575583801102

Whoa - are those fees actually going down over time?

As we can see from those graphs, despite average fees on BCH being consistently far lower than those on BTC, the block sizes of BCH have been relatively small so far compared to BTC.

Yes - there have been times when BCH blocks of up to 16 or 32MB were produced (they don't show up in Figure 4 above because it shows daily averages). However, the average block size so far is perhaps more like 100kB, with only a few stretches where the average sticks out above that.

In fact, BCH history is quite sufficient to debunk the myth that big blocks and low fees automatically lead to some demand for storing infinite amounts of data on a chain. Every day that BCH survives seems to drive a further nail into the coffin of that theory.

"But what about BSV?" - You've heard some people say it has huge blocks! Let's look at that (somewhat strange) case!

Bitcoin SV (BSV):

This is where things get weird. No, seriously, you have been warned.

In 2018, the usual 15 November network upgrade of Bitcoin Cash was not to be so smooth. After weeks and months of arguments with the rest of the Bitcoin Cash community, a group of supporters of Craig Wright and Calvin Ayre split off with their own hashrate and consensus rules to create Bitcoin SV (BSV).

Among other things, they decided that they needed to immediately increase the allowed maximum blocksize to 128MB, follow a roadmap which aggressively increased that to 2GB (May 2019), and completely remove the blocksize limit (among various other things) by 2020.

So you're saying...

Figure 8 - Source: [7] and https://i.redd.it/2ywehb3y86t21.jpg via [8]

Yes. By now they've changed the source code to make the number go up to 2GB. There are other limit-lifting changes besides raising the blocksize: BSV increased the data carrier size significantly, to at least 100kB, possibly more already. Their official plan has been to attract data providers to store data on their chain in order to monetize it.

Consequently, if you expected some larger blocks, you should not be disappointed to find some!

Figure 9 - Source: https://coin.dance/blocks/size

Wait, what's that? A log chart?
Am I trying to hoodwink you into thinking BSV blocks are smaller than they really are?
Could it be that they are larger in reality, as the myth would have us believe is the logical consequence of "opening the floodgates" to "gigamegs"?

No. I'm going to have to disappoint you here. If you look at the original chart and switch to linear mode, you'll see that average daily blocks on BSV are still pretty small - around 1.9MB at the time of writing. That's bigger than BTC's blocks, but it's not a case of "full blocks" as the myth would have us believe.

Is it maybe because BSV folks are charging such high fees that it keeps that near-infinite external demand at bay?

That turns out not to be the case either, when we check the fees:

Figure 10 - Source: https://coin.dance/blocks/fees

BSV fees are currently the lowest of the 3 chains, yet none of that 'effectively infinite' demand pitched up to saturate the BSV chain with huge blocks.

If we look closer at what has historically caused large blocks on BSV, it's been stress tests which their community has conducted on its main network. I'm not sure it counts as "demand" when it occasionally causes an enterprise to suffer downtime and declare that they'll stop running a BSV full node.

There have been plenty of enthusiastic uploads of holiday pictures, songs and home videos by BSV supporters - some apparently with financial ties to the enterprises that created BSV and basically run the show as far as investments into other startups are concerned. It has been speculated that much of the traffic seen on the BSV network is generated less out of real demand by independent actors, and more to create the appearance of transaction volume - in reality bulk data storage such as meteorological data, air pollution data, financial market data and the like.

Let us end the review of the blockchain data of the main Bitcoin forks BTC, BCH and BSV at this point.

Perhaps we can agree that the myth of external actors filling up a bigger-block chain to meet some legitimate data storage demand of theirs did not materialize.

Perhaps that is because mass storage has become so inexpensive. Or maybe we just need to wait more - perhaps it will happen in 18 months ;-)

If you are so motivated, do investigate the situation on any of the numerous other Bitcoin forks which rank low in marketcap or have dropped out. You might need to do some research, as these are unlikely to be known to most people. I'll hazard that you won't find any that died at the hands of this myth - I've not heard of a single such instance.

Now, it is totally possible that some joker goes out after reading this article, and spamkills some penny-stock blockchain to death, just to prove that there IS something to this myth. To him/her/it I say - will it really matter to anyone if you annihilate a total shitcoin just for the sake of an argument? Try doing it to any coin that has some weight.

One more thing.

To finally put the myth to rest, with befitting academic honors.

Peter Rizun's go at busting this myth with science

Back in 2015, when the Great Bitcoin Scaling Debate was already quite heated, Peter Rizun published a paper called "A Transaction Fee Market Exists Without a Block Size Limit" [10]. It was widely circulated and discussed, at least in the community that favored on-chain scaling through (among other things) block size increases, and presented by Peter at the Scaling Bitcoin conference in Montreal [9].

It eloquently laid out both the economic and information-theoretical case against the myth we've scrutinized empirically so far.

"The paper [..] shows that an unhealthy fee market — where miners are incentivized to produce arbitrarily large blocks — cannot exist since it requires communicating information at an arbitrarily fast rate." - Peter R. Rizun, in abstract of [10]

I highly encourage you to watch the video presentation on the paper if you haven't already!

You will find that some academics (J. Stolfi was one I found) criticized some of the paper's assumptions. Nevertheless, I think it does present a strong central argument, and as such is worth studying.

Finally, the empirical has the last say in any debate with the theoretical :-)

"In theory there is no difference between theory and practice. In practice there is."

Or not? You decide.

I hope you enjoyed this article. If you have questions, some interesting additional data, or just want to quibble about the presented materials and their interpretation, we'll see each other in the comments!


References



Comments

Awesome article! I've definitely learned something here.


I have a poll @ Memo on how long a mythbusting article should be. Am interested in your feedback:

https://memo.cash/post/7f5ad26cd55e48c4b51c34f6df39cbaca4ef0ee84bbb217c8273e72f0f8d8abc

You can also leave comments here of course - any suggestions for improvement welcome!

- btcfork

Thanks for another great article!

I'm not sure I fully agree with this one, though. The first part (big blocks => zero fees) is certainly busted.

But Greg's argument ("The demand for externalized-cost highly replicated external storage at price zero is effectively infinite.") doesn't say that. This is essentially the second part ("0 fees => infinite spam") and it might well be true. We just don't know.

I actually went ahead and tried to store some data on the BSV chain. I was naively going for a lot of data (the WikiLeaks insurance files). Well, it turns out it's just not easily done technically at this point, and it's also prohibitively expensive: it'd clock in at around a dollar per megabyte, so storing those WikiLeaks files (377GB) would've cost me roughly 377,000 USD. That's not "price zero". Would I have done it at price 0? Quite possibly, to be honest. Maybe even add in some encrypted backups of my family photos and videos? Possibly, yes.

It's a moot point, though, because "0 fees" just isn't going to happen in any dependable chain imo.

So yeah: myth busted and on a side-note: BCH seems to strive to find the right balance between BTC and BSV regarding this issue.
