How Scalable is Bitcoin?


It's interesting to see, after the whole scaling debate, how far tech has come in terms of SSDs, hardware processing, and internet speeds. I remember back in 2015 seeing SSDs cost almost $1,000 for a terabyte of storage, with $150 only getting you a 128 GB SSD (and a SATA SSD at that). Today, Moore's Law has still held true for the 6 years that have passed since then. What's key here is understanding that storage drives themselves don't go down by 50% in price; rather, you can get double the storage for the same cost. It seems to be following Moore's Law almost perfectly, because today you can get an 8 TB SSD for ~$800, an 8x increase in storage capacity, which makes sense, since 3 doubling cycles have passed in the last 6 years. I remember having a 25-30 Mbps connection in 2015, and that was average at the time. 100 Mbps was considered really fast, at the higher end of internet connections. Today, I have a 300 Mbps connection, and that's the lower end for an internet user. Internet speed has outpaced Moore's Law, and goes more along the lines of Nielsen's Law.

This article was inspired by a post I saw on r/btc claiming that both BCH and BTC are crippled in scaling, and that neither is viable. The assertion was that to process the same transaction throughput as Visa or Mastercard, we would need 2 GB blocks, which is supposedly not possible since the maximum demonstrated so far is 20 MB blocks. I also noticed a comment from reddit user u/fixthetracking, who explained that developments in technology would make 16 GB blocks easily achievable in the next 5-10 years. There were some other calculations in the comment itself, but it does show that the requirements, as Satoshi put it, "are not as prohibitive as people may think". I wanted to do a thought experiment and see how much Bitcoin can scale, and exactly how viable on-chain scaling is with today's available hardware. This will go in depth into hardware and other developments that are currently in place. I won't be discussing the software side in much depth; the focus is mostly on the hardware, internet bandwidth, and speed requirements for larger blocks. I will specifically be discussing (in order):

  1. Scalability of Current Payment Methods

  2. CPU

  3. Hard Drives/Storage

  4. Internet/Bandwidth

  5. RAM

  6. Software

  7. Summary

  8. Conclusion

Scalability of Current Payment Methods

In this part, I'm going to be looking at the scalability of current payment methods, which include credit cards and other forms of payment processing. Many times, we hear that Visa or Mastercard processes 65,000 transactions per second. This figure is generally thrown around to make it look like the blockchain is not scalable for payments, when in fact it is not entirely accurate. The actual number comes from a 'fact sheet' provided by Visa themselves that specifically states 65,000 transaction messages per second of capacity. It's important to make this distinction because it doesn't mean that Visa actually handles that much throughput. As the fine print states, this includes not only credit card transactions, but also other types of transactions such as 'cash transactions'. In fact, if Visa did try to handle and sustain that throughput for even a short amount of time, its servers would likely crash and not be able to keep up. So the question becomes: how much throughput does Visa really handle on a daily basis?

It's hard to find an exact number, as most figures are a guess at how many transactions Visa handles per day. The range is usually anywhere from 1,800 to 24,000 transactions per second, which is very broad and doesn't give us an accurate picture of how much throughput the network actually handles. Luckily, I decided to do my own research! There were two sources that I used to get a better idea of throughput. The first is a document from Visa themselves, which details the number of transactions Visa has processed, while the second is from this website, which graphs how many payments each payment processor handled in 2019. If we look inside the first document, it says there were 182 billion payments and cash transactions in fiscal 2018. Specifically, the document states that:

We provide transaction processing services (primarily authorization, clearing and settlement) to our financial institution and merchant clients through VisaNet, our global processing platform. During fiscal 2018, we saw 182 billion payments and cash transactions with Visa’s brand, equating to an average of 500 million transactions a day. Of the 182 billion total transactions, 124.3 billion were processed by Visa.

So out of all of the transactions that happened, only 124.3 billion, or about 68% of the payments that were made, were actually processed by Visa. This gives us a rough idea of how much throughput Visa has to deal with every day. Based on the calculations:

124 billion transactions ÷ 365 days/year ÷ 24 hours/day ÷ 60 minutes/hour ÷ 60 seconds/minute = ~3,932 transactions per second

We can assume similar scalability for other credit card processors like Mastercard and American Express. This gives us a benchmark of how scalable current systems are with today's technology, and that's on a centralized network. As an equivalent, this would mean having block sizes of around ~1 GB (give or take).
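To make the arithmetic concrete, here's a quick Python sketch of the same back-of-the-envelope calculation; the ~400 bytes per transaction is just my assumption for an average transaction size, not a figure from Visa's document:

```python
# Back-of-the-envelope math for Visa's self-reported processed volume.
TX_PER_YEAR = 124e9               # transactions Visa processed in fiscal 2018
SECONDS_PER_YEAR = 365 * 24 * 60 * 60
AVG_TX_SIZE_BYTES = 400           # assumed average transaction size
BLOCK_INTERVAL_S = 600            # target time between blocks

tps = TX_PER_YEAR / SECONDS_PER_YEAR
block_size_gb = tps * BLOCK_INTERVAL_S * AVG_TX_SIZE_BYTES / 1e9

print(f"Average throughput:    ~{tps:,.0f} tx/s")
print(f"Equivalent block size: ~{block_size_gb:.2f} GB")
```

Running this gives roughly 3,932 tx/s and just under a gigabyte per block, which is where the ~1 GB figure comes from.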

CPU Requirements

For CPU requirements, we would need a CPU that can process at least 8,000 signature verifications per second. For some time, Core developers did say that CPU and bandwidth were the biggest bottlenecks for running "full nodes". I was interested in exploring this topic, and came across some early entries in the Bitcoin Wiki regarding scalability and CPU requirements. One of the passages from the Wiki reads:

Bitcoin is currently able (with a couple of simple optimizations that are prototyped but not merged yet) to perform around 8000 signature verifications per second on an quad core Intel Core i7-2670QM 2.2Ghz processor. The average number of inputs per transaction is around 2, so we must halve the rate. This means 4000 tps is easily achievable CPU-wise with a single fairly mainstream CPU.

Keep in mind, this passage dates back to 2012, almost 9 years ago, and CPUs have improved significantly since then. If that isn't enough to convince you, I looked up the CPU model, and Intel released it back in 2011 (you can check the link). It is also worth mentioning that not too long ago, reddit user @ mtrycz wrote an article in which he benchmarks his Raspberry Pi 4 to see how many transactions per second its CPU can process on Scalenet. The Raspberry Pi has no problem processing even 1,100 transactions per second. I was interested in other node implementations, so I looked at Flowee, and it turns out that @ TomZ had published an article specifically stating that their node also had no issues processing 256 MB blocks on Scalenet. To quote the article:

Flowee the Hub worked quite well, with one volunteer testing it on his Raspberry Pi, 4th generation. The 256MB blocks worked just fine, at speeds that show it can keep up without problems.

The historical relevance of us successfully syncing 256MB blocks on a raspberry pi is based on the argument some years ago that we can never move from the 1MB block size because some people find RPi support important. This shows that scaling using a bigger blocksize in tandem with hardware and software improvement is not just viable, it is proven.

I predict that we will successfully sync 2GB blocks on a (then) modern RPi in a couple of years.

So it's not just one node implementation and anecdotal evidence that a Raspberry Pi today can handle 1,100 transactions per second; this has been well observed AND proven to be the case. At the bare minimum, with the worst hardware possible, we can process at least 1/4th of the throughput of Visa, which likely spends tens or even hundreds of millions on servers to maintain it. So the important question is: what is achievable with mainstream, or maybe even higher end, hardware?

Interestingly enough, we have benchmarks on those too... Jonathan Toomim, an independent Bitcoin developer, already ran a benchmark which showed BCH being able to process 3,000 transactions per second per core. Keep in mind that the CPU he used isn't all that high-end (it was released back in 2017), and it was handling near Visa-level capacity on every core while forwarding transactions between cores. To add, according to Toomim, a lot of the time was taken up by transaction generation rather than by transactions being validated or blocks being propagated. This means even a fairly mid-range CPU could come close to matching the transaction processing of Visa on a single core! So in the absolute worst case scenario, Bitcoin would be able to process thousands, if not tens of thousands, of transactions per second (on CPU), beating Visa by a landslide on home hardware.

This got me more interested in CPU requirements for users who want to verify signatures and transactions, and it made me think... How do we know how many transactions per second a given CPU can process based on its specs? This was actually perfect timing for such a question because I am currently learning Python, along with the topics of concurrency and parallelism. I have next to no knowledge, but based on what I know, clock speed and the number of cores are likely the main contributing factors to how many transactions a CPU can process. So I decided to dig deeper and look for answers, until I came across a very useful thread on Stack Exchange, which summarized the parameters needed to calculate ECDSA signature verification throughput:

According to the SUPERCOP measurements, an Intel Xeon E3-1220v6 ("Kaby Lake", roughly comparable to a low end 7000 series i7) with 4 cores at 3 GHz achieves 311689 cycles for one verification of an P-256 ECDSA signature

So, from this information, we know that it takes ~312,000 cycles (let's just round it) to verify an ECDSA signature. I would assume that the cycle count isn't specific to this particular CPU, so other CPUs should take roughly the same number of cycles to verify a given signature. What's interesting is that this number gets even better as we look at the developments in CPU algorithms for both individual AND batched signature verification. According to this paper written back in 2011, ECDSA signatures can be verified at 273,364 cycles per individual verification. So this number is within the same ballpark, but according to the paper, even faster verification can be achieved by processing signatures in batches, where multiple signatures (64) are verified together. This cuts the verification cost per signature by ~51%, requiring only ~134,000 cycles per signature.

I was curious as to whether we will ever be able to utilize batch verification, so I decided to do some research on the issue. Apparently, ECDSA signatures cannot be verified in batches. At least, according to this article I read, which states that:

With ECDSA every signature has to be verified separately. Meaning that if we have 1000 signatures in the block we will need to compute 1000 inversions and 2000 point multiplications. In total ~3000 heavy operations.

So, in this case, we can't take advantage of batch verification (if I understand correctly). This means we can assume 273,364 cycles per signature. With this information, we can calculate the number of transactions Bitcoin/BCH can process based on CPU clock speed and the number of cores, but we will have to make the following assumptions to get a realistic idea of how many transactions per second a given CPU can process:

- Windows 10 requires a minimum of 1 GHz clock speed to run on a CPU, so this will be subtracted from the computing power

- We will assume 75% utilization so that there's headroom for the CPU to accommodate other tasks that might use up processing power

With these in mind, we get the following results for many present-day CPUs (a rough sketch of the calculation is below):
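Here's a rough Python sketch of how I'm estimating these per-CPU numbers, using the ~273,364 cycles per verification and ~2 inputs per transaction from above; the example CPUs at the bottom are purely illustrative spec combinations I made up, not benchmark results:

```python
# Estimate transaction throughput from CPU specs under the assumptions above:
# subtract 1 GHz for the OS, keep 25% headroom, ~273,364 cycles per ECDSA
# verification, and ~2 inputs (signatures) per transaction.
CYCLES_PER_VERIFICATION = 273_364
INPUTS_PER_TX = 2
OS_OVERHEAD_GHZ = 1.0
UTILIZATION = 0.75

def estimated_tps(cores: int, clock_ghz: float) -> float:
    usable_hz = cores * max(clock_ghz - OS_OVERHEAD_GHZ, 0) * 1e9 * UTILIZATION
    sigs_per_second = usable_hz / CYCLES_PER_VERIFICATION
    return sigs_per_second / INPUTS_PER_TX

# Purely illustrative spec combinations, not benchmark results:
for name, cores, clock_ghz in [("Quad core @ 3.0 GHz", 4, 3.0),
                               ("6 cores @ 3.6 GHz", 6, 3.6),
                               ("16 cores @ 4.0 GHz", 16, 4.0)]:
    print(f"{name}: ~{estimated_tps(cores, clock_ghz):,.0f} tx/s")
```

Even the quad-core example lands around 11,000 tx/s under these assumptions, well above Visa's actual ~3,932 tx/s average, while the 16-core example approaches Visa's quoted 65,000 peak capacity.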

So, as we can see... achieving even the peak possible throughput of Visa is 100% doable on a mid-to-high end CPU. Even a Raspberry Pi 4 would technically be able to handle ~1.6 GB blocks if everything was properly optimized. These figures also tend to be in line with (or at least roughly within the range of) what tests have shown for CPU signature verification benchmarking, though it can vary from CPU to CPU based on other specs too. I am also planning to make an app for this (which I will put on GitHub once I understand how GitHub works, lol). It is also worth noting that Peter Rizun is working on cash drives (still technically an idea rather than in production), which would enable effectively unlimited throughput for next to no cost.

HDD/SSD Storage

Currently, there are two types of drives on the market: HDDs and SSDs. HDDs (otherwise known as "hard drives") use moving parts to read and write data, and are therefore more vulnerable to data corruption, much slower, and less reliable. The tradeoff is that they're relatively cheap: 1 terabyte of storage today is only going to run you ~$25 USD tops. SSDs (short for "solid state drives") don't have moving parts, and are therefore less vulnerable to data corruption, much faster, and very reliable. Of course, the tradeoff is that they are much more expensive; today, an SSD will cost at least $100 USD for a terabyte of storage (give or take). While HDD prices have remained somewhat stagnant (slowly declining), SSD prices are falling faster, and in my opinion SSDs are likely to replace HDDs (which will likely become obsolete) in the near future once they start becoming very cheap.

Since HDDs are fairly slow, they only have one interface, which is SATA. HDDs today use SATA III, which is capped at 6 Gbps, or 750 MB/s. In reality, HDDs only run at about 1/5th of this speed because they're inherently limited by the older technology. A standard HDD today operates at ~150 MB/s read and write, which (purely based on read and write speeds) is technically enough to accommodate 90 GB blocks. At first, it may seem like this is a non-issue, but in reality, the bottleneck for HDDs isn't transfer speed. It's latency: the time required for the drive's read/write head to actually reach the data it needs.

Most standard HDDs come with a rotational speed of 7,200 RPM, or 120 Hz. This means an average of 8.333... (call it 8 for simplicity) milliseconds per rotation, so on average it takes ~4 milliseconds (a half rotation) for the desired location to come under the head to store/alter data. In this case, read and write speeds are fast enough that they can be ignored. I was interested in how this would affect a node that is trying to sync with the network, or one that is already running in sync with it, so I had to look into this more. For HDDs, there is both a seek time and the rotational latency (as explained above), so in total, we can expect a delay of at least 4 milliseconds when reading from or writing to a file.

Files are stored in clusters and are often fragmented. This means that for the blockchain, which would be ~250 GB, the actual sync time would be much longer than the cumulative file size divided by the read/write speed. If we're purely looking at the HDD as the bottleneck for syncing, it would take half an hour at best, but several days at worst. The current cost of HDD space is next to nothing, with each gigabyte costing roughly 2 cents. So, with the largest drives commercially available today, we would technically be able to store 32 MB blocks for the next decade, and even gigabyte blocks (roughly Visa scale) for 4 months.
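To put the "half an hour at best, several days at worst" range into numbers, here's a rough sketch; the 4 KB worst-case chunk size is just my assumption for a badly fragmented drive:

```python
# Best case: one long sequential read at full throughput.
# Worst case: heavily fragmented data read in small chunks, paying
# seek + rotational latency on every access.
CHAIN_SIZE_BYTES = 250e9          # ~250 GB blockchain
SEQUENTIAL_BYTES_PER_S = 150e6    # ~150 MB/s sequential read
ACCESS_LATENCY_S = 0.008          # ~4 ms rotational + ~4 ms seek per access
WORST_CASE_CHUNK_BYTES = 4096     # assumed fragment size (my assumption)

best_case_s = CHAIN_SIZE_BYTES / SEQUENTIAL_BYTES_PER_S
worst_case_s = (CHAIN_SIZE_BYTES / WORST_CASE_CHUNK_BYTES) * ACCESS_LATENCY_S

print(f"Best case (one sequential read): ~{best_case_s / 60:.0f} minutes")
print(f"Worst case (fully fragmented):   ~{worst_case_s / 86400:.1f} days")
```

That works out to roughly 28 minutes in the best case and a little under 6 days in the worst case, which is where the range above comes from.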

Current areal density is likely about 1 terabit, or 125 gigabytes, per square inch. What's even more exciting is that HDD manufacturers have already come up with new technologies called HAMR (short for "Heat Assisted Magnetic Recording") and BPM (short for "Bit Patterned Media"), which together could easily achieve an areal density of 20 terabits, or 2.5 terabytes, per square inch well within a decade from now. In fact, there's a whole article on Seagate's roadmap to 120 TB drives by 2030, which gives a good idea of what's to come in the near future!

I also made a table/spreadsheet of how many years' worth of blocks, at various block sizes, a 120 TB HDD could hold; a rough version of the calculation is sketched below:
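This is just the simple version, assuming ~144 blocks per day and no pruning:

```python
# Years of blocks a 120 TB drive can hold at various block sizes,
# assuming ~144 blocks per day and no pruning.
DRIVE_TB = 120
BLOCKS_PER_YEAR = 144 * 365   # ~52,560 blocks per year

for block_mb in (1, 8, 32, 128, 256, 1024):
    tb_per_year = block_mb * BLOCKS_PER_YEAR / 1e6
    years = DRIVE_TB / tb_per_year
    print(f"{block_mb:>5} MB blocks: ~{tb_per_year:6.2f} TB/year, ~{years:7.1f} years on a 120 TB drive")
```

For example, 32 MB blocks add up to only ~1.7 TB per year (about 70 years on such a drive), while even 1 GB blocks fit for a couple of years unpruned.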

From the looks of it, HDDs will have no issues with storage!

SSDs, in comparison to HDDs, are very fast, so they work over two different interfaces. The first is the same SATA III, which is once again capped at 6 Gbps, or 750 MB/s, but since SSDs are faster, they can actually utilize 500-750 MB/s of what SATA III is designed to handle. The latency of SSDs is next to none (at least in comparison to HDDs), so in this case, the blockchain could likely sync at read/write speed without any issues. The good news is that SSDs are also getting cheaper at a faster rate than HDDs, and in my opinion, this is the technology that will replace HDDs.

This brings me to the second interface SSDs use, which is PCIe. It is much more interesting than SATA because there isn't exactly a hard cap on transfer speeds... at least not in the same way SATA has one. To summarize: with PCIe, there are two 'parameters' for an SSD, the generation of the interface itself and the number of lanes the "bus" occupies for transferring data. With each generation, the possible throughput of a given lane doubles, and doubling the number of lanes doubles the total throughput again. If this is a little hard to understand, this article from Wikipedia explains it well, along with this table I made (it's the same one from the article, but with a few modifications):

The 'x' numbers indicate the number of lanes used for transferring data.
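Since the table is really just multiplication, here's a small sketch of it; the per-lane figures are the commonly cited approximate values, so treat them as ballpark numbers rather than exact spec limits:

```python
# Approximate usable PCIe throughput (GB/s) by generation and lane count.
# Each generation roughly doubles the per-lane rate, and doubling the
# number of lanes doubles the total again.
PER_LANE_GB_S = {1: 0.25, 2: 0.5, 3: 0.985, 4: 1.969, 5: 3.938}

print("Gen |    x1     x4     x8    x16")
for gen, per_lane in PER_LANE_GB_S.items():
    row = "  ".join(f"{per_lane * lanes:5.2f}" for lanes in (1, 4, 8, 16))
    print(f" {gen}  | {row}")
```

A common PCIe 3.0 x4 NVMe drive, for instance, tops out at roughly 4 GB/s of interface bandwidth, already far beyond what gigabyte blocks would need.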

So even with today's tech, the speed of SSDs is more than enough to handle gigabyte blocks without any issues. If we were purely using SSDs for 1 GB blocks, storage would cost about $8,000 per year today. Obviously we're not going to have 1 GB blocks today, but if we did, the storage cost wouldn't be extremely expensive even if we chose to put absolutely everything on SSDs and not prune the data at all. With this knowledge, we can say that storage is unlikely to actually be an issue.
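For reference, the yearly cost works out roughly like this; the price per terabyte obviously varies, and something around $150/TB lands near the ~$8,000 figure above:

```python
# Yearly SSD cost of storing every block at a given block size, unpruned.
BLOCKS_PER_YEAR = 144 * 365   # ~52,560 blocks per year

def yearly_storage_cost(block_gb: float, usd_per_tb: float) -> float:
    tb_per_year = block_gb * BLOCKS_PER_YEAR / 1000
    return tb_per_year * usd_per_tb

print(f"1 GB blocks at $150/TB: ~${yearly_storage_cost(1, 150):,.0f} per year")
print(f"1 GB blocks at $100/TB: ~${yearly_storage_cost(1, 100):,.0f} per year")
```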

Internet/Bandwidth

Internet speed and bandwidth usage are things I've had a little bit of difficulty measuring with any conclusive answers. Earlier, I thought we could calculate upload speed requirements by multiplying the "download" rate required for blocks by the number of upload peers and the hops required to fully propagate a block. This would mean that an average user's upload speed would only be enough to propagate 128 MB blocks. I got that from this website, which calculates the bandwidth requirements for block propagation. It turns out that this is extremely outdated and assumes "legacy" relaying of information across the network. I was curious about the actual requirements for block propagation, so I asked a few people who do have knowledge on the issue. It turns out that upload is much less of a bottleneck than I initially thought.

So, not too long back, I asked what the bandwidth requirements for Scalenet on BCH would be. According to this comment by @ jtoomim, the bandwidth requirements are actually much lower than the specs given on the website above. Specifically:

256 MB over 600 sec is 0.42 MB/s or 3.4 Mbps. In practice, actual traffic is several times higher than this due to protocol overhead and the bandwidth requirements of sending/receiving inv messages to/from all of your peers, so actual usage is likely to be around 20 Mbps in each direction when spamming is going on. Otherwise, you can expect less than 0.1 Mbps of traffic on scalenet.

I got another helpful answer from reddit user @ Pablo Picasho, who came up with this:

Sending 256MB (megabytes) of tx traffic over 10 minute average block time would require ~3.5Mbps (note, mega_bits_ per second) upstream for each peer you'd want to send all of them to. Number of peers is configurable, but let's assume you have 8 and send to about half, that makes 4x3.5Mbps = 14Mbps upstream.

One needs to add some extra for transmitting the blocks themselves, so let's double the above figure, or 28Mpbs. In practice, depending on node client, peers and implemented block transmission protocol, there could be substantial savings on the block transmission, but I think the ballpark is right ... I conclude you'd need quite a hefty upstream to be a productive player and not just a drag on the network.

Both numbers are roughly the same, so I think using the calculation from the comment above makes sense, given that it will likely give us a good guess of what bandwidth requirements might be. With the current average connection speed worldwide, node operators could technically handle ~512 MB blocks according to these bandwidth requirements, so upload isn't all that much of a bottleneck or limiting factor. Hypothetically, if we wanted to run Visa-level throughput on a node, it would only require roughly a 100 Mbps connection, which is only about twice as fast as an average person's broadband upload. That's very conservative when you take into account that this is with today's tech, and for the scalability of a global payment network.
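Following the rule of thumb from the comments above (send transaction traffic to roughly half of 8 peers, then double the figure to cover block transmission), a quick sketch:

```python
# Upstream bandwidth estimate based on the rule of thumb quoted above.
def upstream_mbps(block_mb: float, peers_sent_to: int = 4,
                  block_interval_s: int = 600) -> float:
    per_peer_mbps = block_mb * 8 / block_interval_s    # megabits per second per peer
    return per_peer_mbps * peers_sent_to * 2           # x2 to cover block transmission

for block_mb in (32, 256, 512, 1024):
    print(f"{block_mb:>5} MB blocks: ~{upstream_mbps(block_mb):.0f} Mbps upstream")
```

This reproduces the ~28 Mbps figure for 256 MB blocks from the quote above, and puts 1 GB (roughly Visa-level) blocks at around 110 Mbps upstream.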

Another factor to take into account is actual bandwidth usage. Assuming the average user's bandwidth, the actual usage would be pretty high: about 18.4 terabytes per month. Most internet plans have unlimited data, but the exceptions that don't usually have a 1 TB cap. "Technically", someone could choose not to upload as many transactions and only download (or have fewer peers), which would cut down bandwidth usage.
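One way to land on a number in that ballpark is to assume roughly 28 Mbps sustained in each direction (the 256 MB block estimate above) around the clock; that's my assumption here, not an official figure:

```python
# Monthly data usage if a node sustains ~28 Mbps up and ~28 Mbps down
# (the 256 MB-block estimate above) around the clock.
MBPS_EACH_DIRECTION = 28
SECONDS_PER_MONTH = 30.44 * 24 * 60 * 60   # average month length

total_tb = MBPS_EACH_DIRECTION * 2 / 8 * SECONDS_PER_MONTH / 1e6
print(f"~{total_tb:.1f} TB per month")
```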

Note: TCP/IP does seem to be a bottleneck, but it turns out that there are better protocols that can solve the current issues with it. I don't have much knowledge on this subject, so I will choose not to comment on it.

RAM/Memory

For RAM, I am not as technically knowledgeable on the topic, but from my observations, RAM usage on my BCHN node tends to be about 4 times the size of the mempool. I'm not sure exactly how Bitcoin uses memory, but from other observations, this pattern also seems to hold on Scalenet with larger blocks. Assuming it holds, RAM wouldn't be much of an issue for running a node, even for something like gigabyte blocks; a Raspberry Pi could still technically handle them, though it would be a bit painful for the system.
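If that 4x pattern holds, the back-of-the-envelope math is simple; treating the mempool as holding roughly one block's worth of transactions is just my simplifying assumption:

```python
# Rough RAM estimate if usage really stays ~4x the mempool size.
RAM_TO_MEMPOOL_RATIO = 4   # observed ratio on my BCHN node

for mempool_mb in (32, 256, 1024):   # roughly one block's worth of transactions
    ram_gb = mempool_mb * RAM_TO_MEMPOOL_RATIO / 1024
    print(f"{mempool_mb:>5} MB mempool: ~{ram_gb:.1f} GB of RAM")
```

Even the gigabyte case lands around 4 GB of RAM, which is within reach of a Raspberry Pi 4.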

Software

Currently, BCH is artificially bottlenecked by the block size cap. It's user-configurable, but in practice it still works like a normal hardcoded limit; the reason BCH doesn't have a hard cap is so that devs don't have power over it. According to some of the BCH devs, the amount the software itself can actually process is much higher, somewhere in the range of 128-256 MB without any issues and with just a few optimizations. If this is the case, then the software itself is what limits the potential scalability of the network. The hardware (from the looks of it) doesn't seem to be much of a concern, especially when a Raspberry Pi today seems to have enough power to process Visa-level throughput (granted, without much headroom). Storage can become somewhat of a concern, but users are always free to prune data, so additional storage costs are optional.

Summary

With the current hardware (mid-range) available today:

  • A CPU can process many times the throughput of Visa or Mastercard (with Raspberry Pis technically being able to handle as much throughput as Visa, or a little more)

  • Average/below average RAM can likely handle gigabyte blocks with headroom

  • Storage isn't much of an issue because of pruning, but even for a few hundred dollars, storage can be adequate for ~256 MB blocks for an entire year. With HAMR and other developments in storage within the next decade, storage could become dirt cheap. SSDs are going to drop in price too, and could potentially replace HDDs if the technology keeps improving.

  • Average bandwidth speeds are technically enough to handle blocks of several hundreds of megabytes

What could be limiting scalability or potential scalability:

  • Software has capped blocksize, and other software optimizations need to be made (such as removing CPFP, and the chained limit). It's possible that the software itself could have other 'bottlenecks', but I don't have enough knowledge in this area to make an informed opinion

  • TCP/IP is a bottleneck, which can be changed so that users and miners running nodes can relay more transactions, and larger blocks

  • HDDs might potentially not allow for larger blocks because of a very high latency, but I'm unsure to what degree

From what I've learned so far, it seems that we technically have the hardware (even as average users) to handle blocks that are hundreds of megabytes in size. Gigabyte blocks also seem relatively doable, but would be slightly expensive, requiring better-than-average hardware and a lot more storage if a user chooses not to prune. I would like to add that I'm not someone with a lot of knowledge in computer science or even computer engineering. I'm just someone enthusiastic about Bitcoin (Cash), so what I'm writing in this article could very well be wrong. I've done a lot of research to the best of my ability, so this is just a collection of what I know so far. I'm currently in the first year of my Computer Science degree, so as I learn more about Bitcoin, I will make sure to write an updated version of this article, or update this one.

Conclusion

From the looks of it, average hardware today can process close to Visa level throughput (maybe a little bit less) while still remaining a peer-to-peer decentralized network that most people can afford to run a node on. It really isn't that far-fetched to have gigabyte blocks right now, although they aren't necessary. I think Satoshi's view on the scalability of Bitcoin encompasses how many of us feel about it today:

The existing Visa credit card network processes about 15 million Internet purchases per day worldwide.  Bitcoin can already scale much larger than that with existing hardware for a fraction of the cost.  It never really hits a scale ceiling.  If you're interested, I can go over the ways it would cope with extreme size.


Comments

This was a very interesting read because I often think about this myself. If I were to start listing all of the new pioneering technologies that were called out for being unscalable or too resource-consuming, tech developments which are now pretty much mainstream, like the personal computer, I would be here all day.

The truth is I have been hearing about Moore's Law slowing to a halt for as long as I can remember, and each time something happens that really pumps the numbers again and shows the law still holds.

Each time this argument about scaling problems, energy consumption, or simply tech limits comes up in my discussions, I can't help but wonder what these people need to see in order to understand that tech evolves at mind-boggling speeds.

I mean, it's not a new thing we're seeing; this habit of people declaring tech limits goes as far back as the Middle Ages, and people still don't get it.

What can you do...



I enjoyed this article.


Thank you :)
