
The Bitcoin Cash community is currently debating how to improve our Difficulty Adjustment Algorithm (DAA). Mark Lundeberg wrote a paper (DA-ASERT) proposing a solution, Jonathan Toomim has implemented it, and that implementation is now being considered for BCH.

The purpose of this article is just to give a thumbnail sketch of how this DAA works on a very basic level.

All DAAs operate by examining how long it is taking for blocks to be found, and then adjusting the difficulty up or down. The original DAA in Bitcoin (BTC) is very simplistic and only adjusts every 2016 blocks (roughly every two weeks).

The situation in Bitcoin Cash is more complex, since we have miners switching back and forth between multiple chains. In addition, we are a minority SHA-256 chain. Because of these factors, we need to be able to adjust much faster than BTC does.

Bitcoin Cash's current DAA (known as cw-144) calculates a new difficulty every block, essentially by taking an average over the last 144 blocks.
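To make the averaging idea concrete, here is a deliberately simplified sketch of a cw-144-style rule (this is my own illustration, not the actual consensus code, and the function name is mine): average the target over the last 144 blocks, then scale it by how far the actual elapsed time deviated from the ideal 144 × 600 seconds.

```python
T = 600  # target block time in seconds

def cw144_next_target(targets, solvetimes):
    """Simplified cw-144-style rule: average the last 144 targets and
    scale by actual vs. ideal elapsed time over those 144 blocks."""
    assert len(targets) == len(solvetimes) == 144
    avg_target = sum(targets) / len(targets)
    # blocks arriving faster than 600 s shrink the target (i.e. raise difficulty)
    return avg_target * sum(solvetimes) / (len(targets) * T)

# blocks exactly on schedule leave the target unchanged
print(cw144_next_target([1000.0] * 144, [600] * 144))  # 1000.0
# blocks arriving twice as fast halve the target (double the difficulty)
print(cw144_next_target([1000.0] * 144, [300] * 144))  # 500.0
```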

But there have still been issues. For example, the BCH/BTC price ratio would rise slightly, and switch miners would come over and mine turbo blocks on BCH. The DAA would take many hours to slow them down. Once block production was finally slow enough, all those switch miners would leave, creating a hash "vacuum", and then we would get really slow blocks.

This back-and-forth between fast blocks and slow blocks is referred to as "Oscillation".

In designing cw-144, the basic issue was a trade-off between adjusting too fast and adjusting too slowly.

It's not good to adjust too fast, because there is a natural variance in block times. Outlier blocks (a single block with a particularly slow or fast time) shouldn't affect difficulty too much because these are expected on occasion. Thus, if you adjust too much too quickly, the difficulty won't be accurate, and it will create unnecessary oscillations.

It's not good to adjust too slowly either, because that too results in oscillations: the difficulty should have been adjusted sooner in response to miner behavior, but wasn't.

The main idea behind ASERT is that instead of using a moving average (like the last 144 blocks), we're only looking at the most recent block. At first glance, this appears like it would be much too responsive. In other words, it would adjust too much too quickly and would "bounce around" in an unstable manner.

But there's a twist: We also add a "relaxation parameter" so that the adjustment on each block is always going to be a small adjustment. Let's take a closer look and hopefully the idea will become clear to you.

Here is the first formula given in Mark's paper:
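(The formula image did not survive in this copy of the article. What follows is a reconstruction of the calculation as it is actually carried out in the worked examples below; the symbol names are mine, and the paper's exact notation may differ. Here t_new and t_prev are the solvetimes of the newest and the previous block.)

```latex
\text{target}_{\text{new}}
  \;=\;
\text{target}_{\text{prev}} \times
\exp\!\left( \frac{t_{\text{new}} - t_{\text{prev}} - T}{\tau} \right)
```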

Let's break this down:

exp() is just a fancy way to write "e to the power of whatever is inside the parentheses", where e is Euler's number (2.7182818~). As a side note, e is a fascinating number and pops up seemingly everywhere. Sometimes it is described as "the natural rate of change". You can watch an interesting video about that here.

T is the target of 600 seconds, and 𝜏 (Greek letter "tau") can be "anywhere from (say) 50T to 1000T" according to the paper. Let's pick a value of 100,000 seconds.

So, let's imagine the last block was found in 600 seconds and now suddenly there's a block that's found in only 100 seconds. What would happen?

First we take 100 and subtract 600, which gives us -500. Then we subtract T, which is also 600, to get -1100. Now we divide that by 100,000 and we get -0.011. And finally, we take exp(-0.011) and get 0.989~.

Let's take another example, this time with a particularly slow block. Assume the prior block took 600 seconds and the current one took 3600 seconds. First we calculate: 3600 - 600 - 600 = 2400. Then, 2400/100,000 = 0.024. And finally, exp(0.024) = 1.02429~.
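Both worked examples can be checked in a few lines of plain Python, simply redoing the arithmetic above:

```python
import math

T = 600        # target block time in seconds
tau = 100_000  # relaxation parameter, as chosen above

# fast block: previous solvetime 600 s, new solvetime 100 s
fast = math.exp((100 - 600 - T) / tau)   # exp(-0.011)
# slow block: previous solvetime 600 s, new solvetime 3600 s
slow = math.exp((3600 - 600 - T) / tau)  # exp(0.024)

print(round(fast, 3))  # 0.989 -> target shrinks ~1%, difficulty rises
print(round(slow, 3))  # 1.024 -> target grows ~2.4%, difficulty falls
```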

In the first example, when blocks were too fast, the algorithm told us to multiply the previous block's target by .989, which would lower the target by about 1%, and thus raise the difficulty by about 1%. The opposite occurs when blocks get too slow: The target increases by about 1% and the difficulty goes down by about 1%.

And this is why ASERT is so simple and so genius. It basically just adjusts each block by a small amount. If the previous block was an outlier, then it doesn't matter because the adjustment is tiny. And if there is a real change in miner behavior, the algorithm will keep making small adjustments, one block at a time, to get us back to normality within a relatively short duration. Oscillations can never be avoided completely, but the data appears to show that this type of DAA can be effective in minimizing them.
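To see the "many small adjustments" behaviour in action, here is a toy simulation (my own construction, using idealized deterministic solvetimes rather than the real-world randomness): hashrate suddenly doubles, so blocks initially arrive in ~300 seconds, and the per-block rule walks the target down toward half its old value, at which point 600-second blocks resume.

```python
import math

T = 600        # target block time in seconds
tau = 100_000  # relaxation parameter

target = 1.0   # normalized: 1.0 = equilibrium target at the old hashrate
for _ in range(2000):
    # with hashrate doubled, expected solvetime is proportional to target:
    # ~300 s at the old target, back to ~600 s once the target has halved
    solvetime = 300 / target
    # per-block ASERT update: each step is a tiny multiplicative nudge
    target *= math.exp((solvetime - T) / tau)

print(round(target, 3))  # converges to ~0.5 (i.e. difficulty roughly doubled)
```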


The main oscillation problem with cw-144 is not due to the window size, but because old blocks dropping out of the window have the same impact as new blocks entering the window. Hence any hashrate changes impact the difficulty again some 24h later, which then affects mining profitability and thus causes a hashrate change again, thereby reinforcing any oscillations in hashrate and difficulty.

Note that while ASERT does not use a moving average, it very closely approximates an exponential-moving-average DAA known as WTEMA. The mathematical relation between the two is indicated in Mark's paper.
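The connection can be seen with a first-order Taylor expansion (my sketch, not the paper's exact derivation): for small x, exp(x) ≈ 1 + x, so the per-block ASERT multiplier exp((t − T)/τ) is nearly identical to the linear EMA-style multiplier 1 + (t − T)/τ, which is the WTEMA form.

```python
import math

T, tau = 600, 100_000

for t in (100, 600, 3600):           # a fast, on-time, and slow solvetime
    asert = math.exp((t - T) / tau)  # per-block ASERT multiplier
    wtema = 1 + (t - T) / tau        # first-order (EMA-style) multiplier
    # the two multipliers agree to within a fraction of a percent
    print(t, round(asert, 6), round(wtema, 6))
```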

That's rather beautiful! If jtoomim hadn't provided code and simulation results, I would've been very skeptical of this algorithm's performance.

Isn't this beautiful? So little code and so little historical data necessary! I fancy that if Satoshi had had this idea, he might've actually used it.

On a side note: cluttering this beautiful piece of code (which every SPV wallet will have to implement) with unnecessary tables of constants for an approximation function, and adding features on top that nobody asked for, is ... ? I can't even find a word for that kind of behaviour. Despicable?
