BCH upgrade proposal: Use ASERT as the new DAA

by jtoomim, 3 years ago

Abstract

The current BCH difficulty adjustment algorithm (DAA) allows a switch miner to extract a large amount of profit from constant-hashrate miners. This strongly discourages constant-hashrate mining, encourages switch mining, and causes oscillations in difficulty and hashrate. In my test simulation, constant-hashrate miners lose on the order of 6% of their revenue under the current DAA, but only about 0.35% to 0.65% under the EMA family of DAAs. I propose a specific EMA variant, called ASERT, for activation in BCH's November 2020 scheduled protocol upgrade, and provide extensive simulation results on this algorithm, as well as an untested sample C++ implementation.


Why the DAA is a problem

The current BCH difficulty adjustment algorithm — known as cw-144, since the algorithm is based on chainwork — has a problem: it is prone to oscillation. This oscillation is the result of the current BCH DAA using a simple moving average, and has been described in detail in previous works. It was a known issue with cw-144 even in October 2017, before the algorithm was activated on mainnet.

(If you're already familiar with this oscillation, such as if you watched my Feb 27, 2020 video, feel free to skip this section and jump to the Stable DAAs heading.)

There are a few reasons why this bug, and the incentive problem it engenders, are problematic:

  1. It increases average transaction confirmation times by a factor of about 2x if 90% of the hashrate behaves in a rational manner;

  2. It causes the chain to stall completely if 100% of the hashrate behaves in a rational manner;

  3. It allows for some really nasty selfish mining attacks. Selfish mining attacks cause block reorgs, and reduce public confidence in the block chain, and make double-spend attacks more feasible;

  4. It punishes miners for being loyal to BCH;

  5. It punishes small solo miners who do not have the technological sophistication to automate hashrate switching strategies; and

  6. It allows for malicious strategies in which a miner can intentionally amplify the oscillations and pro-switching incentives, either by switching their hashrate themselves in a pumped fashion, or by manipulating the timestamps of the blocks they mine.

In brief, the oscillation is due to cw-144 using a simple moving average (SMA) over the last 144 blocks to estimate hashrate. When e.g. a high-hashrate block leaves the SMA window, it reduces the difficulty by the same magnitude as a new block entering the window would increase it. Blocks leaving the window therefore sharply change the difficulty in a direction that incentivizes miners to repeat the hashrate of the departing block. I call this effect "hashrate echoes." Notably, these echoes do not require any malicious miner behavior to occur. They are merely the result of miners following the direct short-term incentives that the DAA gives them.
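The echo mechanism is easy to reproduce in a toy model. The sketch below is illustrative only (a 5-block window instead of 144, made-up solvetimes, not consensus code): a single fast block raises an SMA-based difficulty estimate when it enters the window, then produces an equal and opposite jump when it leaves.

```python
# Toy model of an SMA-based difficulty estimate (illustrative, not the
# real cw-144 consensus code). TARGET is the ideal block interval.
TARGET = 600.0
W = 5  # SMA window in blocks (cw-144 uses 144)

# Twenty on-schedule intervals, with one high-hashrate (fast) block at index 8.
intervals = [600.0] * 20
intervals[8] = 60.0

def sma_difficulty(history, i, w=W):
    """Difficulty for block i, from the mean of the previous w intervals."""
    window = history[i - w:i]
    return TARGET / (sum(window) / w)

diffs = [sma_difficulty(intervals, i) for i in range(W, len(intervals))]

# The fast block raises the difficulty estimate while it is in the window...
entry_jump = diffs[4] - diffs[3]
# ...and produces an equal and opposite "echo" w blocks later, when it exits.
exit_jump = diffs[9] - diffs[8]
```

The exit step is exactly as large as the entry step but opposite in sign, which is precisely the incentive for miners to repeat the departing block's hashrate.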

These hashrate oscillations have a period of about 1.05 days, so they're not visible on most difficulty or hashrate charts (which use a 1-day sample interval), but they're clearly visible on this interactive chart.

These hashrate echoes first appeared in BCH shortly after the cw-144 DAA was activated in November 2017. At that time, the oscillation amplitude was about 1 EH/s, with daily peak hashrate typically at 2x the daily minimum.

Due to changes in the behavior of a few large BCH mining pools, these echoes became significantly more severe around October 1st, 2019, prompting an article from BitMEX Research. The oscillation amplitude reached 11 EH/s, with daily peaks at 4x to 8x the daily minimum. Similar extremes were observed around January 20, 2020, and during most of April 2020.

These oscillations in hashrate cause and are caused by changes in mining difficulty. These difficulty oscillations frequently exceed 10% in amplitude, causing dedicated miners to mine at a loss (compared to mining BTC) most of the time, with switch miners swooping in for around 2-6 hours per day to capture all of the profitable blocks.

These oscillations significantly worsen the user experience. The high-hashrate bursts are short; the hashrate droughts in between are long. Most transactions are issued during hashrate droughts, and the average transaction needs to wait much longer than the ideal 600 seconds for the normal Poisson block distribution. This effect can be seen in this interactive chart of confirmation times. Confirmation times have also gotten worse recently, averaging about 15 minutes since September 2019, and peaking at 28 minutes in January 2020.

In Sep-Oct 2017, kyuupichan and a few other developers wrote the "difficulty" software suite to simulate these difficulty oscillations and hashrate switching with different DAAs. In Feb 2020, I extended this suite with a graphing browser-based UI (code; slow online version). These simulations confirmed that with the cw-144 algorithm, these oscillations are produced without any malicious miner behavior. All that is necessary to generate them with cw-144 is for miners to mine whichever coin (BCH or BTC) is the most profitable at any given moment.

The simulations also prove that the oscillations are specifically cw-*'s fault. The oscillations are present in all variants of the cw family when miners follow rational incentives, with larger and more frequent oscillations for cw variants with higher sensitivity (e.g. cw-144 is worse than cw-576), but the oscillations are entirely absent from all other good DAA families, regardless of sensitivity.

The fact that it's possible for switch miners to earn 5-20% more than constant-HR miners is a bug. This bug is the result of the 1-day resonance and tendency for oscillation that the current DAA has. We can reduce the incentive for switch mining by about 90%-95% by switching to a different DAA, such as an EMA-family algorithm.

Stable DAAs

The instability in the simple-moving-average (SMA)-based cw-144 algorithm is caused by the influence of a block on difficulty ending abruptly when the block leaves the 144-block window. This instability can be avoided by having the influence of a block fade slowly over time. For example, if the influence fades linearly over 144 blocks, the instability disappears. This is what the lwma (linearly-weighted moving average) algorithm does, as well as Tom Harding's related wt (weighted time) algorithm. Another option is to have the influence fade exponentially over time, with e.g. a 144-block time constant. This is what the EMA (exponential moving average) family of algorithms do. This family includes many variants, such as Jacob Eliosoff's ema-1d and simpexp-1d, Tom Harding's wtema, and Mark Lundeberg's asert algorithms.
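The three decay profiles described above can be sketched as weight functions over block age. These are conceptual weights for comparison, not any node's actual code, and the constants are illustrative:

```python
import math

N = 144  # window length / time constant, in blocks (illustrative)

def sma_weight(k):
    """SMA: uniform weight that cuts off abruptly at the window edge."""
    return 1.0 / N if k < N else 0.0

def lwma_weight(k):
    """LWMA: weight fades linearly to zero over the window."""
    return 2.0 * (N - k) / (N * (N + 1)) if k < N else 0.0

def ema_weight(k):
    """EMA: weight fades exponentially; there is no hard edge."""
    return (1.0 / N) * math.exp(-k / N)
```

The SMA's hard edge at k = N is what produces the echoes; the linear and exponential profiles remove the edge while keeping roughly the same total responsiveness.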

Among these stable DAAs, we identified four major candidate algorithms: asert, wtema, lwma, and wt. All four algorithms performed well in my simulations and in zawy12's simulations. Below is a table of data on one such simulation run for all four candidate algorithms, plus cw for reference, at three different responsiveness settings corresponding to 1 day (as in the current DAA, cw-144), 2 days, or 4 days. The table includes the average confirmation time for transactions in the third column. That number should ideally be as low as possible. In the "Greedy" through "Steady" columns, the table shows the relative profitability of various mining strategies (compared to mining BTC). In the final column, the table shows the advantage to be gained by switching from the worst mining strategy to the best mining strategy. That number should ideally be as close to 0% as possible.

The current DAA, cw-144, performs the worst of all algorithms tested, by a large margin. The best DAA in this test, asert-576, took only 51% as long to confirm the average transaction and reduced the incentive for switch mining by 94.4% compared to cw-144. All algorithms except cw clustered closely together in performance at a given responsiveness setting. The performance advantage of the longest responsiveness setting (576 blocks, 4 days) over the middle setting was modest.

Differences in hashrate regulation performance between these four options in simulations were minor. There often seemed to be a small edge for the EMA-based algorithms (wtema and asert), but the advantages were small enough that we believed the choice should be made on other concerns, like implementation complexity. Both lwma and wt require sampling every block header in their windows (e.g. 288 blocks) in order to calculate the next block's difficulty, whereas the EMA family generally only requires sampling two block headers, so our attention focused on the EMA family.

The question then became which EMA algorithm to use. The most interesting ones seemed to be wtema and asert.

WTEMA and ASERT

The two EMAs that seemed like the best candidates were wtema and asert. The wtema algorithm was created by dgenr8 (Tom Harding) on or before Jan 1st, 2018. It is an exceptionally simple integer-only EMA based on the following pseudocode:

next_target = prior_target / (IDEAL_BLOCK_TIME * alpha_recip)
next_target *= block_time + IDEAL_BLOCK_TIME * (alpha_recip - 1)

In abstract terms, it first estimates how much the block's "target" (the reciprocal of difficulty) should be changed in order to achieve 600 second blocks if the most recent block interval were perfectly representative of the hashrate. Then, it performs a weighted average between that value and the last block's target, using 1/alpha_recip as the weighting coefficient. This makes wtema a recursively defined function, and a classic example of a simple first-order IIR filter, equivalent to an electronic RC filter. But don't be fooled by the simplicity: it performs just as well as a complex PID controller system, but with a fraction of the complexity.
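Translated into runnable Python (my sketch of the pseudocode above; real implementations use 256-bit targets in compact-bits form, and the alpha_recip value of 288 here is just an illustrative smoothing constant):

```python
IDEAL_BLOCK_TIME = 600
ALPHA_RECIP = 288  # 1/alpha: EMA smoothing constant in blocks (illustrative)

def wtema_next_target(prior_target, block_time):
    """One wtema step, following the pseudocode above with integer math."""
    t = prior_target // (IDEAL_BLOCK_TIME * ALPHA_RECIP)
    return t * (block_time + IDEAL_BLOCK_TIME * (ALPHA_RECIP - 1))
```

An on-schedule 600-second block leaves the target essentially unchanged; a fast block lowers the target (raising difficulty) and a slow block raises it.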

The asert algorithm was developed mostly by Mark Lundeberg on or before February 2020, though an equivalent formula was also independently discovered in 2018 by Jacob Eliosoff but without Mark Lundeberg's detailed analysis, and again in 2020 by Werner et al. at Imperial College London. asert is defined by the following formulae:

target_timestamp = (height + 1 - activation_height)*IDEAL_BLOCK_TIME
next_target = target_at_activation * 2^((last_timestamp − target_timestamp)/tau)

Note: ^ denotes floating point exponentiation, not integer XOR.

asert stands for absolutely scheduled exponentially rising targets. In abstract terms, asert works by defining an ideal block schedule, and setting each block's difficulty exponentially, based on how far ahead of or behind that schedule its parent block was. For every tau seconds ahead of schedule a block is, the next block's difficulty goes up by a factor of 2. Reasonable values of tau can be from a few hours to a few weeks. Like wtema, asert's code and math are simple, and only depend on two blocks. Unlike wtema, asert depends only on the most recent block and the genesis block or the fork block at which asert was activated. This avoids the recursive definition that wtema has. For more information on asert, I highly recommend Mark Lundeberg's succinct introductory paper.
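As a floating-point reference sketch of the formulae above (names and framing are mine; consensus code must use the integer approximation discussed later, and timestamps here are measured relative to the fork block's timestamp):

```python
IDEAL_BLOCK_TIME = 600
TAU = 2 * 24 * 3600  # 2-day half-life in seconds (the value proposed below)

def asert_next_target(target_at_activation, blocks_since_activation,
                      seconds_since_activation, tau=TAU):
    """One asert evaluation, following the formulae above.

    blocks_since_activation: parent height minus activation height.
    seconds_since_activation: parent timestamp minus fork-block timestamp.
    """
    target_timestamp = (blocks_since_activation + 1) * IDEAL_BLOCK_TIME
    return target_at_activation * 2.0 ** (
        (seconds_since_activation - target_timestamp) / tau)
```

Note that the function consults only the anchor (fork) block and the parent block; no intermediate headers are needed, and no recursion is involved.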

The two algorithms are functionally nearly identical when block intervals are reasonably close to normal, and only differ at the extremes.

wtema and asert are approximately equivalent for small block solvetimes, but diverge for longer solvetimes. In both directions, asert has superior performance. There is a problematic x-axis crossing at large negative solvetimes with wtema but not with asert. Large positive solvetimes result in asert reducing difficulty more responsively. Image credit: dgenr8
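That divergence can be checked numerically with one-step response functions (my sketch; M = 288 is an illustrative smoothing constant):

```python
import math

T = 600   # ideal block time, seconds
M = 288   # smoothing constant in blocks (illustrative)

def wtema_step(solvetime):
    """Multiplicative change in target after one block, per wtema."""
    return (solvetime + T * (M - 1)) / (T * M)

def asert_step(solvetime):
    """Multiplicative change in target after one block, per asert."""
    return math.exp((solvetime - T) / (T * M))
```

For ordinary solvetimes the two nearly coincide (e^x ≈ 1 + x for small x), but at solvetime = -T*M wtema's factor crosses zero and goes negative, while asert's factor stays positive for every input.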

The debate between the two algorithms seemed to center around four issues:

  1. Integer approximations — easier with wtema than asert, due to asert's use of an explicit exponential function, but possible in both

  2. Singularity and negative solvetimes — problematic with wtema, but not asert, as shown in the graph above (source: dgenr8). wtema likely needs rules restricting solvetimes to be greater than some value (e.g. > -0.5 * 144 * 600 sec for wtema-144)

  3. Mathematical and theoretical elegance — good with both, but better for asert

  4. Accumulation of rounding/approximation error — modest in wtema, but absent in asert due to its absolute nature

The main issues seemed to be #1 and #2. The need for integer approximations of 2^x makes asert more complex. The need to protect against large negative solvetimes makes wtema more complex. Ultimately, we decided that it would be better to add a few lines of code to the difficulty calculation function itself than to add or change other consensus rules in order to forbid large negative solvetimes, so we settled on asert.

It is important not to include floating-point code in Bitcoin consensus rules, since C and C++ do not guarantee IEEE 754 semantics, and different compilers and hardware can disagree on floating-point computations. So we need an integer approximation. We then began looking at different approximation implementations.

The method I chose relies on the identity 2^x = 2 * 2^(x-1) to shift the 2^x approximation into the 0 <= x < 1 range. Once this has been done, asert becomes remarkably insensitive to any residual error, and even an approximation as simple as 2^x ≈ 1 + x works pretty well. But we could do much better than that, so we did. After testing a few approaches, I eventually chose Mark Lundeberg's cubic approximation of 2^x for 0 <= x < 1. This approximation maintains an error margin below 0.013%.

The error for Mark Lundeberg's 2^x approximation is between -0.010% and +0.013%.
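Here is a Python sketch of that approximation in 16-bit fixed point. The cubic coefficients below are quoted from the published aserti3 specification material; treat them, and the helper names, as illustrative of the scheme rather than normative:

```python
def exp2_fixed(frac):
    """Approximate 2**x * 65536 for x = frac / 65536, with 0 <= x < 1.

    Cubic coefficients quoted from the aserti3 specification; the +2**47
    term rounds before the >>48 truncation.
    """
    assert 0 <= frac < 65536
    return 65536 + ((195766423245049 * frac
                     + 971821376 * frac * frac
                     + 5127 * frac * frac * frac
                     + 2 ** 47) >> 48)

def exp2_shifted(x_fixed):
    """Full-range 2**x via the identity 2**x = 2**floor(x) * 2**frac(x).

    x_fixed is x in 16.16 fixed point (may be negative).
    """
    shifts, frac = x_fixed >> 16, x_fixed & 0xFFFF
    return (exp2_fixed(frac) << shifts) if shifts >= 0 else (exp2_fixed(frac) >> -shifts)
```

The exp2_shifted helper shows the range-reduction identity in action: the integer part of x becomes a bit shift, and only the fractional part goes through the cubic.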

Here is an as-yet untested example C++ implementation of the aserti3 algorithm, and here is a well-tested python3 implementation of aserti3. Performance mirrors that of true asert very closely. Testing of the C++ implementation should follow soon.

The main competing approximation option was Jacob Eliosoff's exp_int_approx() implementation, which uses an algorithm grabbed from Stack Overflow to achieve arbitrary precision, but at the cost of the use of a for loop and more opaque math and code.

I chose a half-life of 2 days (tau = 2*24*60*60, equivalent to a responsiveness setting of 2*144/ln(2), or aserti3-415.5, due to a difference in exponential bases) because that seems within the range of values that perform near-optimally in simulation while also being easy for humans to understand and discuss — for every 2 days a block's timestamp falls ahead of schedule, the difficulty doubles. I consider a half-life of 1 day (aserti3-207.75) to also be acceptable. I recommend against going lower than 207.75 or higher than 576.
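The naming arithmetic, spelled out as simple unit conversions:

```python
import math

half_life_seconds = 2 * 24 * 60 * 60          # tau = 172800 s (2 days)
half_life_blocks = half_life_seconds // 600   # 288 blocks at 600 s/block

# Dividing a base-2 half-life by ln(2) gives the equivalent base-e
# "mean lifetime", which is the 415.5 in the name aserti3-415.5.
mean_lifetime_blocks = half_life_blocks / math.log(2)
```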

Activation

Because the asert calculation depends on the fork block's height and difficulty, implementation is substantially simpler and easier if activation is on a predetermined block height, rather than at a predetermined median time past (MTP). If the latter approach is used, then post-fork the new asert code will need to search the blockchain for the first block whose MTP exceeded the fork time, and extract the height and difficulty from that block's header. In contrast, if a fork height is used, the asert algorithm just needs to search the blockchain for the difficulty at the known height — a considerably simpler operation. While an MTP-based activation of asert is possible, it is my belief that it will be simpler and better overall to do this activation based on block height instead.

Bitcoin ABC has a tradition of adding a poison pill to their code in advance to fork off obsolete versions of their software at a predetermined MTP. If a fork height is used to activate asert, then the poison pill activation and the asert activation will not coincide exactly. This will likely result in two technical hard forks in quick succession, rather than a single three-way fork. This is a minor disadvantage to using a fork height activation. But even if the poison pill does not coincide perfectly, it will still serve its purpose: It will discourage individuals from continuing to run obsolete software.

I believe that the (potentially permanently) simpler code of a height-based activation of asert outweighs the timing mismatch issue, so I used a height-based activation in my sample C++ implementation. I am okay with this being changed if a majority of the BCH development community disagrees.

Testnet

The standard 20-minute testnet rule applies: A block may be mined with difficulty 1 if its timestamp is at least 20 minutes ahead of the previous block's. Because asert difficulty adjustments are based on time rather than block height, the "timewarp" technique to reset difficulty on testnet (by jumping the timestamp forward repeatedly to lower the average chainwork over the last difficulty adjustment interval) does not work. Because of this, testnet difficulties will be slower to adjust after this change than before, and testnet users may be stuck with 20 minute blocks for a few days after an ASIC miner leaves testnet. This could be addressed by reducing the tau value on testnet, but the cost of this change would be that testnet would less accurately resemble mainnet. Developers who use testnet frequently should discuss which approach they prefer.

Attacks

We have looked at a few different types of attack against aserti3, and so far, with existing consensus rules and non-ABC/BCHN node policy, aserti3 outperforms cw-144 in almost all scenarios. There are only a few that seem concerning. All attacks that have been identified so far are small in magnitude (usually smaller than other known and existing attack vectors) and/or easily mitigated with small node policy changes.

The attack scenarios of significant concern are all ones that the current DAA protects against by using a median-of-three prefilter. The current DAA uses the median timestamp of the most recent three blocks, as well as the median timestamp of the blocks 144, 145, and 146 blocks ago. This median-of-three prefilter effectively adds 1 block of delay to the DAA in most scenarios, which prevents difficulty from being exploited in a 2-block reorg attack, such as are used in most known selfish mining strategies. However, this extra block of delay also tends to cause oscillation issues in most DAAs that aren't already oscillatory, including EMAs, so zawy12 and I strongly advise against the use of median prefilters. This leaves two types of 2-block attacks as needing special attention.

The bottom graph shows difficulty regulation performance for a normal EMA DAA, like wtema. The top graph shows the same DAA, but with a 1-block delay added. This delay creates an instability and oscillation with a 2-block period, and is a disaster. Image credit: zawy12

The first attack is selfish mining in the absence of oscillations. In this scenario, a selfish miner A can make a block A1 with an artificially early timestamp — say, a solvetime of 1 second — which increases the difficulty of the next block A2 by 0.24%. This can cause chain A2 to have slightly more chainwork than B2, and can make nodes prefer A2 over B2 even if B2 was seen first. Bitcoin ABC and BCHN already have node policy rules that block this attack, but Bitcoin Unlimited and most other nodes do not. To address this issue, we recommend that all remaining nodes add hysteresis to the first-seen rule for blocks: nodes should only switch chains if the new chain's work exceeds the first's by some substantial threshold, like 10% or 50% of one block's chainwork.

The second attack is the FTL (future time limit) attack. In this attack, a miner creates a block with the timestamp set to the FTL (currently 7200 seconds in the future according to most BCH nodes' policies) to lower the difficulty on a secret chain, then mines as many blocks as possible until the difficulty on that chain returns to normal. This attack can be slightly more effective for asert than for cw in some (but not all) circumstances because asert's difficulty responds after a single block rather than two for cw. This effect is largely mitigated by the use of a slower responsiveness setting (aserti3-2d is about 1/4 as responsive as cw-144), but it is better addressed by simply lowering the FTL.
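A back-of-the-envelope sketch of the attack's single-step magnitude under the proposed 2-day half-life (my arithmetic, not from the original analysis):

```python
# How much can a timestamp pushed to the 7200 s future-time limit lower
# asert difficulty in a single step, with a 2-day (172800 s) half-life?
HALF_LIFE = 2 * 24 * 3600
FTL = 7200

target_inflation = 2 ** (FTL / HALF_LIFE)   # target rises by 2^(1/24)
difficulty_drop = 1 - 1 / target_inflation  # roughly a 2.8% one-step drop
```

Lowering the FTL to 600 s shrinks the same one-step drop to about 0.24%, which is part of why reducing the FTL is the preferred mitigation.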

Lowering FTL appears to be a straightforward improvement. Many altcoins have lowered FTL with no observed negative consequences. In the modern world of NTP and GPS, keeping system clocks synchronized to within 0.1 sec is easy, so there's no reason to allow up to 7,200 sec errors without punishment. Even in humanity's interplanetary future, an FTL of 600 seconds will not cause problems for the Bitcoin protocol, because a block whose timestamp reflects its publication time will be received with an old timestamp on other planets, not a future one. In the far future, once bitcoin mining starts to be performed on vehicles traveling at relativistic speeds, we might have to rethink the FTL; but until then, I recommend lowering it to 600 seconds or less. Furthermore, whenever the FTL is reduced, the DEFAULT_MAX_TIME_ADJUSTMENT limit on how far peer time can differ from local time (the "peer time revert rule", as zawy12 calls it) must be adjusted accordingly.

This is not an exhaustive discussion of potential attacks, but merely an introduction to the known attacks of concern. Discussion of a few other attack scenarios can be found in zawy12's EMA for BCH (part 2) and Hash Attack Examples Github issue threads.

Conclusion

The current DAA, cw-144, is causing a lot of problems for BCH. We should fix it. Switching to an EMA or LWMA algorithm would fix those problems. The aserti3-2d algorithm appears to be far superior to cw-144 overall, and I believe it to be the best option we have.

A good DAA should have the following properties, in rough order of importance:

  1. It should be stable, and not prone to oscillations;

  2. It should keep confirmation times low;

  3. It should keep incentives for miners to perform hashrate shenanigans low;

  4. It should keep incentives for miners to perform timestamp manipulation shenanigans low;

  5. It should keep incentives for miners to perform selfish mining shenanigans low;

  6. Because of #3, #4, and #5, honest mining strategies with steady hashrate should get near-optimal income;

  7. The chain should recover quickly after sudden changes in hashrate and/or exchange rate;

  8. The average block interval should be kept close to the target (e.g. 600 seconds);

  9. The algorithm should be mathematically simple and elegant;

  10. The algorithm should be easily understood and analyzed;

  11. The algorithm should be easy to implement elegantly and simply; and

  12. The algorithm should have few or no edge and corner cases.

In this article, I have provided some evidence that aserti3-2d likely satisfies most or all of these requirements. I encourage the BCH developer community, and the BCH community as a whole, to search for scenarios in which aserti3-2d fails to satisfy these criteria, and to compare aserti3-2d to cw-144 and any other candidate DAAs in those scenarios.


Comments

This is a very helpful and informative article. Thanks for this, my friend.


Keep sharing


Thanks for all your work on this. Getting a better DAA would be a huge improvement for BCH. Long droughts with no blocks followed by many in quick succession aren't working well or helping the BCH brand.

If we can get close to consistent 10 minute blocks that would be enormous.


Nice info BCH


This is what you call effort. My brain would be full of blood if I tried to write this article. How I wish I had a smart brain like yours 😅 Keep sharing


Thank you for this long but very informative article :)


Very nice article! Detailed, and I now understand more about the proposal. Thanks!


Amazing work. We are so lucky to have you.


Your clarity prevails.
Such clarity is not observed elsewhere.


Can't follow the technical details, but I just hope the new DAA (not Grasberg) finally solves the oscillation issue and we can move forward.


Thanks for the info but I'm not sure I really understand.


Thanks for this.


Always


this is great. I really appreciate reading this


Thankfully


Nice


Informative


Very good read. Thanks for the thorough work. This is really good :-)


To get ASERT activation to coincide with the poison pill activation, maybe it could make difficulty constant for 11 blocks, or use the other form of ASERT that's based on the 6th block in the past, on MTP. This would force sequential "solvetimes" (no negatives, although there is no problem from the negatives) and shouldn't hurt ASERT in any way, since it's supposed to be equivalent.

eq 2 in Mark's paper: target_N+1 = target_M * exp([t_N − (N + 1 − M)*T − t_M]/tau )

So I'm saying this might allow it to work with the poison pill problem.

target_N+1 = target_MTP * 2^([t_N − (N + 1 − (N-5))*T − t_MTP] / half_life )

target_N+1 = target_MTP * 2^([t_N − 6*T − t_MTP] / half_life)

I would rename tau in the code to "half_life", keeping in mind it's in seconds and we usually discuss "half life" in terms of blocks. Tau in the paper is "mean lifetime" in seconds.

half_life = ln(2) * mean_lifetime

2^(-time/half_life) = e^(-time/mean_lifetime)

What about changing the testnet rule to RTT? That seems like it could fix that problem.

I believe those of us discussing it are all in agreement the current median of 3 on timestamps and clipping on how much or little it can change per block (2x and 1/2 in current DA) should not be used.

Here are the forms of EMA and ASERT that I like to see them better:

st = solvetime of previous_target.

T=600

M = mean_lifetime in BLOCKs (as opposed to seconds) = 288 * ln(2) in this article.

WTEMA:

next_target = previous_target * (1 + st/T/M - 1/M)

Notice that if st = T there is no adjustment. If st is long, target goes up. It's incredibly simple. ETH is based on this idea, but inverted, which caused a problem they had to patch. Digishield can be seen as a less-ideal variation on it with a bad MTP delay.

ASERT:

next_target = previous_target * [ e^(-1) / e^(-st/T) ]^(1/M)

e^(-1) is the probability distribution's "expected" adjustment if avg st = T. e^(-st/T) is the observation. 1/M is the filter as jtoomim described.

WTEMA and ASERT are equal when using the substitution e^x = 1+x which is accurate for small x which is the case here thanks to M.

Imperial college researchers independently discovered ASERT this year and did a good derivation. https://arxiv.org/pdf/2006.03044.pdf

Mark Lundeberg intuited and suggested it around October 2019 in the BCH gang channel.

This algorithm with this half-life=288 is possibly the best algorithm for all coins. This is not just another attempt to get DA right. We're finally getting close to what DA should have been from the very start of BTC.

UPDATE: Don't forget the peer time revert rule. If still present in BCH, it should be changed from 70 minutes to FTL/2 (or just remove peer time and use local time) to prevent various Sybil or eclipse attacks on time. It should be re-written as a function of FTL instead of a separate constant, so that the code follows the security assumptions.


Wooow! This is great news for BCH. Thank you so much for this, sir!


Yes this Is


I know nothing about the calculations, algorithms and math side of the chains. However, I know that stable and predictable mining plus security are crucial for a well-known, developed blockchain. Once IOTA had problems with its nodes, not just the price dropped but also the enthusiasts questioned the security of their assets. If we are dedicated to evolving the existing blockchain into a better version, the fundamental aspects must be the strongest parts of our blockchain. Even if we combine use case, price, incentives, marketing etc., they are not as crucial as the operational aspects. Your article is quite informative and I believe that what you are saying makes sense to hundreds. I hope a better chain will be developed altogether. Thanks for your precious efforts @jtoomim


Good luck to this sir. All the best for #BCH and #ReadCash .. thank you for sharing.


Good news for all. Thank you for sharing this article.


Wow, it's long, but a research work is always worth reading. Thank you for this info, sir. It helps a lot.


Wow, all of you are geniuses. I can only wish for the best for #BCH and this platform. Kudos to the author, sir, you're a genius. And everyone who understands this. Awesome people.


I'd support this change, but really hope it won't cause another shitstorm 😣


I'm not good with all this technical stuff, but if it will help the BCH ecosystem you have my total support.


Thanks for this very helpful article sir

BCH


I believe adopting a new DAA would bring massive growth to BCH.


Learned a lot from this article.


Thank you, sir, for this article. I learned many important things from it. It's a very informative article.


Thanks for your research. But I need you to break this research into two so it can be digested easily.


Bitcoin fan


Some questions to the list in the conclusion:

  1. It should be stable, and not prone to oscillations;

Oscillations of what? Do you mean block interval variance or hashrate switching? Realizing these are distinct phenomena opens a wider realm of possible algorithms. We could design an algorithm with extremely low time variance that still incorporates hashrate switching (all miners jump into BCH for a few seconds near the 10m target, mine a block, then leave). Is this desirable? Why? Why not? What about orphaning (low variance should increase it)? If the desirable variance is not 0, then what is the best value?

  2. It should keep confirmation times low;

What does "low" mean? Is it 10 minutes? 10 seconds? 1 second? PoW is inherently slow. If we're about to compete with credit cards then PoW is not good enough, whatever algorithm we choose. We should then focus on other solutions like PoS, double-spend proofs, Avalanche, Storm, etc. If any of these is a good solution, then maybe DAA research is vain and pointless, because oscillations don't matter that much and the current algo is good enough, or totally the opposite: we have to slash PoW altogether?

What does the confirmation time mean in the first place? What does it mean to be confirmed? Maybe we should start nuancing this and stop considering it a binary value but rather think about levels of confidence measured in electricity costs (see: https://www.crypto51.app/)?

  3. It should keep incentives for miners to perform hashrate shenanigans low;

What do "hashrate shenanigans" mean exactly?

  4. It should keep incentives for miners to perform timestamp manipulation shenanigans low;

This one is quite easy to define: a time diverging from the actual time of mining. But how to measure that?

  5. It should keep incentives for miners to perform selfish mining shenanigans low;

Cool.

  6. Because of #3, #4, and #5, honest mining strategies with steady hashrate should get near-optimal income;

What does "honest" mean?

  7. The chain should recover quickly after sudden changes in hashrate and/or exchange rate;

How is "quickly" defined?

  8. The average block interval should be kept close to the target (e.g. 600 seconds);

How is "close" defined?

  9. The algorithm should be mathematically simple and elegant;

Why?

  10. The algorithm should be easily understood and analyzed;

By whom? Most people are incapable of mathematical analysis, so it won't be easy for them no matter what.

  11. The algorithm should be easy to implement elegantly and simply;

I agree; ease of implementation means a lower risk of bugs, and therefore of accidental hard forks.

  12. The algorithm should have few or no edge and corner cases.

Same as above.

In summary: despite a lot of interesting research, it still looks like we haven't even defined the exact problem; it's just a gut feeling that something is not right with the mining.

$ 0.10
3 years ago

despite a lot of interesting research, it still looks like we've not even defined the exact problem

That's because there isn't one exact problem. There are many exact problems -- roughly a dozen. A good DAA needs to solve all of them.

It would be great if this problem could be distilled into a single simple definition of the problem. It would also be great if we had a grand unified theory of physics. The fact that we don't have a GUT for physics should not stop us from building automobiles.

Like a good DAA, an automobile has to solve dozens of different problems simultaneously. It needs to be safe, efficient, fast, comfortable, beautiful, reliable, powerful, quiet, legal, maneuverable, good for hauling gear, good for towing, etc. If you try to "define the exact problem" that an automobile will need to solve, you're going to miss a lot of important stuff.

Oscillations of what? Do you mean block interval variance or hashrate switching?

This is like asking of light: "Oscillations of what? Do you mean magnetic field, or electrical field?" They're different aspects of the same phenomenon. Hashrate switching causes hashrate changes, which cause the difficulty changes, which cause profitability changes, which cause hashrate switching, which causes difficulty changes, etc.

Both block interval variance and (the large incentives for) hashrate switching cause problems. One causes problems in user experience in confirmation times. The other causes problems in mining fairness, decentralization, selfish mining resistance, and 51% attack susceptibility. We need to fix both problems.

What does it mean "low" [for confirmation times]?

For a blockchain that targets 600 seconds between blocks using PoW (Poisson process), the theoretical time per confirmation should be 600 seconds. A good DAA will keep confirmation times close to this value. Better DAAs will be closer to 600 seconds than bad DAAs. aserti3-416 gets confirmation times down to about 660 seconds, whereas cw-144 gets confirmation times around 1200 seconds. 660 seconds is lower than 1200.

What does the confirmation time mean in the first place?

That is the expected amount of time a transaction will need to wait until the first block is mined which could contain it, assuming that the transaction is issued at a uniformly distributed random time.
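
As a quick illustration (a hypothetical Python sketch, not from the proposal or the comment): for a memoryless Poisson block process, a transaction arriving at a uniformly random time waits a full mean block interval on average, because longer intervals catch proportionally more transactions (the inspection paradox):

```python
import random

def avg_wait_time(solvetimes):
    # A uniformly-timed transaction lands in an interval of length t with
    # probability proportional to t, and then waits t/2 on average,
    # giving sum(t_i^2 / 2) / sum(t_i) overall.
    return sum(t * t / 2 for t in solvetimes) / sum(solvetimes)

random.seed(7)
# Ideal steady-hashrate chain: exponentially distributed solvetimes, 600 s mean.
intervals = [random.expovariate(1 / 600) for _ in range(200_000)]
print(avg_wait_time(intervals))  # close to 600 s, not 300 s
```

For exponentially distributed intervals this evaluates to E[t^2] / (2 E[t]) = 600 s. Switch mining widens the interval distribution without changing its mean, which pushes the average wait higher; that is the mechanism behind cw-144's roughly 1200-second confirmation times.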

What do "hashrate shenanigans" mean exactly?

Hashrate shenanigans means using changes in hashrate in a fashion intended to game the DAA to gain more profit than an honest miner applying steady hashrate would.

This one is quite easy to define: a time diverging from the actual time of mining. But how to measure that?

We don't need to measure the actual frequency of timestamp shenanigans. We just need to measure the incentive for timestamp shenanigans. We can do that by using math and simulations to model how much profit an attacker can gain from a successful timestamp manipulation attack.

What does "honest" mean?

Same as in Satoshi's Bitcoin white paper. It means a miner who always mines on the current valid chain tip, includes all transactions that pay a sufficient fee and that aren't double-spend attempts, and publishes their blocks immediately.

How is "quickly" defined?

In the same way as "low" was earlier. Faster is better, ceteris paribus.

A DAA has multiple opposing goals. Its ability to satisfy all requirements comes with trade-offs that need to be made between the different requirements. Speed of response is naturally in conflict with stability. How exactly to make that trade-off will always be a subjective judgment call. Some people are best served by a sports car, and others are better off with a minivan. We need to decide which algorithm is right for BCH specifically.

How is "close" defined?

See above. That said, this particular requirement is easy to satisfy far more accurately than necessary, so the trade-offs aren't too relevant for this particular requirement.

Why [mathematical simplicity and elegance]?

Because people will be working with this algorithm for years or decades. Building a system on an inelegant foundation slows everything else down, and makes the work prone to errors and unexpected behavior.

By who?

By the people who need to validate it and review it, and ensure that it is not vulnerable to attacks. By the people who need to implement it. By the people who need to maintain it. By the people who need to explain it to newbies 20 years later about why they can trust that the system works and will continue to work.

Obscurity is not security. You can't be sure that a system is resistant to attacks if nobody currently understands it well enough to be able to imagine an attack.

P.S.: This comments section usually works better if you make multiple small comments rather than one large comment. The formatting and the edit windows on single large comments aren't great.

$ 2.00
3 years ago

That's because there isn't one exact problem. ... It would be great if this problem could be distilled into a single simple definition of the problem.

The goal of the difficulty algorithm is to estimate the current hashrate to set the difficulty so that the average solve and confirmation times are as close as possible to the block time, but random enough to prevent orphans.

Assuming we want solve times to follow the exponential distribution, it can almost be simplified to "estimate the current hashrate." That estimate is used to set the difficulty via target = 2^256 / HR / block_time. But the hashrate depends on the exchange rate, fees, and the difficulty itself, so the first definition is more complete. We have to use past difficulties to estimate what the hashrate was, but the difficulty we choose affects the hashrate. We could choose the needed difficulty more accurately if we knew the exchange rate and fees, but the algorithm has no immediate access to fees and no access to the exchange rate.
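
That relation can be written out directly. The following sketch (helper names are mine, purely illustrative) restates target = 2^256 / HR / block_time and checks that it round-trips:

```python
HASH_SPACE = 2 ** 256  # size of the SHA-256 output space

def target_from_hashrate(hashrate, block_time):
    # Each hash "wins" with probability target / 2^256, so the expected
    # time to a block is 2^256 / (target * hashrate); solve for target.
    return HASH_SPACE // (hashrate * block_time)

def expected_block_time(target, hashrate):
    return HASH_SPACE / (target * hashrate)

target = target_from_hashrate(hashrate=10 ** 18, block_time=600)  # ~1 EH/s
print(expected_block_time(target, 10 ** 18))  # ~600 seconds
```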

$ 0.00
3 years ago

have you tried mining yourself

$ 0.00
3 years ago

1) Oscillations in difficulty. We do not want low variance in block time because of orphaning

2) By "low confirmation time" he meant we want the average confirmation time to be as close as possible to the average block time. If it gets longer, that's evidence of switch mining causing delays. It's not possible to have a confirmation time less than the average block time. To calculate confirmation time, you pretend there is 1 tx per second. The average confirmation time for blocks i = 0 to N with solvetimes t_i is then sum( t_i * (t_i + 1)/2 ) / sum( t_i ).

3) He meant on-off mining to get higher revenue than other miners.

4) Timestamp manipulations refer to using the MTP, FTL, and timespan limits to make the difficulty go high or low in a public or block-withholding attack. ASERT has FTL=600, and the 3 other things I mentioned in my post above prevent all but one of the possible attacks. The remaining attack is not worth worrying about.

6) Honest means the miner obeys the consensus rules and possibly network-friendly, less-formal suggestions. For example, honest miners agree not to collude to perform a 51% attack to exclude other miners; they could do this without violating the consensus rules, but so far they have not, possibly due to fear of community retribution and/or lost faith in the coin, which would hurt their long-term mining-equipment investment.

7) "Quickly" means fast enough to respond to exchange rates. The main goal is to minimize switch-mining excess profits so that constant miners do not lose out.

8) Close means as small as possible. It turns out ASERT keeps it at 600 seconds to within less than half a second, so it's good.

9) Because people could argue against arbitrary decisions. A simple calculation implies it is the mathematically correct way to estimate the network hashrate and thereby set the difficulty, and ASERT fits that description. Simple also means a code error is less likely. Everyone I know of who is very knowledgeable about difficulty algorithms understands it and agrees it is the best choice.

10) There are some algorithms that are ridiculously complicated for no apparent reason and don't work well.

despite a lot of interesting research, it still looks like we've not even defined the exact problem

No, the problem is clear: It's to estimate the current hashrate and thereby set the difficulty so that the solve time will spread out and average 600 seconds.
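
The confirmation-time formula from point 2 is easy to sketch (a hypothetical Python helper, not from the comment):

```python
def avg_confirmation_time(solvetimes):
    # One tx per second: t txs arrive during a solvetime of t seconds,
    # and each waits (t + 1) / 2 seconds on average, giving
    # sum(t_i * (t_i + 1) / 2) / sum(t_i) overall.
    return sum(t * (t + 1) / 2 for t in solvetimes) / sum(solvetimes)

# Perfectly regular 600 s blocks: about 300.5 s per confirmation.
print(avg_confirmation_time([600] * 144))      # 300.5

# Same 600 s mean solvetime, but oscillating: longer average waits.
print(avg_confirmation_time([60, 1140] * 72))  # 543.5
```

Relative to perfectly regular blocks, variance in the solvetimes raises the average confirmation time even though the mean solvetime stays at 600 seconds.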

$ 2.00
3 years ago

2) By "low confirmation time" he meant we want the average confirmation time to be as close as possible to avg block time. ... It's not possible to have confirmation time less than avg block time.

Actually, it is possible to have confirmation times be less than the average block solvetime. An RTT can achieve this. If every block takes exactly 10 minutes, then the average confirmation time will be 5 minutes.

However, using RTTs to achieve exact block intervals like this will dramatically increase orphan rates, and is not a good idea at all. We're better off just reducing block intervals to get fast confirmations than to narrow the distribution of block intervals.

$ 0.00
3 years ago

Boosted!! This article is way too informative and deserves to reach the maximum users. Good job friend....

$ 0.50
3 years ago

Wow! Thank you very much for sharing this good news sir. This article is very informative.

$ 0.00
3 years ago

Thanks for sharing this content.

$ 0.00
3 years ago

@jtoomim I really wish there was a way I could help with your incredible efforts

$ 0.00
3 years ago

Amen

$ 0.00
3 years ago

Thanks for sharing news

$ 0.00
3 years ago

I am really wondering where you guys get this kind of article; you must really be into BCH. You're giving us a very useful article. Thank you for sharing your knowledge with us.

$ 0.00
3 years ago

Thanks for this; I got some ideas from your article. I like it.

$ 0.00
3 years ago

Good luck with your project sir. Getting better DAA for the Improvement of BCH. That's why I like reading articles when it comes to BCH. BCH for the Win!

$ 0.00
3 years ago

I still don't know much about the technical side of how crypto work especially BCH, but I am interested by the fact that it shows a lot of information that could help me to understand more about BCH. Thank you for doing and sharing this article.

$ 0.00
3 years ago

Given my limited knowledge about mining, I'll stick to not arguing about that topic and leave it to those who know it; otherwise the lack of wisdom on the topic would result in wasted ideas, and therefore wasted BCH, because mining is basic to the topic at hand.

$ 0.00
3 years ago

Interesting article algorithm shows that BCH will be crucial without DAA

$ 0.00
3 years ago

Pretty much sure bch price will grow more and more... Goodluck

$ 0.00
3 years ago

are they going to switch to some other algorithm in future or it is for life-long?

$ 0.00
3 years ago

Thank You Glad To see You

$ 0.00
3 years ago

This article has added to the little knowledge I had about crypto. Thanks for this. Really informative

$ 0.00
3 years ago

I hope i can write like that

$ 0.00
3 years ago

Thanks for sharing..

$ 0.00
3 years ago

Yes BCH value also affected by DAA

$ 0.00
3 years ago

Seems

$ 0.00
3 years ago

oh man was this hard to follow 🤔 and your 2/27 vid wasn't much better, lol, clearly this stuff is just way over my head 😣 but we all have our talents, and I'm grateful to those that have put their valuable time towards this very necessary research..

@jtoomim I really wish there was a way I could help with your incredible efforts

imho solving the DAA is the TOP priority when it comes to BCH protocol development .. and you certainly have my vote to see this implemented in Nov (by whichever nodes can safely have it ready in time -- easier said than done of course)..

I know this was not a solo effort (as you clearly stated Mark Lundeberg was integral in this ASERT solution), but I'm well aware that LOTS of very very smart people have also been contributing to this problem/solution for quite some time now .. really hope to see the fruits of your labor pay off soon 🙏

Cheers to you all!!
👏👍

$ 0.10
3 years ago

Every node has plenty of time to safely have this implemented by November. The actual amount of changed code is very small, and we're still 4 months away, and a little over 1 month away from the feature freeze date (Aug 15) for the upgrade. So we're on schedule so far.

It's possible that not all nodes will choose to implement it, though. That's a more pressing concern.

The DAA change will also affect all BCH wallets, not just nodes. Teams like Electron Cash, Bitcoin.com, and read.cash will need to update their code with the new DAA. That's one of the reasons why it's so important that the DAA implementation be kept as simple as possible. But as long as we can come to consensus on this by August 15th, we should be fine for a November 15th-ish deployment.

$ 3.20
3 years ago

The DAA change will also affect all BCH wallets, not just nodes. Teams like Electron Cash, Bitcoin.com, and read.cash will need to update their code with the new DAA.

I don't think this is 100% true. Electron Cash is surely validating PoW, but to my knowledge bitcoin.com and read.cash (and a bunch of other wallets) use RPC calls to dedicated nodes that validate the PoW themselves.

It would be interesting to see some kind of survey of which parts of the ecosystem will be impacted by changing the DAA.

$ 0.10
3 years ago

It's possible that not all nodes will choose to implement it, though. That's a more pressing concern.

yeah, that's more what I meant, POLITICS, lol..

I really have no opinion other than to defer to those I believe are putting in the "quality" time and energy to make the BEST decisions for BCH .. once upon a time, I put my trust in the Bitcoin Legacy devs to do the right thing, but they chose to screw us ALL .. this time around, I'll at least "TRY" to follow and keep up with the protocol stuff..

very impressed with everything that you're doing .. whatever YOU decide will have my full support 😉

$ 0.15
3 years ago

Hi there,

I wanted to point out that you made a couple of mistakes with regard to the FTL. First, you write that it is node policy; this is incorrect. It is actually part of the consensus rules, which can be observed from the fact that it was never in policy.h in the Satoshi client.

Your attack using the FTL seems to trip over itself. You describe an attack where a miner mines at the edge of the FTL and then continues mining on it in secret in order to gain a benefit. If this is your attack, then the FTL is irrelevant: he was not broadcasting his blocks, and without broadcasting his blocks he is not held to this rule.

Maybe you were thinking that the miner instantly broadcasts his in-the-future block. In which case all miners benefit from this equally and this is really not an attack at all.

Maybe the FTL would benefit from change, I don't know. Your attack scenario doesn't seem to give me the impression that it has to change.

ps. FTL isn't Faster-Than-Light; in this context it stands for Future-Time Limit.

$ 0.25
3 years ago

You're right on the consensus/policy issue for FTL. OTOH, the ABC/BCHN policy is actually a policy, not a consensus rule.

As for the mining strategies to exploit the FTL: the main one I referred to is a selfish mining variant. In selfish mining, the attacker will mine a chain which they keep secret as long as their chain is ahead of the honest chain in chainwork. As soon as the honest chain gets close, the attacker will publish their chain, thereby invalidating all of the work done by the honest miners, and capitalizing on any difficulty reduction they may have achieved in the interim. Mining blocks with timestamps near the FTL does indeed confer a benefit. Lowering the FTL reduces the magnitude of the benefit that can be achieved.

$ 0.00
3 years ago


Hello, thank you for this info. I like trading in cryptocurrency. Even though the market is very volatile and it's hard to trade, it is still profitable; you just need to control greediness.

$ 0.00
3 years ago

Wow!! This is good news for BCH. Thank you so much sir.

$ 0.00
3 years ago

Such a long article; I didn't get all of it, but I can say it's good. What we want is a better #BCH. So good luck to you, sir.

$ 0.00
3 years ago

Wow. Great article for sure. Deserve for the tips

$ 0.00
3 years ago

I hope this DAA will fix the issue of the entire BCH problems. But I'm pretty sure that there is one proposal that can be used to execute those problems. But I hope social aspects such as norms and egos of the community will stop because it can't help to be honest.

$ 0.00
3 years ago