Recently, Jonathan Toomim wrote a detailed proposal for addressing a significant issue on Bitcoin Cash: "benevolent" miners are losing money relative to profit-seeking miners. This is undesirable because Bitcoin is supposed to run on profit motives. If the miners who drive the chain forward are not profiting properly, the system is effectively broken.
This problem is also why the current Difficulty Adjustment Algorithm (DAA; the system responsible for ensuring that coin issuance is stable and fair) was put in place to replace the earlier Emergency Difficulty Adjustment. Much of the analysis that Toomim presents was done at that time. Toomim's proposal is that we switch to an algorithm called ASERT.
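For readers unfamiliar with ASERT, the core idea is that the target adjusts exponentially with how far the chain has drifted from its ideal block schedule. The following is only a rough floating-point sketch of that idea from me; the actual aserti3-2d specification uses fixed-point integer arithmetic, and the constants here are the commonly cited ones rather than a normative reference:

```python
IDEAL_BLOCK_TIME = 600      # seconds
HALFLIFE = 2 * 24 * 3600    # the two-day half-life commonly cited for aserti3-2d

def asert_target(anchor_target, time_delta, height_delta):
    """Exponentially adjust the target based on how far the chain has
    drifted from the ideal schedule since the anchor block."""
    exponent = (time_delta - IDEAL_BLOCK_TIME * height_delta) / HALFLIFE
    return anchor_target * 2.0 ** exponent

# If blocks arrive exactly on schedule, the target is unchanged:
# asert_target(t, 600 * n, n) == t for any n.
```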
The proposal's problem statement correctly points out that the DAA is responsible for this. It also points out that no Difficulty Adjustment Algorithm as currently proposed can entirely fix the problem. This is fundamentally due to a lack of instantaneous price information. That information can only be inferred from the behavior of miners, and only from noisy data. The DAA's calculation must rely solely on internal blockchain information; otherwise consensus cannot be reached, and we would also face the problem of trusted oracles.
Fundamentally, the DAA has two pieces of information about the current hashrate: block times and chainwork. Both are noisy data sources:
Block times have two sources of error: miners, for whatever reason, may not set their block timestamps to accurately reflect when the blocks were mined -- this is observed in practice. There is also the potential to manipulate these timestamps in order to affect difficulty and game the algorithm.
Chainwork for a particular block is the sum of the expected number of hashes that would be required to reproduce that block and the entire chain before it from scratch. It may be thought of as the blockchain's enthalpy. However, because chainwork is derived from probabilistic block hashes, it is necessarily an estimate: each block's chainwork diverges somewhat from the actual number of hashes that were performed to produce it.
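As a concrete illustration, using the standard Bitcoin convention for per-block work (this sketch is mine, not part of the proposal):

```python
def block_work(target: int) -> int:
    """Expected number of hashes needed to find a block at this target,
    using the standard convention work = 2**256 // (target + 1)."""
    return 2**256 // (target + 1)

def chainwork(targets) -> int:
    """Chainwork is the sum of expected per-block work over the chain."""
    return sum(block_work(t) for t in targets)
```

The number of hashes actually performed for any given block is a random variable with this value as its mean, which is exactly the divergence described above.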
As such, the noise in the data introduces a fundamental minimum error. These errors are independent of any choice of DAA. They limit the ability of any DAA to respond to changing hashrate over a period of time that is bounded below by roughly ten minutes, and that grows as the amount of switch-mining hashrate increases.
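To put a rough number on this: because solvetimes are exponentially distributed, a hashrate estimate taken over N blocks carries a relative error on the order of 1/sqrt(N), regardless of which DAA consumes the data. A small Monte Carlo sketch of my own (not from the proposal or the simulation) makes this concrete:

```python
import random

def estimated_hashrate(true_hashrate, work_per_block, n_blocks):
    """Estimate hashrate from n_blocks of simulated exponential solvetimes."""
    solvetimes = [random.expovariate(true_hashrate / work_per_block)
                  for _ in range(n_blocks)]
    return n_blocks * work_per_block / sum(solvetimes)

# Over a 144-block window the relative standard error is roughly
# 1/sqrt(144), about 8%, even with a perfectly constant true hashrate.
```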
However, I believe the simulation being used is itself fundamentally flawed. As one of the original authors of the simulation that Jonathan used, I see several important flaws that affect the validity of its output.
First, the simulation makes fundamental assumptions about the behavior of greedy miners, and those assumptions are likely not valid. We are currently observing strategic greedy miners that exhibit second-order thinking. They can, and currently do, strategically apply their hashrate and manipulate timestamps in order to create future changes in the difficulty adjustment.
Additionally, the lead author of the simulation made the relative profitability of mining Bitcoin Cash an independent random variable. This assumption makes the revenue ratio, computed from the current hashrate, appear independent of miner behavior. It is not.
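To make explicit what quantity I mean, here is a minimal sketch of a per-hash revenue ratio; the function names and the 2**32 hashes-per-unit-difficulty convention are mine, and the simulation's internals may differ:

```python
def revenue_per_hash(price, block_reward, difficulty):
    """Expected revenue per hash: coins per block times price, divided by
    the expected number of hashes per block (difficulty * 2**32)."""
    return price * block_reward / (difficulty * 2**32)

def revenue_ratio(price_bch, diff_bch, price_btc, diff_btc, reward=6.25):
    """Relative profitability of mining BCH versus BTC for one hash.
    The block subsidy cancels out when both chains pay the same reward."""
    return (revenue_per_hash(price_bch, reward, diff_bch)
            / revenue_per_hash(price_btc, reward, diff_btc))
```

Treating price_bch as an independent random variable hides the feedback loop: miner behavior moves the exchange rate, and the exchange rate feeds back into this ratio.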
Finally, no error is introduced into the timestamps. In reality, there is significant variability both in the reported timestamps and in the time it takes to produce a block.
In reality, greedy miners have a significant impact on the future exchange rate through their behavior. Because of the limitation of 10-minute block times, and a stronger response to variations in hashrate, switch mining actually becomes more of a problem, not less.
Greedy miners with significant hashrate available have a large incentive to switch large pools to BCH for brief periods of time, mine a block, and leave. The result is that the next block has a significantly increased difficulty, and the block after it a much reduced one. Selfish mining then becomes a natural response to protect profits: if you are the miner stuck with the much-reduced profitability of that next block, it makes rational sense to withhold it and mine an easy block on top.
These scenarios are explicitly not discussed in Jonathan's proposal, but are deferred to the "EMA for BCH (part 2)" and "Hash Attack Examples" GitHub issue threads. However, the fundamental problem here is that the DAA cannot adjust until a block is found. If a block is found quickly, the difficulty rises substantially, and it cannot come back down until a new block is found - even if the increase was caused by hashrate that has since left the chain, or by timestamp manipulation.
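The asymmetry is easy to see with any per-block responsive rule; the sketch below uses a generic exponential response of my own (not the aserti3-2d specification) purely to illustrate that the target changes only when a block arrives:

```python
IDEAL, HALFLIFE = 600, 2 * 24 * 3600

def next_target(prev_target, solvetime):
    """Generic per-block exponential response: fast blocks tighten the
    target (raise difficulty), slow blocks relax it."""
    return prev_target * 2.0 ** ((solvetime - IDEAL) / HALFLIFE)

t0 = 1 << 220              # hypothetical starting target
t1 = next_target(t0, 30)   # a 30-second block tightens the target...
assert t1 < t0             # ...and nothing can relax it again until the
                           # next block, with its own solvetime, is found.
```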
Interestingly, the analysis includes one proposal from ABC (cw-16-sha) for fixing the issue of resonances. It does so by modifying the current DAA (cw-144) to use a randomized window length based on the current block hash, which prevents resonances from forming around the difficulty targets. In testing from 2017, it also outperformed other algorithms on all attacks, at the expense of larger block-solvetime variability. However, it still does not solve the problems associated with switch miners. ABC has chosen not to implement it due to a variety of other factors not considered in Jonathan's proposal.
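A rough sketch of the randomized-window idea as I have described it here; the window parameters and the use of SHA-256 on the previous block hash are illustrative assumptions, not the actual cw-16-sha specification:

```python
import hashlib

IDEAL_BLOCK_TIME = 600

def window_length(prev_block_hash: bytes, base: int = 16, spread: int = 16) -> int:
    """Derive an unpredictable averaging-window length from a block hash,
    so switch miners cannot tune their schedule to resonate with it."""
    digest = hashlib.sha256(prev_block_hash).digest()
    return base + int.from_bytes(digest[:4], "big") % spread

def cw_random_window_target(targets, timestamps, prev_block_hash):
    """Simple chainwork/timespan rule applied over the randomized window
    (assumes at least window_length + 1 historical entries)."""
    n = window_length(prev_block_hash)
    avg_target = sum(targets[-n:]) // n
    timespan = timestamps[-1] - timestamps[-1 - n]
    return avg_target * timespan // (IDEAL_BLOCK_TIME * n)
```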
One of those factors is that changing the DAA is a significant undertaking, because it does not just require updating the node software; it also requires updating every SPV wallet in the ecosystem. This is no longer simply a problem for exchanges and miners, but for every user. It is nearly impossible to ensure that every user gets a notification that they need to update their wallet - and the change will necessarily involve interrupted service. In 2017, the user base was smaller and more engaged, which made such a change easier. Also, the magnitude of the profitability issue for benevolent miners was significantly larger then than it is now.
As such, changing the DAA should be considered carefully before a change is made. I applaud Jonathan's effort and thorough analysis of the choice of DAA. However, I would like to see:
Simulations performed under a constant price ratio, so that miner behavior can be seen more clearly.
Timestamps selected according to the solvetime probability distribution, so that short-term profitability variations show up more often due to random chance (see the sketch after this list).
Faster switching by rational miners. Pools can change chains very quickly, and the cost of doing so is minuscule; very small price fluctuations due to solvetimes can dramatically impact profitability.
Better tests for various rational miner behaviors.
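Concretely, the timestamp item above could look something like the following sketch; the function, the Gaussian clock error, and the 30-second sigma are my own illustrative assumptions, not part of the existing simulation:

```python
import random

def simulate_block(prev_true_time, hashrate, work_per_block, clock_sigma=30.0):
    """Draw a true solvetime from the exponential distribution, then add
    reporting error to model imperfect miner clocks."""
    true_solvetime = random.expovariate(hashrate / work_per_block)
    true_time = prev_true_time + true_solvetime
    reported_time = true_time + random.gauss(0.0, clock_sigma)
    return true_time, reported_time
```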
Finally, and most importantly: is there a way to make better data available to the difficulty adjustment algorithm by fundamentally changing the way we think about the problem? The proposal does not consider any alternatives that would fix the issues with difficulty adjustment algorithms entirely.
Some of these other options are:
Real-time targeting (RTT), where wall-clock time affects what difficulty will be accepted; this can be soft-forked into the protocol (see the sketch after this list).
Avalanche consensus to determine which block is chosen every 10 minutes.
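As a hedged illustration of the RTT idea only (this is my own toy formulation, not a concrete RTT specification): the target a candidate block must meet relaxes as wall-clock time since its parent grows, so difficulty effectively responds before any block is found.

```python
IDEAL_BLOCK_TIME = 600

def rtt_target(base_target: int, seconds_since_parent: float, exponent: float = 2.0) -> int:
    """Target a block must satisfy given how long it has been since its
    parent: the longer the wait, the easier the target becomes."""
    ratio = max(seconds_since_parent, 1.0) / IDEAL_BLOCK_TIME
    return int(base_target * ratio ** exponent)

# A block attempted 5 minutes after its parent faces a tighter target than
# one attempted 20 minutes after, which weakens the incentive to withhold.
print(rtt_target(1 << 220, 300) < rtt_target(1 << 220, 1200))  # True
```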
All of these options have tradeoffs, but they all allow block times to be significantly less variable, and they prevent selfish mining and greedy mining by ensuring that real-time price data is trustlessly provided to a DAA in a way it can react to quickly.
I have not seen any evidence of this, and you do not cite any evidence to back up this claim. I believe your claim is false.
You may be thinking of Zander's finding of occasional negative block solvetimes on the blockchain. This does not mean that those timestamps were intentionally manipulated; it could just be that node operators don't have very precisely set system clocks. If this were intentional and malicious behavior, we would expect that these negative solvetimes would be clustered near the legal limit for timestamp manipulation. If this were accidental and non-malicious behavior, we would see these negative solvetimes clustered close to 0, with larger offsets being much rarer than smaller offsets. What we actually see is the latter: almost all negative solvetimes are smaller in magnitude than -20 seconds.
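The shape of that distribution is easy to check directly from chain data; here is a small sketch of the kind of bucketing I mean (the thresholds are illustrative, not derived from the protocol rules):

```python
def classify_negative_solvetimes(solvetimes, small=20, large=3600):
    """Bucket negative solvetimes (in seconds) by magnitude: small offsets
    suggest ordinary clock skew, while large ones would suggest deliberate
    manipulation pushing toward the protocol's timestamp limits."""
    negatives = [abs(s) for s in solvetimes if s < 0]
    return {
        "small_clock_skew": sum(1 for s in negatives if s <= small),
        "intermediate": sum(1 for s in negatives if small < s < large),
        "near_limit": sum(1 for s in negatives if s >= large),
    }
```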
What the simulations show is that second-order thinking is unnecessary to explain the current oscillations. That is, first-order thinking is sufficient to explain the oscillations. You have not shown otherwise, nor have you shown second-order thinking to be present.