Nice talk, and thank you for hosting it and for the transcript @blockparty-sh! I already watched it, but having a transcript helps a lot when commenting later, so here are my comments.
With these discussions and CHIP iterations, we're not only moving forward with the Group CHIP, we're also pioneering a process and raising the bar for future consensus upgrades, which is good for BCH's image! We're finding ways to assess and mitigate risks in order to have the features. The alternative would be to say there's risk we can do nothing about and give up on the feature. Some proposals could have unfixable problems, and this rigorous process will expose those so they never make it in. When it comes to Group, I'm betting a lot of my time and energy that this solution has what it takes to make it through! I would not waste my energy proving that something is not good. I want to prove that something IS good, and then actually have that good thing.
So I want to note that in a consensus-based token system, because it's consensus, interpretations affect whether a node views a chain as valid, and that is independent of proof of work. So no, proof of work does not resolve this; in fact, if different clients have different interpretations of token state in a consensus-based system, it will lead to a blockchain fork. A problem that was previously a token-system problem is now a problem for the entire blockchain. I just wanted to bring that up as a correction.
That risk is contained to the 6 node implementations and is not specific to Group Tokens. Any consensus change, like a new opcode, suffers from the same problem, does it not? That's why we need to assess and mitigate risks; the alternative is to not have the feature. I think we agree here: it's about risk vs. benefit, about how big the risk is and not whether there's risk. There is risk, and there are gaps in the proposal when it comes to identifying and mitigating it, but I believe those can all be addressed. It's not a weakness of Group Tokenization itself; it's a weakness of the current state of the CHIP.
If the ecosystem usually converges around the nodes with the most developer man-hours, shouldn't that help mitigate the risk? Risk can be defined and mitigated, and its probability can be brought close to 0, or even to 0. If we define risk as probability times severity, then let's find a way to drive both down as low as possible while not giving up on the feature.
Node software is sure to get a lot of eyes and testing before being activated. Probability is in node developers' hands. Severity, not so much: it's more inherent in the design, and will also depend on the characteristics of the offending TX, which is in the spender's and recipient's hands. The consequences can fall on both parties, or on just the spender and everyone else who happens to depend on the same node implementation.
Yes, and it massively impacts the confidence of the entire ecosystem, including BCH itself. Instead of having a problem that is isolated to one token system.
Now consider an SLP consensus failure, which can result in a total loss of funds. What impact on the image would there be if SLP had a catastrophic failure and loss of funds? I'd argue that moving the risk of having tokens from consensus to userland actually increased the risk for the whole ecosystem, and you give up levers of control: because it's permissionless, it can grow on its own and eventually blow up, affecting BCH's image along with it. That risk never goes away, because every new wallet/middleware software introduces it independently of what everyone else is doing.
If there had been a competitive miner-validated token solution from the beginning, nobody would have bothered with SLP. If the popularity of the current SLP solution increases, the risk only grows with time: more funds at risk and more different software out there, so both probability and severity increase with popularity. It can't simply be fixed, because it's not a software issue, it's an architecture issue, which is not really new and has been known from the start. SLP middleware can serve faulty data to a 100% compliant wallet, and the wallet then has a choice: trust it and risk catastrophic failure, or do the DAG walk itself, which doesn't scale. So the choice is between security and scalability. Only a miner-validated solution gives you both.
To reiterate, within the risk-vs-benefit framework, can we agree that these premises hold?
As SLP adoption grows, it increases both risks and benefits
With miner validation, post-activation risk remains fixed or close to fixed, while the benefits increase with adoption
OK, to be fair, here I'm arguing for any kind of miner-validated tokens, or even Script-validated tokens, which are also miner-validated but at another layer, so there's some distinction. Still, a consensus failure in interpreting some opcode or sequence of operations would result in a fork just like Group tokens would, would it not? I don't see how this risk of a fork differs between a native and a Script token implementation. If we already had all the primitives to build Script tokens, then you could say there's no new risk with Script tokens. But if enabling Script tokens requires a consensus change, then it also introduces this risk of consensus failure.
If one is in favor of such a proposal, one would say that the question is whether tokens should be first class. If one is not in favor, one would say that the question is whether the benefits to tokens are worth the strict increase in risk to BCH itself, because they are now brought under similar risk paradigms.
Agreed, this is a weakness of the current proposal and I'd argue not of the solution itself. We need to make a better case for it because there are indeed gaps in the proposal which I hope to fill as soon as possible, and I believe them all to be fillable!
I also want to address the supposed difficulty that is created by script-based tokens like ERC-20...
I recently realized that Group is conceptually closer to ERC-1155 than ERC-20, in that Group is also just one global token contract, written in C++, with which users interact by spending and crafting BCH TX outputs. I thought it might be interesting to share this view and see what others think of the comparison; maybe it adds a little weight to the benefits side of the scale.
It could be a stretch, and I admit I didn't dig deep into ERC-1155, but from that blog post I can make some observations, which I'm presenting here for consideration:
the core concept behind ERC-1155 is that a single smart contract can govern an infinite number of tokens.
You can similarly think of Group as a single such contract, implemented in the most efficient language: C++
ERC-1155 tokens are the first type of token that can execute a deterministic smart contract function by simply sending a token to an address.
Group genesis is also done by simply sending some BCH to an address, with the new token ID "issued" by a hash function.
From then on, interaction with this one global "contract" is done by spending that genesis output and crafting other outputs.
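The genesis mechanics above can be sketched in a few lines. This is a hedged illustration, not the CHIP's exact serialization: I'm assuming here that the group ID is derived by hashing the genesis outpoint; the real spec may commit to different fields, but the "issued by a hash function" principle is the same.

```python
import hashlib

def group_id_from_genesis(genesis_txid: bytes, vout: int) -> bytes:
    # Illustrative derivation: hash the genesis outpoint (txid + output index).
    # The token ID is issued deterministically by a hash function, so token
    # creation needs no registry and no permission.
    preimage = genesis_txid + vout.to_bytes(4, "little")
    return hashlib.sha256(preimage).digest()
```

Because the ID is a hash of a unique outpoint, two different genesis events can never claim the same group ID (barring hash collisions).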
One of the major updates to the ERC-1155 standard in the last year was the decision to move all metadata about a token to an external JSON file.
With Group, it can be internal or external: you can embed the actual data in the genesis TX, or you can embed a hash of it. Either way it's not editable later on, and the group ID itself carries it, compressed into the hash; it becomes a commitment to the metadata.
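The hash-commitment variant can be sketched like this. The canonical-JSON serialization is my own assumption for illustration; the CHIP doesn't mandate any particular metadata format.

```python
import hashlib
import json

def metadata_commitment(metadata: dict) -> bytes:
    # Serialize deterministically (an assumption for this sketch), then hash.
    # Only the 32-byte digest needs to go on-chain in the genesis TX.
    canonical = json.dumps(metadata, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).digest()

def verify_metadata(metadata: dict, commitment: bytes) -> bool:
    # Anyone holding the full metadata can check it against the on-chain hash.
    return metadata_commitment(metadata) == commitment
```

The full metadata can then live anywhere (a website, IPFS, a later TX) while remaining tamper-evident.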
The ERC-1155 standard guarantees that event logs emitted by the smart contract will provide enough data to create an accurate record of all current token balances. A database or explorer may listen to events and be able to provide indexed and categorized searches of every ERC-1155 token in the contract.
The BCH "event log" is the actual TX-es themselves. To build your DB you need to linearly parse the entire blockchain exactly once. Then you monitor TX-es as they come in, add them to your DB, and you have accurate state for ALL tokens. Or you can simply filter the ones you're interested in and ignore the others.
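Here's a minimal sketch of that single pass, using a hypothetical TX model (the field names and output shape are mine, not the CHIP's): track the UTXO set as you scan, and the surviving token outputs give you the full per-group state.

```python
from collections import defaultdict

def build_token_db(blocks):
    # blocks: iterable of blocks; each block is a list of TXes shaped as
    # {"txid": bytes, "inputs": [(txid, vout), ...],
    #  "outputs": [(group_id_or_None, amount), ...]}  -- hypothetical model.
    utxos = {}  # (txid, vout) -> (group_id, amount)
    for block in blocks:
        for tx in block:
            for outpoint in tx["inputs"]:
                utxos.pop(outpoint, None)       # spent outputs leave the set
            for vout, (gid, amt) in enumerate(tx["outputs"]):
                utxos[(tx["txid"], vout)] = (gid, amt)
    supply = defaultdict(int)                   # group_id -> unspent amount
    for gid, amt in utxos.values():
        if gid is not None:                     # ignore plain BCH outputs
            supply[gid] += amt
    return dict(supply), utxos
```

Feeding new blocks through the same loop keeps the DB current, and filtering on one group ID gives you a per-token view for free.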
Imagine being able to rewind and fast-forward through time and analyze usage patterns and life cycles of completely different ERC-1155 tokens, regardless of which smart contract is being analyzed.
Yes, imagine :) Group can be used inside Script contracts too, because every Group UTXO is also a BCH UTXO. To be fair, current Script capability can't make them interact with each other; it can only place limits on how they can be spent. However, interaction from the Script path is open with some introspection opcodes in the future. Note that Group tokens CAN interact with each other and with BCH, just not through Script but through existing SIGHASH capability, which enables atomic swaps, DEX, and CoinJoin/CashFusion.
Properties, namely authorities, the fenced BCH, the covenants, and the fact that the authorities can change confidence on the fly. And not to mention that we are also introducing a new number system that is not even mentioned there.
Each of these changes warrants its own evaluation. It is a massive, complex change, and I'm very skeptical of the claim that it is simple, or elegant, or anything of that sort.
I accept that the scope with all those features was too big to chew. Dropping all advanced features and keeping only the MINT, MELT, and BATON authorities should help in that regard. In the working version of the CHIP, these reduced features are introduced from the ground up:
Start with TRANSFER + GENESIS: now we have basic tokens where supply can only be created at genesis
Add just the MINT, and now we can make more supply later on. It may help to think of MINT as a "magical" token amount, which can be fanned out to any number of ordinary or magical outputs.
This allows us to:
Issue new tokens as long as some MINT UTXO exists,
Transfer ownership of the MINT authority the same way ownership of BCH is transferred: by spending it to other addresses,
Provably destroy minting capability by spending all MINT carrying UTXOs without creating a MINT authority "change" on the output side,
Verify a token supply by filtering for only those TX-es which have a MINT on either side of the TX. Combined with a Merkle proof we can make an SPV proof of the max. token supply. Note that this proof would grow linearly with the number of authority transactions.
Through SIGHASH have token/BCH atomic swaps enabling DEX and token CoinJoin/CashFusion.
A few more on top of that:
Add the MELT, and now we can destroy tokens with clear intent and permission, making it possible to audit the exact "official" supply,
Add the BATON, and now MINT and MELT become one-time-use when alone; they can be fanned out only if accompanied by this BATON bit.
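Putting MINT and MELT together, the local-scope consensus rule can be sketched roughly like this. This is a simplification under a coin model of my own invention (triples of group, amount, authority flags); the CHIP's actual rules, BATON semantics included, are more involved.

```python
from collections import defaultdict

MINT, MELT = "mint", "melt"

def check_tx(inputs, outputs):
    # inputs/outputs: lists of (group_id, amount, authority_flags) triples,
    # a hypothetical coin model for illustration. Validation needs only the
    # TX itself plus its prevouts -- no global token index.
    in_amt, out_amt = defaultdict(int), defaultdict(int)
    auth = defaultdict(set)
    for gid, amt, flags in inputs:
        in_amt[gid] += amt
        auth[gid] |= flags
    for gid, amt, _flags in outputs:
        out_amt[gid] += amt
    for gid in set(in_amt) | set(out_amt):
        if out_amt[gid] > in_amt[gid] and MINT not in auth[gid]:
            return False  # creating supply requires spending a MINT authority
        if out_amt[gid] < in_amt[gid] and MELT not in auth[gid]:
            return False  # destroying supply requires spending a MELT authority
    return True
```

So a plain transfer balances per group, inflation without a MINT input is rejected, and burns are only valid with a MELT input: all decidable from the single TX.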
I suppose that depends on your interpretation, because obviously you're recording additional data and fields in UTXOs, so whether that is a change, whether or not indexing anything additional in the UTXO is or is no change to the UTXO, I suppose that is very much up for interpretation.
You need an index only if you want to provide a service that requires a token DB, like a block explorer, or if you want to generate SPV proofs of supply. With Group, that index can be built in a single pass over the blockchain and maintained by monitoring new blocks. If you're a validating/mining node, you don't need an index; you only need to verify the rules in the local scope of a single TX. But I think the argument was about something else anyway; I just thought it'd be good to clarify this indexing requirement.
I think SmartBCH is exciting; we now have our horse for DeFi fueled by BCH, and I've been thinking about what this means for tokens. There's still value in Group IMO, and there would be even more value if Group tokens could be moved back and forth, so you as a holder could choose whether to have them secured by BCH PoW and used as cash, or moved over to the sidechain to interact with the full power of DeFi. If we're going to have a one-year upgrade cycle, then whatever we don't agree to include now will have to wait a whole year; what will happen in the world until then? SmartBCH has an advantage there: they're not constrained by the pace of HF changes. To be fair, anything implemented in Script can be changed independently of the HF schedule, but since it's not Turing-complete it will always be limited by the available primitives, the addition of which requires a HF.
Why did SmartBCH choose BCH as fuel? They didn't need permission to use BCH, and they wouldn't need permission to start using something else. I don't think it will happen, because I believe incentives are aligned, but let's not forget that it's not in "our" hands what SmartBCH does.
So, I'm the original guy who proposed moving to a one-year schedule, so you probably already know where I stand. Um, but the justification for moving to a longer schedule is that we were on a six-month schedule, and it had a terrible, terrible, terrible record, such that BCH suffered not from a lack of excitement over features, but from a lack of confidence altogether. And that is not something that can be addressed by any given feature, even if, you know, it is "the bee's knees", the most revolutionary thing in all of crypto; none of those features will address a lack of confidence. And every time any shenanigans, whether technical or political, happen that threaten people's confidence, we are that much further away from peer-to-peer electronic cash for the world, and that is what I'm most concerned about.
I think I agree here; BCH has really suffered a lot from the image problem. I believe the CHIP process that is emerging will help in that regard! If Group succeeds, it will not only be a success for the proposal; I'm hoping it will be a success for the process! I'm hoping we'll look back and see how we went from "rejected by default" to "an ironclad proposal was made through iteration and with the help of everyone who interacted with it."