"And do so without jeopardizing security or decentralization."
Nope. All these "instant sync" schemes have serious censorship risks, as the instant sync only proves the rules were followed; it does not, and cannot, give you the ability to participate.
While a good idea, unfortunately it couldn't possibly work with Bitcoin.
The reason Coda works is that it's an account/balance-based system: you have a list of accounts with balances, build a Merkle tree of the balances, and when you update the balances you make a proof of how the Merkle tree changes.
When SPV wallets request their balance they are provided a Merkle path and a SNARK. They feed the Merkle root of the path into the SNARK to confirm that this is indeed a branch of a Merkle tree produced by a sequence of valid state transitions.
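A toy sketch of that Merkle-path check (illustrative only - SHA-256 stands in for whatever hash Coda actually uses, and the SNARK verification itself is elided):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root_from_path(leaf: bytes, path: list, index: int) -> bytes:
    """Recompute the root from a leaf and its authentication path.
    `path` holds sibling hashes from leaf level up; the bits of `index`
    encode whether the running node is the left or right child."""
    node = h(leaf)
    for sibling in path:
        if index & 1:   # our node is the right child at this level
            node = h(sibling + node)
        else:           # our node is the left child
            node = h(node + sibling)
        index >>= 1
    return node

# The client would then feed the recomputed root into the SNARK verifier:
# assert verify_snark(proof, public_input=root)  # hypothetical verifier
```

If the recomputed root matches the root the SNARK attests to, the balance leaf really is part of the proven state.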
Bitcoin doesn't work like this, so this scheme can't work. When Bitcoin undergoes a state transition, e.g. a block gets validated and appended, the whole blockchain is open to be queried: transactions from years ago, in blocks long gone, might be spent. Contrast this with Coda, where only the latest balances are queried.
The resulting difference is that the (private) input to Coda's SNARK prover will be a small list of balances, whereas the (private) input of such a scheme on Bitcoin will be out of bounds by orders of magnitude.
Thanks for the feedback. When Bitcoin transitions it doesn't need to query any part of the chain, only the UTXO set. It just checks for entries, adds entries, and removes entries. I don't think that will be prohibitive.
The utxo set could also be expressed as a merkle tree or similar. I'm planning for my next article to be about ways we could represent the state. If you don't mind I'll try to ping you when it's ready to hear your thoughts.
>When Bitcoin transitions it doesn't need to query any part of the chain, only the utxo set
The UTXOs exist within the chain though. What distinguishes a fake transaction from one which could have been injected into the UTXO set, for example a double spend? You need to know that it has a Merkle path whose Merkle root exists in a valid block.
>I don't think that will be prohibitive.
The most recent research on high-end hardware I can pull from memory is the STARK paper (there's another on sharding SNARKs too). If I remember correctly, to play around with a database under 100 MB they used a computer with 128 GB of RAM. And they were doing a simple search over it, nothing fancy.
>If you don't mind I'll try to ping you when it's ready to hear your thoughts.
Yeah sure. Sorry for the pessimism, but I feel like when you dig into it a little bit you're gonna have to take a step back and be like "shit, this is unfeasible". I've been a little way down this road myself. Some things to consider:
\- SNARKs take inputs of a predetermined size (you can bound them but then you have to always overestimate and your proving time will suffer)
\- If you wanted a SNARK taking different predetermined sizes you'd have to have a different proving key for each (which means the recursion gets a lot more complicated)
\- Even simple operations like matrix multiplication can become prohibitively large, and Bitcoin's validation rules have a lot of branching control flow
\- Proving time for SNARKs scales as n log n in the number of gates, and the number of gates blows up really fast as you move to more complex programs. For example, I wrote a circuit just to validate the transaction format, not even the scripts. It basically just concatenated scripts and values together to make a Bitcoin tx and checked that the outputs sum to the inputs - the circuit is almost prohibitively large already (you can check it out [here](https://github.com/recursivesmelting/rs-proof-lib)).
We'll see how performance improves over time. This scheme only needs to be viable before we get to a point of too few remaining full nodes.
I'll have to look into all your advice more deeply.
>The UTXO's exist within the chain though. What distinguishes a fake transaction from one which could have been injected into UTXO, for example a double spend?
The utxos would be part of the authenticated state and can only be manipulated according to valid circuit rules. All txs that manage utxos will read and write from this set directly. Same as a utxo cache in a full node.
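A rough sketch of what "reads and writes from this set directly" could look like (names and structure are illustrative, not any actual node's implementation; a real UTXO set also tracks scripts, heights, etc.):

```python
from typing import Dict, List, Tuple

# Outpoint (txid, vout) -> value carried by that output.
UtxoSet = Dict[Tuple[str, int], int]

def apply_tx(utxos: UtxoSet, txid: str,
             inputs: List[Tuple[str, int]],
             outputs: List[int]) -> None:
    """Spend the inputs and create the outputs, rejecting double spends
    and inflation - the same checks the circuit would enforce."""
    spent = 0
    for op in inputs:
        if op not in utxos:
            raise ValueError(f"missing or already-spent input {op}")
        spent += utxos.pop(op)
    if inputs and spent < sum(outputs):
        raise ValueError("outputs exceed inputs")
    for vout, value in enumerate(outputs):
        utxos[(txid, vout)] = value
```

A double spend fails the membership check the second time around, exactly like it would against a full node's UTXO cache.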
> I wrote a circuit...
Are you the RS developer? I'm a big fan of RS. Great work.
>This scheme only needs to be viable before we get to a point of too few remaining full nodes.
Yeah. For the time being I'm sure this scheme can be simplified to just speed up SPV wallets, so they don't have to download the chain of headers to check; they'd just receive a proof every time they wish to refresh their balances.
The SNARK statement would just have the last header + last proof as private inputs and the current header as public input and then just:
Check that the prevHash in the current header matches the hash of the previous header, then verify the last proof (or return true if it's the genesis block hash).
Just throwing ideas out there; it's a lot weaker but doable today, I expect. Have a muck around with xjsnark, it's a great tool (well, the best DSL for SNARKs at the moment).
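That statement could be sketched in plain Python like this (the `Header`, `H`, and `verify` names are hypothetical stand-ins - in practice `verify` would be a real SNARK verifier and the whole function would be compiled into a circuit):

```python
import hashlib
from typing import Callable, NamedTuple

class Header(NamedTuple):
    prev_hash: bytes   # hash of the previous header
    payload: bytes     # stand-in for the rest of the header fields

def H(hdr: Header) -> bytes:
    # Bitcoin hashes headers with double SHA-256.
    inner = hashlib.sha256(hdr.prev_hash + hdr.payload).digest()
    return hashlib.sha256(inner).digest()

def statement(current: Header, prev: Header, prev_proof,
              verify: Callable, genesis_hash: bytes) -> bool:
    """The recursive statement: either `current` extends genesis, or it
    extends a header whose own proof verifies. `prev` and `prev_proof`
    are the private inputs; `current` is the public input."""
    if current.prev_hash == genesis_hash:
        return True
    return (current.prev_hash == H(prev)
            and verify(prev_proof, H(prev)))
```

Each new proof thus attests to the whole header chain while the verifier only ever touches one header and one previous proof.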
>Are you the RS developer? I'm a big fan of RS. Great work.
Thanks :D It's me and a colleague. Slow work, it's extremely fiddly. Everything works theoretically; it's just getting the tools needed to do it. If the operations were lower level we could just appeal to libsnark directly, but the nature of the problem means we're stuck with decent but not really production-ready tools for programming SNARKs at a higher level. I'm familiar with Coda because I struggled with their incomprehensible DSL Snarky for quite some time (only the developers themselves can make sense of it, I imagine). The project's been on the back burner for a while; I'm going to make another stab at writing the elliptic curve bouncing stuff as soon as I get some time, which is the last piece of the puzzle.
Cool, I'll check out xjsnark. I was trying Snarky but like you said, it's hard to grok. Not knowing much OCaml doesn't help.
> Check that the prevHash in the current header matches the hash of the previous header, then verify the last proof (or return true if it's the genesis block hash).
This is exactly what I'm aiming for for my first PoC. Something that would work if you assume the only rule is that blocks must be linked. Then generate a proof for the current chain tip. That should prove out the concept a little and give a framework to start fleshing out the rest of the non-circuit processes. Then the actual consensus rules can be built up bit-by-bit.
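For that linked-blocks-only PoC, the non-circuit side of the check is just this (using Bitcoin's standard 80-byte header serialization, where bytes 4-36 hold the previous block's hash and the block hash is double SHA-256 of the whole header):

```python
import hashlib

def block_hash(header: bytes) -> bytes:
    """Double SHA-256 of the 80-byte serialized header."""
    assert len(header) == 80
    return hashlib.sha256(hashlib.sha256(header).digest()).digest()

def prev_hash(header: bytes) -> bytes:
    """Bytes 4..36 of a serialized header hold the previous block hash."""
    return header[4:36]

def chain_is_linked(headers) -> bool:
    """The PoC's only consensus rule: every header commits to the hash
    of the header before it."""
    return all(prev_hash(cur) == block_hash(prev)
               for prev, cur in zip(headers, headers[1:]))
```

The circuit version would enforce the same relation one step at a time, with the previous step's proof as a private input.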
>Cool, I'll check out xjsnark. I was trying Snarky but like you said, it's hard to grok. Not knowing much OCaml doesn't help.
I learnt OCaml just for Snarky and it grows on you; the pattern matching stuff is beautiful. But yeah dude, don't stay with Snarky for too long. I was like "it's not you, it's me" for a long time before I gave up - nothing is documented and it's really infeasible to make anything.
>Something that would work if you assume the only rule is that blocks must be linked. Then generate a proof for the current chain tip.
Yeah, this is definitely doable; you should be able to write that up pretty quickly in xjsnark. The only downside of xjsnark is that it has no verifier gadget out of the box... which means you're going to have to write that yourself.
Anyway, if you fancy, drop me a private msg on reddit whenever and I can lend you a hand over some private messenger some time.
Awesome, I think. What's the time frame for a proof-of-concept overlay network for this?
Peter__R (Peter Rizun - Bitcoin Researcher):
If the rules change, then the proofs change too, right? Wouldn't we then need to change the details of the proof system each time we change the rules (a hard or soft fork)? Would not a new node coming online need to trust that the details of proof system they are using to verify the proofs is the correct one? Is this not then like a proof-of-github analogy to the proof-of-work (SPV) security you'd get by trusting the UTXO commitment? I don't see where the extra security comes from.
This is a cool idea with cool math behind it, but unless I'm mistaken it seems to be a lot more valuable in a future where the protocol is truly "locked down" than in a future where the rules are being changed every 6 months.
> As the chain grows in size and the reliance on proofs increases, who is incentivized to keep the entire chain any more? Do we need it for anything?
It's not a consensus change, so if it were implemented now, future forks, if they must happen, would be cleaner: any tampering with the state or divergence of states would be immediately detected by the main players.
Hi Peter, thanks for reading.
> If the rules change, then the proofs change too, right? Wouldn't we then need to change the details of the proof system each time we change the rules (a hard or soft fork)?
Softforks should work just like now. Only miners need to adopt new code to enforce it and existing clients will accept the changes.
Hardforks should also work similarly. New rules would be written as a new circuit. Clients would need to update to use them, the same way they need to update their BU or ABC nodes when there's a breaking change. The circuit would embed the old circuit so it can validate the transition block. Because each proof witnesses the validity of prior proofs, after a rule transition the circuit can drop the previous circuit code.
> Would not a new node coming online need to trust that the details of proof system they are using to verify the proofs is the correct one?
Yes, in the same way they need to trust BU or ABC to implement the code correctly. The proof system itself can be written with a system like [Snarky](https://github.com/o1-labs/snarky) that allows people to review the code and then deterministically compile that into the final circuit.
> Is this not then like a proof-of-github analogy to the proof-of-work (SPV) security you'd get by trusting the UTXO commitment?
I'm not sure exactly what you mean. You trust the code you run and you trust that the security assumptions of ECC are upheld. You do not need to rely on crypto-economics to determine the validity, only which state is the most recent.
> This is a cool idea with cool math behind it, but unless I'm mistaken it seems to be a lot more valuable in a future where the protocol is truly "locked down" than in a future where the rules are being changed every 6 months.
There's still a lot to be investigated but I don't think that needs to be the case. I think this model could handle changes just as well as any full node.
Thanks again for the feedback!
Peter__R (Peter Rizun - Bitcoin Researcher):
Thanks for the response.
OK I've given it more thought and I agree that this could be quite valuable even if the consensus rules are in flux.
> This is scaling Bitcoin (BCH) to infinity!
If this works out the way I think it can it helps a ton, but we'll still have concerns in the future. I think the size of the state itself is going to be the next challenge. We'll have to see how this works with UTXO sharding. With this + UTXO sharding I think we could have more capacity than any existing or proposed payment network.
The other scaling related issue is the propagation of large blocks. zk could probably be used to compress that data as well. It means we can avoid propagation delays from large blocks with poorly propagated transactions, as was the case with the SV block that clogged the network. However there might be a problem with a miner withholding transactions if we're not careful...
Yeah, with a system like the one I outline, miners would only need to share the latest block header + proof, and then other miners could mine on top of it knowing that it's the valid state. So they'd have constant-time/size block propagation and validation. Large blocks/chains wouldn't be an issue in that regard.
> However there might be a problem with a miner withholding transactions if we're not careful...
Yeah these types of things are still unknown. This kind of system would be a large undertaking in research and development.
Miners are incentivized to publish full blocks so that classic-style full nodes will still sync, but as the community adopts proof-clients, classic nodes would become more and more expensive and would decline in use. Eventually the entire network could become state-only and miners would no longer need to publish block contents.
It would be a hardforking change, so they'd need the same amount of support as any other hardfork I think.
Theoretically there's no reason we need transactions for anything other than validating the state, so if everybody is satisfied with using zk-proofs for that then there's no problem. I imagine many people would feel weird about that, so I think it's only something people would want in the long term.
The issue isn't the storage, it's the block propagation. For as long as there are still nodes that validate by executing transactions, miners can only make blocks as big as they can propagate to those nodes. Or they need to decide that there are not enough of those nodes to make it worth keeping that bottleneck and fork away from them.
Once they do that there won't be any block data to store. Transactions will still flood the p2p protocol to make their way to miners, but miners never need to publish which txs they included in a block as long as the state transition is valid.
I thought the problems with block propagation were largely solved with graphene anyways? Doesn't graphene effectively get us pretty close to only needing to share transactions once? And can't we also now build blocks over the entire 10 minute inter-block window?
I'm trying to understand what extra data we're cutting out by going to your scheme. We still have to relay all the transactions to all the miners, right? Or are you suggesting we just start relaying transactions to a single miner and from there miners share proofs only?
I should have said "block validation". The extra data cut out in this scheme is the transactions themselves. Proof-nodes would not need them, only legacy nodes.
> Or are you suggesting we just start relaying transactions to a single miner and from there miners share proofs only?
Transactions should still be sent to many miners to provide the same censorship-resistance we have today. But miners could share proofs only between each other; they don't need to send each other transaction data at all. It's only needed to not fork off legacy nodes.
If transactions are being sent to as many miners as possible to provide censorship resistance I don't know if there's as much savings from sharing only proofs over graphene since most miners would have most transactions anyways. I guess it depends on the size of the proofs. I await the prototypes for sure.
Feel free to reach out in PM if you want help with any of this from a coding or simulation standpoint.
The difference is that txs don't need to get to every miner/full node. They only need to get to enough miners to ensure that one will include your transaction in a block and they don't need to get to any proof-aware full nodes at all.
Thanks for the offer.
The importance of the average person being able to validate the chain state is one of the few things I agreed with Core about, so I'm really hopeful this is a way we can have that while also supporting massive adoption.
1. I meant Core didn't *want* to scale on-chain and famously used the slogan "don't trust, verify" to support its 2nd layer roadmap. That was an obvious excuse because they didn't even try to explore the sea of possibilities before locking BTC's fate with the LN.
2. Didn't V Buterin coauthor a paper which also showed a pretty good compromise already?
> 2. Didn't V Buterin coauthor a paper which also showed a pretty good compromise already?
I'm not sure which paper you mean, but probably. I don't know of any other proposals that have full state validation with sub-SPV-level resources though.
> Are there ways for patent trolls to stunt this being implemented on the BCH chain?
No. The code can be written and run anywhere in the world, and it's not a protocol change. You can do it right now for any blockchain that exists - BTC, BCH and BSV included - but you'll have to write a lot of code.
I don't know of any reason it would be more susceptible to patent trolling than any other tech Bitcoin or other cryptos use. I am not a lawyer though.
If this system works then it will get implemented on some chain somewhere. Patent trolls can't stop that even if they're legally entitled to in some places.
No they're not. I can add invalid transactions (from smelting's point of view) into the blockchain. You're not doing the zk calculations in Bitcoin script; data is added to OP_RETURN and interpreted by nodes.
Ok, maybe "miner validated" is the wrong term. But as I understood it, your SPV wallet will simply not see invalid transactions, as there's a compact zk-SNARK proof in every tx, so even SPV wallets can validate it with only knowledge of the tx inputs.
So if your wallet doesn't see any fake coins, because it can independently see that they're invalid, for me that's practically as good as "miner validated/enforced".
The client still validates the proofs; they are not validated by miners. The property that RS has over other proposals is that the ZKPs let clients avoid traversing the entire graph of the token's transaction history to validate it, so they don't have to mine all that data out of OP_RETURNs spread across a bunch of blocks over time.
This is really the same fundamental principle behind my ZK-Sync proposal. I've been hesitant to start writing, and when RS came out I thought for sure they would propose the same thing as me and I wouldn't have to write this article. But for some reason that gap was never bridged, afaik.
Yeah, I've been interested in using ZK proofs for scaling for a long time, but only in the past couple of months have I pieced together how it would work without making changes to Bitcoin, since I knew that would be a non-starter.
This is indeed very cool. So what exactly can we use this for?
It's not enough to only sync the proof, since we need to know the balance of each address, so the UTXO set is necessary. Can we use this to ensure we're getting the correct UTXO set somehow? Then it's very similar to UTXO commitments and should scale the same (albeit more securely)?
We would use it to create light-weight nodes that have the same security as fully-validating nodes.
Re: getting balances, from the article:
> Clients connect to this network, download and validate the most worked state hash, and are now fully synced and have an “authenticated state” hash with its integrity verified. Clients that want a piece of the state could request it from state-holding nodes on the network. A mechanism would transmit state fragments and proofs-of-inclusion to clients on demand.
The hash of the state could be the same ECMH hash as UTXO commitments use. The proposals complement each other. After you verify that the state hash is valid, you can ask nodes for the entries you care about, and they can provide that part of the state plus a proof that it's committed to in the hash.
This is similar to how SPV nodes sync from block headers and then ask other nodes for transactions and receive a proof of inclusion in the header's merkle root commitment.
Oh, I glossed over the part where you can request proofs-of-inclusion for fragments. I'm not sure I see exactly how they would be constructed. An arbitrary fragment from a block seems hard to encode in a single transition function, no?
Assuming all theory checks out this is a massive improvement! Well done.
> An arbitrary fragment from a block seems hard to encode in a single transition function no?
I missed this part before. What the state would need to encode is mostly the UTXO set data, plus cumulative work and height. UTXO set data can already be encoded easily; this is done for UTXO commitments and we could use the same encoding. If we store the state as a Merkle tree of UTXO commitments, then the state hash could just be the ECMH of walking the tree.
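The useful property of ECMH here is that it's a multiset hash: entries can be added and removed from the commitment incrementally, in any order, without rehashing the whole set. A deliberately insecure toy stand-in using addition modulo a prime (real ECMH sums curve points on secp256k1, and MuHash multiplies 3072-bit residues; this sketch only illustrates the order-independence property):

```python
import hashlib

# Toy modulus - NOT cryptographically sound. Real schemes use an
# elliptic-curve group or a much larger multiplicative group.
P = 2**127 - 1

def elem_hash(entry: bytes) -> int:
    return int.from_bytes(hashlib.sha256(entry).digest(), "big") % P

class MultisetHash:
    """Order-independent set commitment supporting incremental updates:
    adding then removing an entry restores the previous digest."""
    def __init__(self):
        self.acc = 0
    def add(self, entry: bytes) -> None:
        self.acc = (self.acc + elem_hash(entry)) % P
    def remove(self, entry: bytes) -> None:
        self.acc = (self.acc - elem_hash(entry)) % P
```

That incremental add/remove is exactly the delta-update between blocks: spend a UTXO, subtract its hash; create one, add its hash.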
Yeah I was trying to stay relatively concise so I didn't get into the specifics of how the state would be structured. We would need to figure out the best way to represent the state to allow for efficient delta-updates between blocks and efficiently sending clients state they want in a way that allows inclusion proofs. Having a way to do this is a must for the scheme but I think it should be manageable.
Coda uses a merkle tree for their state but that might not be the best way for Bitcoin's state.
If this idea gains traction I'll make some more posts that dive deeper into all the mechanics that I have in mind. I'm also working on a prototype so I'll have to resolve the issue sooner than later.
This is quite impressive. I mean, it's pretty obvious that with gigabyte blocks we would need UTXO commitments (i.e. dumps of the state), but I thought we would need to kind of trust them to be valid (like we trust the genesis hash to be valid). Having a way to prove that they are valid - that is awesome!
Hopefully we can see something like that in BCH.
Yeah, UTXO commitments are useful for allowing clients to find the state without processing blocks, but the commitments only have SPV-level security. Generally SPV is "safe enough" for most things but as we've seen with bad actors like nChain who think miners make the rules, we can't be certain a mining cartel won't try to do things like steal funds.
If we can be secure against this without compromising adoption we should definitely try it IMO.
> with bad actors like nChain
on that note, it's funny to see this now since csw has been going on on twitter about (paraphrasing:) "showing the world how to do spv right, with no fraud proofs, in 2019". i did a double take reading this and wondered if you've shared these ideas with nchain?
> bad actors like nChain who think miners make the rules
Stop. For the longest time this sub's main mantra was that "Miners make the rules because Nakamoto Consensus". When someone like myself would say that's not correct, we'd be downvoted to the abyss.
Random thought, for some reason I think I'm gonna be downvoted hard again, just like I was being downvoted for saying the same thing you are saying now.
Btw, miners can't steal funds even with 90% of hashrate, funds are secured cryptographically. They can only prevent you from spending the money.
They can steal funds from anyone who sells something in exchange for the crypto by reversing the transaction. It's just chargebacks on a massive scale.
Also I wasn't aware subs had mantras. I thought they had users. Maybe you're confused on how Reddit works.
> For the longest time this sub's main mantra was that "Miners make the rules because Nakamoto Consensus". When someone like myself would say that's not correct, we'd be downvoted to the abyss.
I agree. That was one of the most frustrating aspects of this sub. And I think people just let them get away with it because it was seen as piling on Core.
Fortunately I think most of those people left for BSV. Plus I don't think any of the technical people in this community believed that anyway. It was always acknowledged in the technical community that there are tradeoffs to scaling and unless we can make progress towards removing bottlenecks and making use of new technology then the system may lose properties that we find valuable (like trustlessness or decentralization).
The debate with Core was really over 1) whether the network can handle the blocksizes we are likely to see in the near term (it obviously can), and 2) whether we should be trying to optimize and deploy new tech to scale and retain the properties we want, or whether we should just wave the white flag and push everything offchain (which we know doesn't have the properties we want).
> I agree. That was one of the most frustrating aspects of this sub. And I think people just let them get away with it because it was seen as piling on Core.
I think it was a counter argument to the "users need to be running full nodes" narrative.
The point of this sub was: if miners take over, you're quite helpless with your full node anyway. It's the people with a financial incentive to run full nodes that have power (exchanges, merchants, block explorers etc.).
It's not like full nodes are bad, but a few people running Raspberry Pis are less important than keeping TX fees low.
Some people indeed concluded that miners rule the network, but it's not that simple.
> For the longest time this sub's main mantra was that "Miners make the rules because Nakamoto Consensus".
I never said that. I was hesitant to participate in the community because I disagreed with that mentality. I started to participate when I realized that mentality was mostly from a group of miner-supremacists that fundamentally misunderstand Bitcoin, and that there were a lot of reasonable people here that believed in the importance of real NC (including validation) but that they were torn between adoption and widely-available validation.
This philosophical split + the in-your-face threat from nChain that's easy to use as an example is what prompted me to finally start working toward this idea.
> Btw, miners can't steal funds even with 90% of hashrate, funds are secured cryptographically. They can only prevent you from spending the money.
They can if and only if people are not validating the state. That's the entire point of the article. An SPV wallet has no idea if signatures were respected when calculating the state in the blocks they see. Full validation nodes do. Most people don't want to expend the resources required for a fully validating node so they use SPV. That's why I made this proposal.