We can actually scale asset transfer transactions on ethereum by a huge amount, without using layer 2’s that introduce liveness assumptions (eg. channels, plasma), by using ZK-SNARKs to mass-validate transactions. Here i…
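(A rough sketch of the kind of batching the post describes, for readers who want something concrete. The field widths and packing format below are purely illustrative assumptions, not the post's exact encoding; the idea is just that each transfer becomes a few bytes of on-chain data plus one SNARK proving the whole batch is valid.)

```python
# Hypothetical compact encoding for one batched transfer:
# 3-byte sender index, 3-byte recipient index, 2-byte amount (illustrative widths).
def pack_transfer(from_idx: int, to_idx: int, amount: int) -> bytes:
    return (from_idx.to_bytes(3, "big")
            + to_idx.to_bytes(3, "big")
            + amount.to_bytes(2, "big"))

def pack_batch(transfers) -> bytes:
    # A relayer would publish this blob on chain together with a SNARK proving
    # that applying all transfers to the old state root yields the claimed new
    # state root (proof generation itself is omitted here).
    return b"".join(pack_transfer(f, t, a) for f, t, a in transfers)

batch = pack_batch([(1, 2, 500), (7, 3, 120)])
print(len(batch))  # 8 bytes per transfer instead of a full signed transaction
```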
VeChain is already doing batched transactions (MMP) at 500 tx/s on their mainnet.
Can somebody summarize what it says?
I understand transactions could be batched together more efficiently.
Can this work with transactions going to many different addresses, or only if those go to the same contract / address?
No, they would not. SNARKs are used here for scalability, not privacy. Unfortunately, adding strong privacy into this scheme would likely reduce gas efficiency by ~8x. Though that may not be that bad, given that strong privacy has efficiency tradeoffs around that level no matter what you do.
I'm sorry if this is a dumb question but is that because you would need to add more to the blockchain to implement strong privacy and gas usage is proportional to the size of the data written to the blockchain?
> I'm sorry if this is a dumb question but is that because you would need to add more to the blockchain to implement strong privacy and gas usage is proportional to the size of the data written to the blockchain?
It's because for strong privacy, the indices and transaction amounts would need to be "encrypted", and an encrypted value would need to be 32 bytes. Though maybe it's possible to make it 32 bytes for the entire data, in which case the complexity would be pushed into the SNARK proof computation.
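(A back-of-the-envelope sketch of why encrypting the per-transaction fields blows up the data. The plaintext widths are the same illustrative assumptions as in the sketch above, and the gas accounting is not modeled, so the ratio only lands in the same ballpark as the ~8x figure mentioned earlier.)

```python
# Illustrative comparison; field widths are assumptions, not the post's exact numbers.
plain_tx_bytes = 3 + 3 + 2           # sender index + recipient index + amount
encrypted_fields = 3                 # each index and the amount encrypted separately
encrypted_tx_bytes = encrypted_fields * 32   # each ciphertext is 32 bytes

print(encrypted_tx_bytes / plain_tx_bytes)   # ~12x more data per transfer, before
                                             # counting per-transaction overhead
```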
It's a very old idea, iirc older than bitcoin (only theoretical then). The main problems were always proving performance and the trusted setup. Public data arguably makes a multiparty trusted setup not a big issue. All currently practical solutions are also not quantum secure.
It also makes sharding pointless, as all scaling problems reduce to relatively trivial decentralized storage with many existing solutions. I hope this doesn't become yet another pivot (like beacon chain from hybrid PoS)...
I would argue proving performance is not a big deal in the long run; there have been large gains recently now that SNARK/STARKs are The Big New Thing, and if there's usage we can outsource proof generation to the mining industry and their GPU farms.
And it's not true that it "reduces to decentralized storage"; it reduces to _scalable validation of data availability_, which is still a hard problem and requires some kind of "sharded" setup to solve.
Also, this is not a pivot, it's a layer 2 along with all the other layer 2's.
>I would argue proving performance is not a big deal in the long run
Do you know how fast recursive zk-SNARK validation is now, on the BLS12 curve (the one Zcash is switching to)? Is it reasonably practical now?
>And it's not true that it "reduces to decentralized storage"; it reduces to scalable validation of data availability, which is still a hard problem
Zk-snarks allow every shard validator to prove that they indeed have the entire required state. Then the security assumption becomes that, for some n, n validators aren't going to cooperate to hide the data from the public. Validators can be shuffled. What's the hard problem in this design?
Zk-snarks also allow using rateless erasure codes for much better transmission, as every part can be proven to be correct. Unless it's in the multiple-TB range, I don't see a reason for sharding at all.
> Unless it's in the multiple TB I don't see a reason for sharding at all.
Currently the ethereum blockchain's data rate is ~25 kB per 15-second block, or ~50 GB per year. If we want to increase capacity by a factor of 1000, that becomes 50 TB per year. With the optimizations described here, that could go down to ~5 TB, but then if we have that much scalability we may as well include strong privacy support, which would push the numbers back up to ~50 TB. So yes, we are interested in literally scaling the blockchain to multiple terabytes.
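(Spelling out the arithmetic above. The inputs are the figures from the comment itself, not new measurements, and the ~10x reduction factor is simply inferred from the 50 TB → ~5 TB numbers quoted.)

```python
SECONDS_PER_YEAR = 365 * 24 * 3600

block_bytes = 25_000          # ~25 kB of chain data per block
block_time = 15               # seconds per block

bytes_per_year = block_bytes / block_time * SECONDS_PER_YEAR
print(bytes_per_year / 1e9)                 # ~52 GB/year at current throughput

print(bytes_per_year * 1000 / 1e12)         # ~52 TB/year at 1000x capacity
print(bytes_per_year * 1000 / 10 / 1e12)    # ~5 TB/year with ~10x smaller per-tx data
```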
> Then the security assumption becomes that for some n, n validators aren't going to cooperate and hide the data from the public. Validators can be shuffled. What's the hard problem in this design?
This requires every validator to actually have all the data. The design you *could* have is to require randomly sampled subsets of validators to prove ownership of different subsets of data, but then that *is* sharding.
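(A minimal sketch of the random-sampling setup being described, i.e. each validator only proving possession of a pseudo-randomly assigned subset of the data, reshuffled per epoch. The chunk count, sample size, and the hash-based "proof" are all illustrative stand-ins, not a real proof-of-custody scheme.)

```python
import hashlib
import random

NUM_CHUNKS = 1024        # data split into erasure-coded chunks (illustrative)
SAMPLE_SIZE = 32         # chunks each validator must prove it holds (illustrative)

def assigned_chunks(validator_id: bytes, epoch: int) -> list[int]:
    # Deterministic pseudo-random assignment, so validators can be "shuffled"
    # each epoch as suggested above.
    seed = hashlib.sha256(validator_id + epoch.to_bytes(8, "big")).digest()
    rng = random.Random(seed)
    return rng.sample(range(NUM_CHUNKS), SAMPLE_SIZE)

def prove_possession(chunks: dict[int, bytes], validator_id: bytes, epoch: int) -> bytes:
    # Stand-in for a real custody proof: hash the assigned chunks together.
    h = hashlib.sha256()
    for i in assigned_chunks(validator_id, epoch):
        h.update(chunks[i])   # raises KeyError if the validator lacks a chunk
    return h.digest()
```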