“A second argument in favor of layer 2 solutions, one that does not depend on speed of anticipated technical development: sometimes there are inevitable tradeoffs, with no single globally optimal solution.”
ETH shills like to call any blog post by Vitalik explaining what everybody knows a "master class".
This is how they build a pretentious image of knowledge on top of shallow ideas and insecure bullshit such as Ethereum. https://t.co/bLtvECi3xv
Vitalik's TLDR commentary on blockchain Layer 1 // Layer 2:
"as blockchains become more and more mature, layer 1 will necessarily stabilize, and layer 2 will take on more and more of the burden of ongoing innovation and change"
Full post: https://t.co/dUmGmWxzxE @VitalikButerin
So it will continue to be true that a balance between layer 1 and layer 2 improvements is needed to continue improving scalability, privacy and versatility, though layer 2 will continue to take up a larger and larger share of the innovation over time.
>actively competing on all of these improvements; it would simply provide a stable platform for the layer 2 innovation to happen on top.
>Perhaps, but we must not jump to conclusions yet. I think we should wait for their layer 2's official release.
All these problems are a consequence of centralized protocol development.
The gas limit is the single biggest source of the problems he listed, as weird as that sounds. It forces people to wait for a fork for real improvements.
Ethereum appears to be slowly falling into the same centrally planned hole of stagnation that Bitcoin Core fell into, based on the idea that users are dumb and would hurt themselves without arbitrary limits.
Even if the gas limit is necessary during the PoW phase as a way to prevent mining centralization, that won't be true under PoS, since blocks that are too big just won't get finalized. Stakers, unlike miners, are also actually invested in the future of Ethereum.
Now Zcash and other blockchains are moving toward BLS12-381, and Ethereum would need to fork again to catch up.
Without the gas limit, verification could be implemented in the EVM as a new contract, but executed on nodes as much faster compiled code that is functionally equivalent. Block generators would know that the theoretical gas cost is irrelevant and care only about real execution time.
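The idea above can be sketched as a dispatch table: a node recognizes a well-known verification contract by the hash of its bytecode and runs an equivalent compiled implementation instead of interpreting the EVM code. This is a minimal illustration, not Ethereum's actual client architecture; all names here (`register_native`, the placeholder bytecode) are hypothetical.

```python
import hashlib

def slow_evm_interpret(bytecode: bytes, data: bytes) -> bool:
    """Stand-in for interpreting contract bytecode opcode by opcode.
    (Real EVM interpretation is far more involved and far slower.)"""
    return hashlib.sha256(bytecode + data).digest()[-1] % 2 == 0

# Registry mapping bytecode hashes to functionally equivalent compiled code.
NATIVE_IMPLS = {}

def register_native(bytecode: bytes, impl) -> None:
    """Node operator ships a fast native implementation for known bytecode."""
    NATIVE_IMPLS[hashlib.sha256(bytecode).hexdigest()] = impl

def execute_verification(bytecode: bytes, data: bytes) -> bool:
    """Take the fast native path when the bytecode is recognized,
    otherwise fall back to generic (slow) interpretation."""
    impl = NATIVE_IMPLS.get(hashlib.sha256(bytecode).hexdigest())
    if impl is not None:
        return impl(data)                      # fast compiled path
    return slow_evm_interpret(bytecode, data)  # generic fallback

# Example: register a native stand-in for a hypothetical verifier contract.
verifier_bytecode = b"\x60\x00\x60\x00"  # placeholder bytecode
register_native(verifier_bytecode,
                lambda data: slow_evm_interpret(verifier_bytecode, data))
```

The point is that consensus only requires the two paths to be functionally equivalent; how fast a node gets to the answer is its own business, which is exactly why a per-opcode gas accounting looks irrelevant from the block generator's perspective.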
Problems with sharding are also a consequence of an arbitrary design decision: that block generators must be able to run on really weak hardware. At the same time, they are expected to lock up 32 ETH, possibly worth way, way more than a powerful PC in the future. What's the logic here?
Bitcoin has the argument that everyone should be able to verify everything on one pc. Sharding throws that away already.
A desktop CPU like the Ryzen Threadripper 2990WX has 64 threads (32 cores). Add an NVMe SSD and one PC can execute 64 transactions in parallel. Full PoS eliminates PoW's exponential block-time distribution, so one block can safely take >10x longer to verify. That gives ~500M tx/day with no sharding needed, and synchronization becomes a relatively trivial problem of a local mutex.
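The ~500M tx/day figure checks out as a back-of-envelope calculation, under one assumption not stated in the comment: roughly 11 ms of execution time per transaction per thread.

```python
# Rough sanity check of the ~500M tx/day claim.
threads = 64          # Threadripper 2990WX: 32 cores / 64 threads
tx_time_s = 0.011     # assumed per-transaction execution time (~11 ms)

tx_per_second = threads / tx_time_s
tx_per_day = tx_per_second * 86_400

print(f"{tx_per_second:,.0f} tx/s -> {tx_per_day:,.0f} tx/day")
```

With a faster or slower per-transaction cost the total scales linearly, so the claim really hinges on that assumed 11 ms figure, not on the core count.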
Given how fast core counts have started increasing, 1024-core desktop CPUs are likely to appear within a decade. At that point the error in assuming that one node should be treated as single-threaded will become apparent.
This also would be solved by itself without the gas limit - as block generators could develop and use faster parallel validation without asking devs for permission.
It won't be allowed - because of the unspoken assumption that stakers are dumb and will throw away decentralization, the real source of the value of the system, in dumb pursuit of the short-term throughput.
The biggest risk I see is for block producers and validators to need so much computing power that they can be easily identified as such on the network, and their hardware, which is not easily replaced, confiscated. **Identification ties into hierarchical institutions like the state and leads to central coordination and manipulation.**
Processors (block producers) should not have any identifiable way to tie their actions to their government-issued identities, because that by extension puts control back in the hands of those states.
Stake isn't tied to identities unless it was bought or sold on a centralised exchange. Stake should therefore also not be tied to rewards that come from validating blocks which could be considered to contain illegal transactions in the country where those rewards are sold on an exchange with a KYC process.
>The biggest risk I see is for block producers and validators to need so much computing power, that they can be easily identified as such on the network and their hardware which is not easily replaced, to be confiscated
But that's exactly this:
>assumption that stakers are dumb and will throw away decentralization, the real source of the value of the system, in dumb pursuit of the short-term throughput.
Why? It's an obvious Schelling point to limit throughput to a single high-end PC. There's no need for central planning to achieve that.
It's the same assumption that makes the world increasingly authoritarian, yet at the same time Switzerland with its direct democracy is the most stable and possibly the most free (at worst, second) country in the whole world.
It turns out if you allow normal people to take direct responsibility they don't actually vote like idiots.
No offense to the Swiss, but I would expect average ethereum staker to be way more informed than average Swiss voter.
If you are new to our subreddit, I will try to summarize how this relates to Raiden.

* Raiden is a layer 2 (aka L2) scaling solution for Ethereum. In other words, it uses Ethereum as layer 1 (aka L1).
* Raiden is a payment channel network.
Difference between state channels and payment channels:
State channels are generalized payment channels. While payment channels can only track the state of payments between two parties, state channels can track the state of anything you want between two parties.
Or we can borrow an answer from StackExchange: "A payment channel is essentially a subset of a state channel. (The state being tracked is how much currency is owed to each participant.)"
Payment channels are simpler than state channels, yet even payment channels are not easy to implement.
State channels are still heavily under research, and I don't know of anyone really close to a finished product (maybe someone can help me out by pointing to a project that is). Payment channels are still heavily researched too, but there are some payment channel networks in the crypto space that are available and working. The most popular payment channel network currently is the Lightning Network on Bitcoin.
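To make the payment-channel idea concrete, here is a minimal sketch of the off-chain state two parties exchange. It is deliberately simplified: the on-chain deposit and settlement contracts are omitted, and the cryptographic signatures on each update are left out; only the balance-tracking and the "highest nonce wins" rule are shown.

```python
from dataclasses import dataclass

@dataclass
class ChannelState:
    """Off-chain state of a two-party payment channel (simplified sketch)."""
    balance_a: int   # party A's current claim on the locked funds
    balance_b: int   # party B's current claim on the locked funds
    nonce: int = 0   # on settlement, layer 1 honors the highest-nonce state

    def pay(self, from_a: bool, amount: int) -> None:
        """Exchange one signed balance update off-chain; no layer 1 tx needed."""
        if from_a:
            assert self.balance_a >= amount, "insufficient channel balance"
            self.balance_a -= amount
            self.balance_b += amount
        else:
            assert self.balance_b >= amount, "insufficient channel balance"
            self.balance_b -= amount
            self.balance_a += amount
        self.nonce += 1  # newer state supersedes older ones at settlement

# Open a channel with 100 locked on each side, make two off-chain payments,
# then either party could submit the final state to layer 1 to settle.
ch = ChannelState(balance_a=100, balance_b=100)
ch.pay(from_a=True, amount=30)
ch.pay(from_a=False, amount=5)
```

A state channel generalizes this by replacing the two balance fields with arbitrary application state (a chess board, an escrow condition, etc.), which is exactly the "subset" relationship the StackExchange quote above describes.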
OK, now it might be a little easier to understand the article :).
My personal top 2 quotes are:
"However, in the longer term, I do think that as blockchains become more and more mature, layer 1 will necessarily stabilize, and layer 2 will take on more and more of the burden of ongoing innovation and change."
I have shared this opinion ever since I learned that you can build layers on top of blockchains. It makes sense because every technology you know today is built in layers; you just can't achieve maximum potential in a single-layer architecture. I would just like to add that I personally don't see the development of blockchain tech stopping at the second layer. In the future I see L3, L4, ..., Ln.
"Additionally, layer 1 can still improve on reducing latency; if layer 1 is slow, the only strategy for achieving very low latency is state channels, which often have high capital requirements and can be difficult to generalize. State channels will always beat layer 1 blockchains in latency as state channels require only a single network message, but in those cases where state channels do not work well, layer 1 blockchains can still come closer than they do today."
VB goes straight to state channels in his article, which is completely understandable (he is focusing on the end result). I personally don't believe the tech will be able to go for state channels directly; every technology progresses step by step. That is why I think payment channel tech first needs to be well established and researched, and only then will the focus switch toward successful implementation of state channel tech.
Hope my little explanation helps someone better understand this article and its importance for Raiden. My personal biggest takeaway from this article I would sum up like this: "We are not crazy; L2 scaling solutions are the future of blockchain tech".
No. For many years already, no upgrades requiring hard forks have even been considered for BTC. Ethereum's road map still includes hard forking to PoS and sharding, and hard forking again later to upgrade to more advanced versions of them.
BTC's layer 1 is already pretty much set in stone, and it's far less complex than Ethereum's, even though the latter is going to get huge changes done to it that further increase its complexity.
It seems you are either trying to mislead people on purpose or just don't know what the heck you are talking about.
Eh, this wasn't about hard forks or soft forks or whatever. The article is about complexity at different layers of the protocol. From what I remember, Vitalik used to be a strong proponent of "fat protocols", where many of the things we now consider "layer 2" would be part of the base protocol, and the base protocol would be capable of handling thousands or millions of transactions per second.
Now, he seems to be acknowledging that "layer 2" scaling is a better approach.
Which has been BTC's approach to scaling, as well. Different complexities in the base protocol, obviously, but it seems Vitalik is converging on some of the same concepts that went into defending and pushing for Segwit and Lightning (i.e. expanding the functionality of the base protocol to enable "layer 2" scaling solutions).
Yeah, I think I remember him saying that a plasma chain shouldn't hold more value than the main chain. That would mean it's important for the main chain to scale enough to maintain a higher proportion of value than any of the layer 2 solutions it supports.
Of course I read it.
> In the long run, layer 1 would not be actively competing on all of these improvements; it would simply provide a stable platform for the layer 2 innovation to happen on top.
This is nearly verbatim one of the arguments used by people supporting Segwit/Lightning on Bitcoin, for example.
Actually, what Vitalik is saying is quite necessary even if we wanted it to be otherwise: you can have any number of quick improvement attempts on layer 2; you can't possibly have them all tested at the same time on layer 1 without multiplying the risks of each of those attempts and very quickly getting a constantly unstable layer 1 with no actual production value.
In the end, all of these improvement attempts will benefit layer 1 once proven useful and stable enough (when successful) or bring better clarity about the issues they faced (when failed). But the unbounded number of improvement attempts that can be developed at the same time is the key here: it's impossible to develop all of them at the same time on layer 1 and keep it productive (with controlled risks).
> Actually, what Vitalik is saying is quite necessary even if we wanted it to be otherwise: you can have any number of quick improvement attempts on layer 2; you can't possibly have them all tested at the same time on layer 1 without multiplying the risks of each of those attempts and very quickly getting a constantly unstable layer 1 with no actual production value.
You're making my point for me. All of those supporting arguments are reasons why Bitcoin is super conservative about changes to the protocol.
We're not disagreeing on this point.
> In the end, all of these improvement attempts will benefit layer 1 once proven useful and stable enough (when successful) or bring better clarity about the issues they faced (when failed). But the unbounded number of improvement attempts that can be developed at the same time is the key here: it's impossible to develop all of them at the same time on layer 1 and keep it productive (with controlled risks).
Now, this I disagree with. I've seen 0 evidence or statements that layer 2 scaling solutions will ever be added to layer 1. If you have a citation, that would be great.
Oh noes, a Twitter-length reply didn't accurately reflect the nuance of an entire article! Who would have thought such a thing could happen! /s
More seriously though... Duh? It still says exactly what I said it says, and nothing I've said is an inaccurate reflection of the content of the article.
Point out where what I've said contradicts the article... While also keeping in mind the "nuance" aspect.
Don't worry, I'll wait.
I quoted a section of the text... what would make you think I would stop reading there, as well? The dude asked me if I read the article. I remembered that line, and went back and found it, because it's further illustrating my point.
I'm not saying that BTC or Ethereum have the same complexity in their protocol, or that they should. I'm more commenting on the fact that it seems Vitalik has come to agree with things he used to pretty strongly disagree with. He used to be a pretty big proponent of a "fat protocol", where all the functionality goes into the base layer. Remember Whisper and Swarm?
Now he seems to be coming to a more moderate position.