The launch of Bitcoin in 2008 created a sensational wave in the financial market. James A. Donald was the first to comment and point out the problem of scalability. He said, “the way I understand your proposal, it does not seem to scale to the required size”. Now, Bitcoin is a decade old and yet …
Guys, I have read all the replies; there are mixed reviews on the Lightning Network. Some like it, while some think that LN alone cannot solve Bitcoin's scalability problem. Let's wait and see where LN goes. Thank you for sharing your views. PEACE!
Hope you guys won't get too mad, but I'm sorry to break it to you: LN is absolutely not the ultimate solution to scaling bitcoin. Why? Drum roll...
LN doesn't scale!
That's right. LN, on its own, cannot scale to a planetary level, including the Internet of Things and such. Sorry, it just doesn't.
The ultimate scaling solution is a mix-and-match of very many scaling technologies. This includes channel factories, sidechains, drivechains, moving private keys off-chain with solutions like OpenDimes, and other things which have yet to be invented/proposed/coded.
All of this combined, and only all of this combined, will scale bitcoin. The OP is just a sensationalized clickbait article, LN awesomeness notwithstanding.
> Sorry, it just doesn't.
Wow, what a smart reason you have for your opinions!
> channel factories
Channel factories will be part of lightning...
> moving private keys off chain with solutions like OpenDimes,
No. OpenDimes have no divisibility, so they are terrible as a medium of exchange.
Drivechains are just a miner-controlled block size increase, because miners must "vote" on the correct history of the sidechain. Sidechains are not scaling solutions, because their transaction-rate increase is bought at the cost of global bandwidth use, particularly in the critical miner-to-miner propagation path.
OpenDimes are not trust-minimized: you are trusting that the OpenDime being given to you is a genuine one from the manufacturer and not some knockoff which leaks its private keys. And the only way to check that the manufacturer is not cheating you (publishing one "open" design and secretly sending a design they can hack when you order from them) is to do a cut-and-choose, i.e. order multiple copies from the manufacturer, then open and validate the manufactured design of all of them except one that you select at random and keep to use. That is a perfectly fine technique in cryptography but is expensive for real-world use.
Trust-minimizing techniques like channel factories and channels are the way to go without sacrificing your financial security. A core observation here is that they require n-of-n multisignatures (i.e., everyone signs), so that a participant cannot be "voted off" and have their money stolen; thus trust in the other participants is minimized.
Anyone have a good succinct link that describes how a lightning node sees the topology of the network, what state a node needs to keep track of and for how long, and an explanation/argument for why this will scale? Preferably an article or video with some technical details, but something that could be digested in 30-60 minutes?
Does anyone also have a good link discussing the current state of the art with a Lightning light-wallet technology like Neutrino?
Current implementations see the entire network of *public* channels and the nodes they belong to. They do not see any *private* channels, nor do they see any nodes which have only private channels and no public ones. That is, if information is broadcast about a node for which no channels are known, the information is ignored. Channel information is broadcast first, followed by information about the nodes.
The recommended time for keeping this information is *2 weeks*. The software should clean up and drop any information for which it has not received updates in 2 weeks. For that reason, a node should broadcast regular updates at this interval about itself and its channels. Broadcasts can happen more often if information about a node or channel changes (e.g., the fees charged for routing), but nodes can also rate-limit the frequency of updates to limit spam/DDoS attempts, and the method of broadcasting gossip is rate-limited for each node on a 60-second timer, so that duplicate broadcasts are not made when the same information is received from multiple peers.
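A minimal sketch of this expiry and batching behaviour (the names and structure here are my own, not from the BOLTs):

```python
PRUNE_AFTER = 14 * 24 * 3600      # drop entries not refreshed in 2 weeks
REBROADCAST_INTERVAL = 60         # gossip is flushed on a 60-second timer

class GossipStore:
    def __init__(self):
        self.entries = {}         # entry_id -> (timestamp, payload)
        self.pending = set()      # ids queued for the next broadcast tick

    def update(self, entry_id, timestamp, payload):
        """Accept gossip only if newer than what we already hold."""
        old = self.entries.get(entry_id)
        if old is not None and timestamp <= old[0]:
            return False          # duplicate or stale: ignore (limits spam)
        self.entries[entry_id] = (timestamp, payload)
        self.pending.add(entry_id)
        return True

    def prune(self, now):
        """Forget anything not refreshed within the 2-week window."""
        stale = [i for i, (ts, _) in self.entries.items()
                 if now - ts > PRUNE_AFTER]
        for i in stale:
            del self.entries[i]
        return stale

    def broadcast_tick(self):
        """Called every 60 seconds: flush queued ids once, deduplicated,
        so hearing the same gossip from 5 peers causes 1 rebroadcast."""
        batch, self.pending = self.pending, set()
        return batch
```

Receiving the same channel update from several peers only queues its id once, which is exactly why the timer-based batching suppresses duplicate broadcasts.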
The network forms an undirected multigraph where the vertices are the lightning nodes and the edges are the channels between them. Each edge, however, has 2 different states depending on the direction a payment is made through it, as each party to a channel individually sets the fees it accepts for forwarding a payment over that channel. The network could instead be considered a quiver (a multi-digraph), where each channel contributes two edges pointing in opposite directions.
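The quiver view can be sketched in a few lines (a toy model; which side's policy governs which direction is simplified here, and all names are my own):

```python
# Each channel contributes two directed edges, one per fee policy.
# Parallel channels between the same pair of nodes make it a multigraph.
class ChannelGraph:
    def __init__(self):
        self.edges = {}  # (source, dest) -> list of fee policies

    def add_channel(self, node_a, node_b, fee_a_to_b, fee_b_to_a):
        # One policy governs payments flowing A -> B, the other B -> A.
        self.edges.setdefault((node_a, node_b), []).append(fee_a_to_b)
        self.edges.setdefault((node_b, node_a), []).append(fee_b_to_a)

    def cheapest_fee(self, source, dest):
        """Best fee among the (possibly parallel) channels in one direction."""
        policies = self.edges.get((source, dest))
        return min(policies) if policies else None
```

Note that `cheapest_fee("alice", "bob")` and `cheapest_fee("bob", "alice")` can legitimately differ, which is the whole reason the directed view matters for routing.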
A third way to consider the network, although not particularly useful for routing payments, is as a DAG (directed acyclic graph), because every channel has a well-defined "start" and "end", given by the lexicographical ordering of the nodes to which the channel belongs. If all nodes are held in lexicographical order, and each edge points from the lexicographically lesser node to the greater one, then no edge can form a cycle, under the assumption (not explicit in the BOLTs, but implied) that both ends of a channel cannot be the same node.
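The acyclicity argument is easy to demonstrate (a toy sketch; short strings stand in for node pubkeys):

```python
def dag_edges(channels):
    """Orient each channel from the lexicographically lesser node to the
    greater one. With no self-channels, any path is a strictly increasing
    sequence of node ids, so it can never revisit a node."""
    edges = []
    for a, b in channels:
        assert a != b, "both ends of a channel cannot be the same node"
        edges.append((min(a, b), max(a, b)))
    return edges

def is_trivially_acyclic(edges):
    # Every edge strictly increases, which rules out cycles outright.
    return all(a < b for a, b in edges)
```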
The primary cause for concern in scaling this is bandwidth, which grows as the size of the network grows. Storage requirements may become cause for concern later down the line, but it is likely that bandwidth will remain the primary obstacle.
A few methods to improve bandwidth usage are in development or under discussion and will make it into the next versions of the specification. Currently, implementations use a naive gossip-forwarding strategy where each piece of information received is individually forwarded to other peers unless they opt out of receiving it. The first improvement will be an inventory-based approach similar to what Bitcoin's P2P network uses: nodes broadcast only a summary of the information (specifically, the id of each node/channel) instead of the full gossip, and each peer then requests the full gossip information for each id it doesn't already know about.
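The inventory-based exchange can be sketched like this (a toy model of the idea, not the actual P2P messages):

```python
class Peer:
    def __init__(self, known):
        self.known = dict(known)   # gossip_id -> full gossip payload

    def inventory(self):
        """Announce only the short ids, not the full payloads."""
        return set(self.known)

    def request_missing(self, inv):
        """Ask only for ids we don't already have."""
        return inv - set(self.known)

    def serve(self, wanted):
        return {i: self.known[i] for i in wanted if i in self.known}

def sync(receiver, sender):
    """One round of inventory-based gossip between two peers."""
    wanted = receiver.request_missing(sender.inventory())
    receiver.known.update(sender.serve(wanted))
    return wanted
```

The bandwidth saving comes from the first step: ids are a few bytes each, so a node that already holds most of the gossip downloads almost nothing.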
Another improvement is related to Erlay, which is being developed for Bitcoin. Instead of gossiping about every piece of information, information is forwarded probabilistically, reducing bandwidth, and each node then uses set reconciliation to figure out which parts of the gossip it is missing. This will likely use the same technologies as Bitcoin (e.g., minisketch).
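The end result of the reconciliation step is just the symmetric difference of the two peers' id sets; a naive stand-in makes that explicit (real minisketch-based reconciliation reaches the same answer while transmitting data proportional to the size of the *difference*, not of the full sets):

```python
def reconcile(ours, theirs):
    """Naive set reconciliation: each side learns which ids it is missing
    and which ids the other side is missing."""
    we_need = theirs - ours      # ids to request from the peer
    they_need = ours - theirs    # ids the peer will request from us
    return we_need, they_need
```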
The bigger picture is that each node will eventually not need to hold a complete network map, but will instead hold a reduced topology and delegate parts of the route finding to other nodes without sacrificing much or any privacy. You will need to check the mailing list for discussions on JIT routing, trampoline routing, rendezvous routing, etc., as these are still in the discussion phase and not yet concrete proposals.
WRT Neutrino, this can already be used, but LN proponents are largely of the opinion that you should be running a full node and validating your own transactions in order to have the right guarantees and no loss of privacy. The concern is that if approaches like Neutrino become "too easy" to use, there will be a lack of enthusiasm for running full nodes, and large portions of the bitcoin network will be based on faulty assumptions about who is responsible for validating transactions.
Not sure what you mean by routing trees, but there was a proposal for a hub/spoke type layout for LN called Flare. This has largely been ignored, as it elevates some nodes (hubs) to a "privileged" status and would be harmful to maintaining a permissionless, equal-access, censorship-resistant and centralization-resilient network.
There was a discussion in March/April 2019 on the lightning-dev mailing list about instead using a spatial index to map your node to a virtual location in the network. Users would select how much of the network they wish to index based on their lexicographical distance to other nodes (taking the public key or PKH as the lexicographical space). There was one proposal for using binary space partitioning and another for using a quadtree index.
The idea in both cases would be that you only keep information about nodes and channels which are lexicographically close to your own. If you need to find a route to node N, you would search through the known nodes in your local route map to find those which are lexicographically closest to N, as they would be more likely to know about N, or to know somebody else who knows N; an efficient route could thus be found without having to randomly query the network or rely on "full map" nodes, which would be the equivalent of privacy-invasive name services.
This is similar to how existing DHTs work (see Chord, Kademlia, etc.). These scale quite well, but still have max route lengths of O(log n), which could mean route lengths of potentially 20 hops if there were a billion nodes. LN supports a maximum of 20 hops, so it isn't entirely out of the question, but my intuition suggests that if routes are commonly 20 hops long, the network would be pretty slow and inefficient. Far from the fast and reliable network we're aiming to make it.
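The O(log n) hop bound of such DHTs is easy to see in a toy model. Here is a minimal Chord-style greedy lookup, with integer ids standing in for node pubkeys (nothing here comes from the LN spec):

```python
def chord_lookup(n, start, target):
    """Greedy lookup on a ring of ids 0..n-1. Each node is assumed to
    know the nodes at distances 1, 2, 4, ... ahead of it (its 'fingers');
    every hop jumps to the known node closest to the target without
    overshooting, so the hop count is bounded by log2(n). Richer routing
    tables shrink the base of the logarithm."""
    node, hops = start, 0
    while node != target:
        distance = (target - node) % n
        jump = 1 << (distance.bit_length() - 1)  # largest power of 2 <= distance
        node = (node + jump) % n
        hops += 1
    return hops
```

Each hop at least halves the remaining id distance, which is where the logarithmic bound comes from.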
> hub/spoke type layout for LN called Flare. This has largely been ignored as it elevates some nodes (hubs) to a "privileged" status
> The idea in both cases would be that you only keep information about nodes and channels which are lexicographically close to your own.
Interesting, so you would basically shard the network information this way and then ask nodes lexicographically close to your destination node if they know how to find it? If they don't, they could ask a potentially even closer node (since they should know closer nodes). Would these nodes store network information around each node they're lexicographically close to as well? I'd expect them to have to, since otherwise simply being aware of the destination node isn't helpful to constructing a route.
Seems like it would be a bit hit and miss, since the lexicography would have nothing to do with network shape.
> max route lengths of O(log n)
That doesn't sound so bad. I'd wonder what the average route lengths are. Even with a billion nodes, a well-connected network shouldn't need much more than 10 hops on average tho (considering a network where 20% of the nodes have 10 channels and the other 80% have 2 channels). Change that to hubs with 100 channels each and that drops down to 5 hops on average. So 20-hop paths really shouldn't be necessary in the vast majority of cases, even if everyone in the world has a few channels.
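For a rough cross-check of figures like these, the textbook random-graph approximation L ≈ ln(n)/ln(⟨k⟩) can be computed directly. This is a generic estimate (not LN-specific, and quite sensitive to the assumed topology; hub-heavy networks typically beat it):

```python
import math

def avg_path_estimate(n, degree_distribution):
    """Back-of-envelope estimate for a random graph: average path length
    is roughly ln(n) / ln(mean degree). degree_distribution is a list of
    (fraction_of_nodes, channels_per_node) pairs."""
    mean_degree = sum(frac * k for frac, k in degree_distribution)
    return math.log(n) / math.log(mean_degree)

# 20% of nodes with 10 channels, 80% with 2 channels, a billion nodes:
hops_small = avg_path_estimate(1_000_000_000, [(0.2, 10), (0.8, 2)])
# Replace the well-connected nodes with 100-channel hubs:
hops_hubs = avg_path_estimate(1_000_000_000, [(0.2, 100), (0.8, 2)])
```

Under these assumptions the estimate gives roughly 16 hops for the first case and roughly 7 for the hub-heavy one, so the exact constants depend heavily on how generously the topology is modeled.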
> Not sure what you mean by routing trees
Well, what do we do to route packets on the internet? Each router has a routing table, and when you connect those routing tables together, they form a routing tree. Of course, DNS and IP are both centralized systems. But there's a reason address routing works so well: unlike lexicographical proximity, IP addresses that are similar to each other are by design closer to each other in the network, because that's how IP addresses are given out.
It seems like this would be relatively ideal, since you could use the destination address alone (with public channel data) to determine an acceptable route. If there were multiple tables and each channel were assigned multiple addresses, shortcuts could even be discovered that could allow near-shortest routes to be constructed.
So why hasn't anyone modeled LN routing on the IP routing we know and love, just with some decentralization added in? Are there fundamental problems with address-routing I'm not aware of?
> Would these nodes store network information around each node they're lexicographically close to as well? I'd expect them to have to, since otherwise simply being aware of the destination node isn't helpful to constructing a route
Yes, the idea is everyone does it - or at least, those who do it signal it as an optional feature so that you know who is worth asking.
> Seems like it would be a bit hit and miss, since the lexicography would have nothing to do with network shape.
The network is initially randomly distributed, but if these proposals were used, the idea would also be to change the way autopilot works to prioritize opening channels to nodes which are nearer to yourself, and de-prioritize those at distance, but whilst maintaining a healthy amount of randomness. If enough participants follow the behaviours, we can influence the network to grow in a certain way which is beneficial for this kind of route finding.
> Well, what do we do to route packets on the internet? Each router has a routing table, and when you connect those routing tables together, they form a routing tree. Of course, DNS and IP are both centralized systems. But there's a reason address routing works so well: unlike lexicographical proximity, IP addresses that are similar to each other are by design closer to each other in the network, because that's how IP addresses are given out.
The problem is, of course, that IP address ranges are centrally assigned. The backbone of internet routing is also handled by just a handful of big companies, who essentially have control over the BGP routing tables. It isn't something you or I could participate in.
In the proposal to use a quadtree on the mailing list, it was mentioned that it can be used to map the lexicographical space of PKHs onto geographical locations on earth. By using a space-filling curve over the surface of a sphere, you can have a distance measurement which corresponds to the depth of vertices in the quadtree. Essentially, the 4 children of each vertex in the quadtree represent a complex offset from the centre of the parent (e.g., +1, -i, 1+i, 1-i or -1, -1+i, -1-i, +i, depending on the orientation of the triangle relative to the reference (root)). The idea would be to brute-force a PKH which approximates your location, which should be possible with ~16 bits, feasible even on a smartphone.
If this kind of approach were taken, each user could store at least the entire route map for their local town/city, which would mean any local payments they make from their smartphones, for example, would be optimal, because they would very likely already know a route to the destination without querying the network. A payment to the other side of the world might take a bit more querying, but those kinds of payments don't need to be as fast as the ones where you're queueing in a shop or bar.
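The key-grinding step could look something like this. A toy sketch under loud assumptions: SHA-256 of a random key stands in for the real hash160(pubkey), and the location code is a crude Z-order interleave rather than the sphere-filling curve from the proposal:

```python
import hashlib
import os

def location_code(lat, lon, bits=16):
    """Map a lat/lon to a 16-bit cell id by interleaving quantized
    coordinates (a toy quadtree-style Z-order code)."""
    half = bits // 2
    x = int((lon + 180) / 360 * (1 << half)) & ((1 << half) - 1)
    y = int((lat + 90) / 180 * (1 << half)) & ((1 << half) - 1)
    code = 0
    for i in range(half):
        code |= ((x >> i) & 1) << (2 * i)
        code |= ((y >> i) & 1) << (2 * i + 1)
    return code

def grind_key(target_code):
    """Try random keys until the hash's leading 16 bits match the target
    cell: ~2^16 attempts on average, cheap even on a smartphone."""
    while True:
        key = os.urandom(32)
        pkh = hashlib.sha256(key).digest()   # stand-in for hash160(pubkey)
        if int.from_bytes(pkh[:2], "big") == target_code:
            return key, pkh
```

Grinding 16 bits means about 65,000 hash attempts on average, which takes well under a second on commodity hardware.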
> change the way autopilot works to prioritize opening channels to nodes which are nearer to yourself
Ah, that does sound like it could work reasonably well. That has a downside tho: nodes would basically be required to connect at least one of their channels, or possibly all of them, to lexicographically close nodes. This would severely limit the use of other channels; for example, if people want to make direct channels with someone who isn't lexicographically close for some reason, finding routes through that channel may be generally infeasible.
> The problem is, of course, that IP address ranges are centrally assigned.
Ok.. but why can't that process be decentralized? What's preventing that?
For example, consider this method of assigning addresses:
1. When a node creates a new channel:
1.a.i. If the channel partners are parts of completely separate, unconnected networks (which would be the case if a previously unconnected node were making its first channel), then each node counts its own network (either using public channel information or using a chain of size queries) and makes a claim for how big it is.
1.a.ii. Each node verifies the other's claim by traversing the public channel-connection information. This can be done probabilistically, so that liars are likely to be caught while substantially reducing the actual traversal necessary.
1.a.iii. The node with the bigger network is assigned as the parent.
1.a.iv. The parent gives the child an address range to manage, from which the child node chooses an address.
1.a.v. That child then declares itself the new parent for its other channels, and the process repeats for each of those channels (and that whole side of the network). The grandchildren can verify this is to-protocol by using the same probabilistic method as in 1.a.ii.
1.b.i. If each channel partner already has connections to the same network (which should be most cases), the one closer to the root is chosen as the new parent, if that would bring the other closer to the root. If it wouldn't bring either closer to the root, then no reorg is done.
2. When a node closes a channel, this information should be propagated up the tree so that parents know how many descendants they should have.
3. When nodes are caught lying about information, they are charged with punishing someone in the chain that the information came from. For example, if the number of descendants a parent has is found to be wrong (probably to a certain threshold degree), the partner who has been given incorrect information can demand that the parent (the information provider) punish someone for it; that parent can demand one of its children punish someone, and so on, until someone is punished (e.g., by channel closing, though more nuanced punishments are possible). This way, there is an economic incentive to be honest, and there is a mechanism to punish attackers who try to disrupt the network by lying.
The above method can be extended into an array of trees, each assigning a different arrangement of parents (e.g., in 1.b.i, nodes which are already a parent in another routing tree would be excluded from being a parent again).
So my question is: am I missing something that fundamentally makes decentralizing address distribution much more difficult than it seems?
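The probabilistic claim-verification step in the proposal above could be sketched like this (hypothetical names throughout; the reachability oracle is a stand-in for actually probing the network):

```python
import random

def spot_check(claimed_members, is_reachable, samples=16, rng=random):
    """Verify a network-size claim by probing a few random claimed
    members instead of traversing the entire network. If a fraction f of
    the claimed members are fake, the lie survives the check with
    probability only (1 - f) ** samples."""
    for node in rng.sample(claimed_members, min(samples, len(claimed_members))):
        if not is_reachable(node):
            return False   # caught lying about network size
    return True

# A claimant doubling its real size (f = 0.5) slips past 16 samples
# with probability 0.5 ** 16, i.e. roughly 1 in 65,000.
```

The sample count trades bandwidth against the chance a liar survives, which is the "probabilistically so that liars are likely to be caught" part of step 1.a.ii.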
> That has a downside tho: nodes would basically be required to connect at least one of their channels, or possibly all of them, to lexicographically close nodes. This would severely limit the use of other channels; for example, if people want to make direct channels with someone who isn't lexicographically close for some reason, finding routes through that channel may be generally infeasible.
If local channels were prioritized too much, it could be a problem.
IMO, the optimal solution would be that your node selects a radius R over which it wishes to receive *all* gossip information. Your node would then open 50% of its channels at a distance 0 < d <= 1R, 25% of its channels at a distance 1R < d <= 2R, 12.5% of its channels at a distance 2R < d <= 3R, and 6.25% of its channels at 3R < d <= 4R etc... A geometric decay of knowledge as distance increases.
You still have knowledge of other nodes at distance because some of the nodes which you know about have channels open to other nodes at those distances (and you know about those nodes because you know about the channels connected to them). The result is that you should know about at least some nodes at every distance, because each node in your local region connects to random remote nodes.
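The channel-opening policy described above can be sketched as follows (a hypothetical helper, not from any implementation):

```python
import random

def pick_channel_distance(R, rng=random):
    """Choose how far away the next channel partner should be: half of
    channels within radius R, a quarter within (R, 2R], an eighth within
    (2R, 3R], and so on: a geometric decay of connectivity with distance."""
    ring = 0
    while rng.random() < 0.5:   # each further ring is half as likely
        ring += 1
    return rng.uniform(ring * R, (ring + 1) * R)
```

Sampling this repeatedly reproduces the 50% / 25% / 12.5% split over distance rings, while still occasionally producing very long-range channels, which is what keeps distant parts of the network reachable.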
I was a bit too tired last night to digest what you were proposing, and I still don't think I understand it. It seems to me that what you are proposing is not really technically feasible, because in order for node A to verify that another node B has a bigger network map, A necessarily needs access to B's network map, which defeats the point, because then both are having to index all of the information anyway.
Ultimately, nobody can verify who has the bigger network map unless they have the entire network map themselves. You effectively have the same kind of problem as Bitcoin, where it is necessary to know of all transactions/blocks in order to know which ones are valid. If you happened to connect into the network and were *eclipse attacked*, then nodes could feed you incorrect information about the state of the blockchain. Bitcoin alleviates this problem by making at least 8 outgoing connections, randomly distributed, to ensure that an attacker cannot isolate a node and feed it incorrect information. It only takes 1 honest connection to make the other 7 dishonest connections a failed attack.
In LN, you cannot place any trust in any individual peer, and should assume from the outset that the peer you are connecting to is most likely malicious. Information from a single node must be assumed to be inaccurate. Only by taking multiple sources of information can you gain an accurate view of the network under the assumption that the sources are not collaborating to be unanimous in their inaccuracy.
The way Lightning assigns identifiers to nodes is via their public key, but importantly, all of that information is signed by the private key corresponding to the public key. A node cannot forge information about some other node because it does not have the private key to produce a valid signature. Only each node individually can publish properties about their own node which cannot be modified by other gossiping nodes who forward the information. An attempt to modify would produce an invalid signature, and would result in blacklisting of the node broadcasting it.
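The accept-or-blacklist logic can be sketched as follows. A loud caveat: real node announcements are signed with secp256k1 keys, which the Python standard library lacks, so an HMAC with a per-node secret stands in for the signature here; the control flow, not the crypto, is the point:

```python
import hashlib
import hmac

def sign(node_secret, payload):
    """Stand-in for a secp256k1 signature: only the holder of the
    node's secret can produce a tag that verifies."""
    return hmac.new(node_secret, payload, hashlib.sha256).digest()

def accept_gossip(keys, origin, payload, sig, forwarder, blacklist):
    """Gossip about `origin` is accepted only if origin's own signature
    over the payload verifies; a peer forwarding tampered gossip is
    blacklisted, since honest peers relay signatures unmodified."""
    expected = sign(keys[origin], payload)
    if not hmac.compare_digest(expected, sig):
        blacklist.add(forwarder)
        return False
    return True
```

A forwarder cannot alter another node's advertised fees without invalidating the signature, which is why gossip can be relayed through untrusted peers at all.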
IMO, we should be steering clear of anything which suggests "hierarchy" in the network. There should never be anything which allows node B to *allocate a resource* to node A, if the same cannot be done exactly in reverse, where A has the full capacity to allocate the same resources to B (the resource being address ranges). I'm not sure what kind of value you are intending to use for these addresses either - but my intuition tells me that the only way it can be done on a distributed network without centralization, is to have unforgeable addresses in the form of cryptographic hashes or public keys anyway.
Also, in regards to IP: although it does work effectively at scale, it is certainly not immune to attacks, which are often state-level. The Great Firewall is a good example. The idea of a mesh network is to be able to route around these attacks effortlessly, because it should only take a single good route to make attempts to block other routes a wasted effort.
Parts of the network could be scammed because they'll have falsely trusted miners to follow consensus rules, and the miners may have other plans.
A neutrino node doesn't even know if it's on the right chain.
The $50 might have some immediate use, as malicious activity by the miners would cause a chain split, and the users of those wallets would be none the wiser until they realize that their currency is rapidly losing purchasing power against *bitcoin proper*. Savvy users would attempt to exchange their *bitcoin improper* for *bitcoin proper* as quickly as possible, and some may even make a profit from arbitrage. The majority of users will end up losing any money they spent on the *bitcoin improper* chain, as it would become worthless once the market decides that only *bitcoin proper* is worth using and the miners' shitfork is abandoned. They wouldn't be able to recover that money on the *bitcoin proper* chain, as the recipients of those funds on *bitcoin improper* would replay the transactions on *bitcoin proper*.
The users of the wallets which were negatively affected by the chain split will completely lose faith in their developers and will go out in search of a wallet which is not vulnerable to the same attack. Wallets which are designed around weak SPV clients will be very unpopular if such an attack is ever attempted.
This may actually be a good thing, because it will generate mass awareness of the necessity of validating one's own transactions and the chain on which they're spent. However, the damage from the loss of funds which would occur in the process is probably not worth it when we could just do things correctly from the beginning and ship a full node with every lightning wallet.
Thank you for this. It is much appreciated. I will research further from here. It is difficult to commit one's faith in a technology when there is much complexity. I would like to convince myself either way.