Randomness exists all around us. Concepts of luck, probability and fate have all historically been intertwined with the concept of randomness. Everything that humans don’t understand or can’t predict tends to be classified as such. We’re also physically engulfed in a sea of randomness. It exists everywhere - from the movement of clouds to behaviors in particles and waves.


And yet, for all its familiarity and abundance, humans have a hard time translating the randomness around us into something that computers can use. When we talk about randomness in a computer system, what we really mean is pseudorandomness, i.e. a simulation of the randomness of the world that is hard to distinguish from “true randomness”.


Randomness plays a huge role in privacy technologies and cryptography. A randomly generated value XORed with a message provides a simple yet powerful encryption scheme. Even the simplest form of private communication between two parties (i.e. symmetric-key encryption, where the parties share a key in advance) requires the shared key to be random. If the shared key isn’t random, the encryption is no longer useful, because anyone who knows how the key was generated can reproduce the key and decrypt the message.
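
As a toy illustration of the XOR scheme above (a one-time pad), here is a minimal Python sketch; the message and variable names are made up for the example:

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

message = b"attack at dawn"
# The key must be truly random and as long as the message (one-time pad).
key = secrets.token_bytes(len(message))

ciphertext = xor_bytes(message, key)
# XORing again with the same key recovers the plaintext.
recovered = xor_bytes(ciphertext, key)
```

If `key` came from a predictable generator instead of `secrets`, anyone who could reproduce the generator's state could recompute `key` and decrypt `ciphertext`.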


While randomness is important in the creation of secure channels of communication between two people, randomness can also play an important role in determining who is communicating with whom. If multiple people are trying to communicate with each other over a channel that has limited bandwidth, randomness can be used to determine a fair order in which the channel will carry the messages.


Randomness is also useful in helping a group of people or computers come to agreement, or consensus. Randomized consensus protocols are such an example. This post will explore the role of randomness in blockchains.


Blockchains are a perfect example of multiple parties trying to come to agreement, to consensus, on a certain update to a global view. The updates are often done in periodic discrete time intervals called rounds.


One thing that constrains consensus is that the internet bounds both the number of messages that can be sent within a certain period of time (i.e. the throughput) and the time it takes for a message to arrive (i.e. the latency). The goal of any blockchain consensus protocol is to come to agreement within these constraints. In a public blockchain where thousands of nodes participate in maintaining the blockchain, these network constraints make it expensive to come to consensus if every node needs to send messages to, and wait for responses from, every other node: the number of messages sent through the network within a round is simply too large. Thus the communication scheme needs to cut down on the number of messages sent through the internet in order to reach consensus in time.
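
A rough back-of-the-envelope calculation makes the scaling problem concrete; the node and committee counts below are hypothetical:

```python
def all_to_all_messages(n: int) -> int:
    # Every node sends to every other node: n * (n - 1) messages per round.
    return n * (n - 1)

def committee_messages(n: int, k: int) -> int:
    # Only a k-member committee gossips among itself, then broadcasts
    # the agreed block to the full network once.
    return k * (k - 1) + n

# Hypothetical numbers: 10,000 nodes, committee of 100.
full = all_to_all_messages(10_000)       # 99,990,000 messages per round
small = committee_messages(10_000, 100)  # 19,900 messages per round
```

Cutting from all-to-all communication to a small committee reduces per-round traffic by several orders of magnitude, which is exactly the optimization the rest of this post is about.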


The way that Bitcoin achieves this agreement (in a process known as Nakamoto consensus) is by using Proof of Work (PoW) as the source of randomness to determine which one block per round will get added to the blockchain, thereby reducing the message-passing overhead. Because PoW is computationally difficult to solve, only the person who first solves the puzzle can add their block to the ledger. The probability that multiple people will solve the puzzle at the same time is low, thus PoW acts as a mechanism that limits the number of messages in the network.
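
A toy sketch of the hash-puzzle idea behind PoW (this is not Bitcoin's actual block format or difficulty encoding; the block payload and difficulty value are invented for illustration):

```python
import hashlib

def mine(block_data: bytes, difficulty_bits: int) -> int:
    """Search for a nonce such that SHA-256(block_data || nonce) falls
    below a target with `difficulty_bits` leading zero bits."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# Hypothetical payload; 16 bits of difficulty takes ~65,000 hashes on average.
nonce = mine(b"block with transactions", 16)
```

Because each hash attempt is an independent coin flip, which miner wins a round is effectively random, and the expense of the search keeps the number of valid blocks (and hence messages) per round small.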


Naturally, any alternative consensus mechanism trying to replace PoW would also need a way to limit the messages passing through the network. The way most Proof of Stake (PoS) protocols approach this problem is by selecting a subset of validators (nodes that maintain/govern the blockchain) to form a subcommittee based on how many coins they have staked. This subset of validators can then reasonably communicate back and forth with each other within the network constraints to achieve agreement in a timely manner.
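
Stake-weighted selection can be sketched in a few lines of Python. The validator names and stakes are invented, and Python's seeded `random.Random` stands in for the unbiasable randomness beacon discussed below; real protocols use cryptographic sampling, not this:

```python
import random

def select_committee(stakes: dict[str, int], k: int, seed: int) -> list[str]:
    """Pick k committee members, each draw weighted by stake
    (sampling with replacement, for simplicity). Every party running
    this with the same seed derives the same committee."""
    rng = random.Random(seed)  # seed would come from the shared randomness source
    validators = list(stakes)
    weights = [stakes[v] for v in validators]
    return rng.choices(validators, weights=weights, k=k)

stakes = {"alice": 40, "bob": 35, "carol": 15, "dave": 10}
committee = select_committee(stakes, k=3, seed=12345)
```

The key property is determinism given the seed: everyone who knows the seed computes the same committee, which is precisely why the seed must stay hidden until the round starts.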


Of course, in order to choose the members of this subset fairly and in such a way that no one knows the members ahead of time, the algorithm must be able to access some fair, unbiasable source of randomness. Once this subset of validators reaches agreement on the next block, this block is broadcasted to everyone else in the network.


An ideal source of randomness that can be used in PoS protocols to select a subcommittee would need to be unbiasable, i.e. no one can tamper with the randomness seed. In order to be unbiasable, the randomness protocol needs to ensure the following:

  • i) there is always some output from the randomness function;
  • ii) the output of the randomness function has not been manipulated. (Note: we will explore the trade-off that emerges between i) and ii), and how it relates to network synchrony models, in a future blog post.)

None of the subcommittee selection protocols that we at Mechanism Labs have analyzed satisfy both properties of being unbiasable. Thus, given the above trade-off, a blockchain consensus protocol should favor a randomness source that always produces some output over one that produces only a tamper-free output. A blockchain protocol needs to ensure that the chain keeps growing, and cannot let the source of randomness be the bottleneck. Even for consistency-favoring protocols, the randomness source should not be the reason the blockchain fails to make progress. If anything, the randomness source should free the protocol to focus on mitigating other progress-halting attacks, such as DoSing of the validator subcommittee.


If the blockchain halted completely due to bias in the randomness output, there would be a huge coordination cost at the social layer to reboot the blockchain. The community would have to agree on what the actual blockchain is through an external social media platform, resulting in long periods of discussion. This cost is comparable to the cost of resolving the DAO hack, and it would also shake the community’s faith in the security of the blockchain protocol. On the other hand, as long as the bias is small and there is a mechanism to recover from the state of bias, the effect of slight bias for a few rounds on the security of the blockchain is low. This is because the in-protocol rewards given out to validators each round in a public blockchain protocol are relatively small. Since there are subcommittee selections every round or epoch (a set of rounds), the bias in the randomness function would have to be significant before a validator could game the protocol for profit and degrade the blockchain’s security.

A lottery game utilizing randomness, in contrast, would absolutely need to ensure that the randomness source was not manipulated, because even a little bias can completely change the winner of the lottery: there is a large immediate reward to gain from tampering with a single outcome. Such a game would thus favor a randomness function that guarantees only tamper-free output, even if it meant that there were times the randomness would halt.


This post talks about the importance of an unbiasable source of randomness in the context of designing blockchain protocols. For the rest of this post, when we consider unbiasable protocols, we favor protocols that do not halt.


The ideal committee selection scheme should i) be unbiasable and ii) only reveal the randomness seed at the start of a new round. A randomness function needs to be unbiasable by the definition discussed above. If the randomness can be manipulated (and there is an in-protocol reward distribution mechanism), this can cause an unfair distribution of rewards, meaning some people get rewarded at a higher rate. Since only those with stake can be validators, and since voting power is proportional to the stake that a validator owns, an unfair distribution will ensure that some subset of validators eventually dominates the blockchain. Thus, the degree to which a protocol is unbiasable determines who can remain part of the network as a validator in the long run. On the other hand, how far ahead of a new round the seed is revealed determines who can be part of the network in the first place. Together, the degree of both these properties determines how decentralized the blockchain is.


Since the protocol for selecting a subcommittee is run every round or epoch, the time from when the output of this randomness function is known to the time when the round or epoch starts is crucial. In this time frame, an adversary can use the output of the randomness function to determine which subset of validators was chosen. If this output were hidden, an adversary trying to attack the blockchain protocol wouldn’t be able to determine the validator subset ahead of time. The length of this time frame determines the strength of the adversary that a protocol can tolerate. A stronger adversary, i.e. an adversary with more computational power and resources to attack the network, will be able to compute the validator subset in a shorter time frame. If the adversary is given sufficient time, the adversary will be able to determine the validator subset by running the committee selection algorithm with the randomness seed for the given round. This enables the adversary to conduct a denial-of-service (DoS) attack on the validator subset for that particular round or epoch, leading to empty blocks and a lack of progress for that time period. Halting of a blockchain even for one round can be devastating. Think of what would happen to Bitcoin if no one in the world could make any transactions for a few hours! Thus, only nodes which are capable of being DoS resistant should become validators in the first place. Indirectly, revealing the seed ahead of time means the consensus protocol tolerates a weaker adversary, thus making the blockchain less decentralized.


Below we take a closer technical look at various randomness generation mechanisms, based on the schemes used by the different blockchain protocols we analyzed in our Meta-Analysis of Alternative Consensus Protocols:


Tendermint:

Tendermint uses a deterministic round-robin scheme to choose proposers; there is no randomness in the protocol. Proposers are elected deterministically, through a heap sort based on voting power and the number of times a validator has been chosen. The protocol can only be biased by adding or removing stake, since that is the only input the adversary can access. The protocol cannot be biased instantaneously because of the long time it takes for a validator to unbond (i.e. remove stake from the system) or bond (i.e. add stake to the system). Nevertheless, an attacker can plan ahead to bias the choice of proposer over a longer time frame.
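
A simplified sketch of this kind of deterministic, stake-proportional rotation, modeled loosely on Tendermint's proposer-priority idea (validator names and powers are invented, and the real implementation differs in detail):

```python
def next_proposer(priorities: dict[str, int], power: dict[str, int]) -> str:
    """Each round, every validator's priority grows by its voting power;
    the highest-priority validator proposes, then 'pays back' the total
    voting power so turns rotate proportionally to stake."""
    for v in priorities:
        priorities[v] += power[v]
    # Break ties deterministically by validator name.
    proposer = max(priorities, key=lambda v: (priorities[v], v))
    priorities[proposer] -= sum(power.values())
    return proposer

power = {"a": 3, "b": 2, "c": 1}
priorities = {v: 0 for v in power}
order = [next_proposer(priorities, power) for _ in range(6)]
```

Over six rounds, `a` (power 3) proposes three times, `b` twice, and `c` once: proposer turns are exactly proportional to voting power, with no randomness anywhere.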


The use of a deterministic algorithm means the "randomness" is known well ahead of every round, and the proposer can be determined ahead of time.


Algorand:

The randomness scheme in Algorand is as follows: each validator v selected as a proposer computes a seed for round r using the equation ⟨seed_r, π⟩ ← VRF_sk_v(seed_{r−1} || r), where sk_v is the secret key of validator v and seed_{r−1} is the randomness seed of the previous round.

The VRF proof π and the seed for round r are contained in the block accepted in round r − 1. If the given VRF proof fails to verify the given seed, then everyone computes the seed for the new round as H(seed_{r−1} || r), where H is a hash function. This means that the secret key of each validator has to be chosen well in advance to ensure that they can’t bias the seed.

When the network is not strongly synchronous, the attacker has complete control over the message-passing links, and can therefore drop block proposals and force users to agree on empty blocks, such that future selection seeds can be computed. Algorand thus requires a weak synchrony assumption: in every period of length u, there must be a strongly synchronous period of length s < u for the protocol to be bias-resistant. As long as s is large enough that at least one honest block is produced within it, an adversary v′ choosing a key sk_v′ cannot bias the randomness seed for round r.

Only once a block has been proposed are the new seed and the VRF proof that verifies it publicly known. This ensures that the proposer and the randomness seed are not leaked ahead of time, which lets Algorand defend against DoS attacks on proposers and remain adaptively secure in a model with erasures, even with instantaneous corruption.
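
The seed-update rule above can be sketched as follows. Python's standard library has no VRF, so a keyed hash stands in for it here; unlike a real VRF, this stand-in produces no publicly verifiable proof π, which is the whole point of the construction:

```python
import hashlib

def vrf_stub(secret_key: bytes, message: bytes) -> bytes:
    """Stand-in for a real VRF: deterministic, key-dependent output.
    A real VRF additionally returns a proof that anyone can check
    against the proposer's public key."""
    return hashlib.sha256(secret_key + message).digest()

def next_seed(prev_seed: bytes, round_no: int, secret_key: bytes) -> bytes:
    # seed_r = VRF_sk_v(seed_{r-1} || r)
    return vrf_stub(secret_key, prev_seed + round_no.to_bytes(8, "big"))

def fallback_seed(prev_seed: bytes, round_no: int) -> bytes:
    # If the VRF proof fails to verify: seed_r = H(seed_{r-1} || r)
    return hashlib.sha256(prev_seed + round_no.to_bytes(8, "big")).digest()
```

Because `next_seed` depends on the proposer's secret key, no one else can compute the seed before the block is published, while the fallback keeps the chain of seeds growing even when a proposer misbehaves.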


Dfinity:

In several protocols, adversaries can abort the protocol to invoke the fallback mechanism and thus introduce bias. The threshold scheme that Dfinity uses is bias-resistant because the threshold is chosen so that an adversary cannot abort the protocol by preventing the threshold signature, from which the randomness seed is derived, from being created. This requires the threshold t to be chosen according to the equation t ∈ [f + 1, n − f], where f is the number of signature shares the adversary controls, n is the total number of shares in the scheme, and t is the threshold of shares required to generate the randomness. This threshold ensures that an adversary can neither predict the outcome of the signature’s creation nor prevent its creation.
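
The threshold condition t ∈ [f + 1, n − f] is easy to check mechanically; the committee size and adversary count below are hypothetical:

```python
def valid_thresholds(n: int, f: int) -> range:
    """Thresholds t with f + 1 <= t <= n - f: the adversary's f shares
    can neither produce a signature alone (t > f) nor block one, since
    the honest parties still hold n - f >= t shares."""
    return range(f + 1, n - f + 1)

# Hypothetical committee: 400 share holders, at most 133 adversarial.
ts = valid_thresholds(400, 133)  # any t in [134, 267] works
```

Any threshold in this range guarantees exactly one signature (and hence one seed) per round: the adversary can neither forge it early nor withhold it.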


One caveat is that if an adversary has more than 50% of all deposits, it would be able to manipulate the final signature and the randomness in any BLS threshold scheme. However, an adversary with such a huge portion of stake enables other adversarial attacks and breaks the fundamental assumptions of the Dfinity protocol.


The randomness seed is revealed once any honest validator has advanced to round r. While the time gap between an honest validator advancing and the new round officially starting is small, this gap is sufficient for an adversary with significant computational resources to identify and DoS proposers. This is the reason Dfinity can only tolerate mildly adaptive adversaries and not instantaneous corruptions.


Thunderella:

In the Random Oracle scheme instantiated with a hash function, a proposer is determined by the inequality H_nonce(pk, q) < D_p, where H is the hash function used as the random oracle, pk is the public key of the validator, q is the given time-step, and the nonce is the source of entropy for the hash function. The nonce is chosen by the proposer of the previous block. The difficulty parameter D_p is set such that a committee member is elected proposer with probability w in a single time-step.
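
The election inequality can be sketched like this, mapping D_p onto a fraction of SHA-256's output range (the choice of SHA-256 and the parameter names are assumptions for illustration):

```python
import hashlib

def elected(nonce: bytes, pk: bytes, timestep: int, w: float) -> bool:
    """A committee member proposes at time-step q when
    H_nonce(pk, q) < D_p, with D_p chosen so election happens
    with probability roughly w per time-step."""
    digest = hashlib.sha256(nonce + pk + timestep.to_bytes(8, "big")).digest()
    value = int.from_bytes(digest, "big")
    difficulty = int(w * 2**256)  # D_p as a fraction of the hash range
    return value < difficulty
```

Because the nonce feeds the hash, whoever picks the nonce has a handle on every election it seeds, which is exactly the bias channel discussed next.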


If the adversary proposes a block, she can bias the nonce used to generate entropy for the hash function of the next round, thus biasing the proposer elected for the next block. However, to mitigate the biasability of the randomness scheme, the same nonce is used in the hash function to select proposers for r rounds, rather than merely to select the next proposer. This makes it computationally harder for an adversary to brute-force a nonce that would make them the proposer for the next r rounds. While this strategy guarantees only a polynomial security loss, it comes with a predictability trade-off, discussed below. The scheme can thus tolerate only a slowly adaptive adversary.

In the above algorithm, reusing the same nonce to seed the hash function lets an adversary predict proposers ahead of time. Since the same nonce is reused as entropy during an epoch, the randomness seed is effectively leaked ahead of the start of each round, enabling an adversary to corrupt or DoS proposers.


Casper FFG:

The randao scheme that Casper FFG plans to use is dependent on the algorithm below:

Every validator, at the start of an epoch, commits to H(H(H(...H(S_v)...))), where S_v is the seed the validator commits to. When a validator proposes a block, the shared randomness is updated as R := R ⊕ (the pre-image of the inner layer of the validator’s hash chain). In a round, a validator can either make a block or abort.

If no randomness is generated in the fallback mechanism, that appears to be a bigger problem than the randomness being predictable or biasable, because then you no longer need 1/3rd of validators to be malicious to halt liveness; one person is sufficient.
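
The commit-reveal structure described above can be sketched as a hash onion plus XOR accumulation; the seed value and chain length here are invented for the example:

```python
import hashlib

def hash_onion(seed: bytes, layers: int) -> list[bytes]:
    """Build the chain seed, H(seed), H(H(seed)), ...; the validator
    publishes the outermost hash as its commitment."""
    chain = [seed]
    for _ in range(layers):
        chain.append(hashlib.sha256(chain[-1]).digest())
    return chain

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

chain = hash_onion(b"\x01" * 32, layers=3)
commitment = chain[-1]

# When proposing, the validator reveals the next-inner pre-image;
# everyone checks it hashes to the previously known layer, then
# mixes it into the shared randomness R.
R = bytes(32)
reveal = chain[-2]
assert hashlib.sha256(reveal).digest() == commitment
R = xor_bytes(R, reveal)
```

The last proposer to reveal sees R before deciding whether to publish or abort, which is the bias channel the fallback mechanism has to handle.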


The seed for the next round is known to the validator who is the current proposer.


Randomness is a crucial part of cryptography and blockchains. Bad randomness schemes can break all notions of security in blockchains by i) halting the blockchain protocol or ii) causing centralization. Thus, it is important to understand the role that randomness plays in blockchain protocols in order to understand how secure a blockchain truly is!


Thank you to Alexis Gauba and Zubin Koticha for their valuable feedback.

Author

Aparna Krishnan
Build things rather than adjust things. Co founder @Mechanism_Labs, UC Berkeley, ex-Head of Education @CalBlockchain, @thielfellowship