Commit 1d20dc3

committed
changed 'and attacker' to 'an attacker' to improve readability and accuracy
1 parent 5807487 commit 1d20dc3

File tree

1 file changed (+1, -1 lines changed)

  • src/content/roadmap/single-slot-finality


src/content/roadmap/single-slot-finality/index.md

Lines changed: 1 addition & 1 deletion
@@ -44,7 +44,7 @@ In this scheme, it is only possible for every validator to vote on a block by di

Since the Ethereum consensus mechanism was designed, the signature aggregation scheme (BLS) has been found to be far more scalable than was initially thought, while the ability of clients to process and verify signatures has also improved. It turns out that processing attestations from a huge number of validators is actually possible within a single slot. For example, with one million validators each voting twice in each slot, and slot times adjusted to be 16 seconds, nodes would be required to verify signatures at a minimum rate of 125,000 aggregations per second in order to process all 1 million attestations within the slot. In reality, it takes a normal computer around 500 nanoseconds to do one signature verification, meaning 125,000 can be done in ~62.5 ms - far below the one second threshold.
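
A back-of-the-envelope sketch of the figures above; the validator count, slot time, and ~500 ns per verification are taken directly from the paragraph, not measured here:

```python
# Check the attestation-processing arithmetic from the paragraph above.
NUM_VALIDATORS = 1_000_000
VOTES_PER_VALIDATOR = 2
SLOT_SECONDS = 16
VERIFY_TIME_NS = 500  # rough single-verification time quoted in the text

signatures_per_slot = NUM_VALIDATORS * VOTES_PER_VALIDATOR
required_rate = signatures_per_slot / SLOT_SECONDS         # 125,000 per second
time_per_batch_ms = required_rate * VERIFY_TIME_NS / 1e6   # work needed each second

print(f"required rate: {required_rate:,.0f} verifications/s")
print(f"time to verify one second's batch: {time_per_batch_ms:.1f} ms")  # ~62.5 ms
```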

- Further efficiency gains could be made by creating supercommittees of e.g. 125,000 randomly selected validators per slot. Only these validators get to vote on a block and therefore only this subset of validators decide whether a block is finalized. Whether this is a good idea or not comes down to how expensive the community would prefer a successful attack on Ethereum to be. This is because instead of requiring 2/3 of the total staked ether, and attacker could finalize a dishonest block with 2/3 of the staked ether _in that supercommittee_. This is still an active area of research, but it seems plausible that for a validator set sufficiently large to require supercommittees in the first place, the cost of attacking one of those subcommittees will be extremely high (e.g. the ETH denominated cost of attack would be `2/3 * 125,000 * 32 = ~2.6 million ETH`). The cost of attack can be adjusted by increasing the size of the validator set (e.g. tune the validator size so the cost of attack is equal to 1 million ether, 4 million ether, 10 million ether, etc). [Preliminary polls](https://youtu.be/ojBgyFl6-v4?t=755) of the community seem to suggest that 1-2 million ether is an acceptable cost of attack, which implies ~65,536 - 97,152 validators per supercommittee.
+ Further efficiency gains could be made by creating supercommittees of e.g. 125,000 randomly selected validators per slot. Only these validators get to vote on a block and therefore only this subset of validators decide whether a block is finalized. Whether this is a good idea or not comes down to how expensive the community would prefer a successful attack on Ethereum to be. This is because instead of requiring 2/3 of the total staked ether, an attacker could finalize a dishonest block with 2/3 of the staked ether _in that supercommittee_. This is still an active area of research, but it seems plausible that for a validator set sufficiently large to require supercommittees in the first place, the cost of attacking one of those subcommittees will be extremely high (e.g. the ETH denominated cost of attack would be `2/3 * 125,000 * 32 = ~2.6 million ETH`). The cost of attack can be adjusted by increasing the size of the validator set (e.g. tune the validator size so the cost of attack is equal to 1 million ether, 4 million ether, 10 million ether, etc). [Preliminary polls](https://youtu.be/ojBgyFl6-v4?t=755) of the community seem to suggest that 1-2 million ether is an acceptable cost of attack, which implies ~65,536 - 97,152 validators per supercommittee.
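
The cost-of-attack figure in the changed paragraph follows from a one-line formula; a minimal sketch, assuming 32 ETH per validator and the 2/3 supermajority threshold stated in the text, with the committee sizes quoted above:

```python
# Supercommittee cost-of-attack arithmetic from the paragraph above.
# Assumptions taken from the text: 32 ETH per validator, 2/3 of the
# supercommittee's stake needed to finalize a dishonest block.
ETH_PER_VALIDATOR = 32
SUPERMAJORITY = 2 / 3

def cost_of_attack(committee_size: int) -> float:
    """ETH an attacker needs to control 2/3 of one supercommittee."""
    return SUPERMAJORITY * committee_size * ETH_PER_VALIDATOR

print(f"{cost_of_attack(125_000):,.0f} ETH")  # ~2,666,667 -> the ~2.6 million figure
print(f"{cost_of_attack(97_152):,.0f} ETH")   # ~2,072,576
print(f"{cost_of_attack(65_536):,.0f} ETH")   # ~1,398,101 -> roughly the 1-2M ETH range polled
```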

However, verification is not the true bottleneck - it is signature aggregation that really challenges validator nodes. To scale signature aggregation will probably require increasing the number of validators in each subnet, increasing the number of subnets, or adding additional layers of aggregation (i.e. implement committees of committees). Part of the solution might be allowing specialized aggregators - similar to how block building and generating commitments for rollup data will be outsourced to specialized block builders under proposer-builder separation (PBS) and Danksharding.
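
A purely structural sketch of the "committees of committees" idea: aggregate within each committee (or subnet) first, then aggregate the committee-level results. The `aggregate` placeholder and the committee size are illustrative assumptions, not the real BLS implementation or actual protocol parameters:

```python
# Hypothetical illustration of layered signature aggregation.
from typing import List, Sequence

def aggregate(signatures: Sequence[str]) -> str:
    # Placeholder: a real implementation would combine BLS signatures.
    return f"agg({len(signatures)})"

def layered_aggregate(signatures: List[str], committee_size: int) -> str:
    """Aggregate within committees first, then aggregate the committee aggregates."""
    committees = [signatures[i:i + committee_size]
                  for i in range(0, len(signatures), committee_size)]
    first_layer = [aggregate(c) for c in committees]  # one aggregator per committee/subnet
    return aggregate(first_layer)                     # second layer: aggregate of aggregates

# Example: 2,000,000 attestations split into committees of 2,048 (sizes are illustrative).
sigs = ["sig"] * 2_000_000
print(layered_aggregate(sigs, committee_size=2_048))
```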

0 commit comments
