Commit 13ac373

Merge pull request #3870 from fradamt/no-peer-sampling

Spec without peer sampling

2 parents: d8cbca7 + 04ee34c

7 files changed: +250 -135 lines

configs/mainnet.yaml

Lines changed: 2 additions & 3 deletions

```diff
@@ -162,11 +162,10 @@ WHISK_PROPOSER_SELECTION_GAP: 2
 # EIP7594
 NUMBER_OF_COLUMNS: 128
 MAX_CELLS_IN_EXTENDED_MATRIX: 768
-DATA_COLUMN_SIDECAR_SUBNET_COUNT: 32
+DATA_COLUMN_SIDECAR_SUBNET_COUNT: 128
 MAX_REQUEST_DATA_COLUMN_SIDECARS: 16384
 SAMPLES_PER_SLOT: 8
-CUSTODY_REQUIREMENT: 1
-TARGET_NUMBER_OF_PEERS: 70
+CUSTODY_REQUIREMENT: 4

 # [New in Electra:EIP7251]
 MIN_PER_EPOCH_CHURN_LIMIT_ELECTRA: 128000000000 # 2**7 * 10**9 (= 128,000,000,000)
```
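One consequence of raising `DATA_COLUMN_SIDECAR_SUBNET_COUNT` from 32 to 128 while `NUMBER_OF_COLUMNS` stays at 128 is a one-to-one column-to-subnet mapping. A quick illustrative check (not part of the commit; the modulo mapping `column_index % DATA_COLUMN_SIDECAR_SUBNET_COUNT` is taken from the das-core column gossip section):

```python
# Illustrative check (not part of this commit): how many columns each
# gossipsub subnet carries, before and after this change, assuming
# subnet_id = column_index % DATA_COLUMN_SIDECAR_SUBNET_COUNT.
NUMBER_OF_COLUMNS = 128

OLD_SUBNET_COUNT = 32   # before this commit
NEW_SUBNET_COUNT = 128  # after this commit

old_columns_per_subnet = NUMBER_OF_COLUMNS // OLD_SUBNET_COUNT  # 4 columns per subnet
new_columns_per_subnet = NUMBER_OF_COLUMNS // NEW_SUBNET_COUNT  # 1 column per subnet
```

With one column per subnet, custodying a column and subscribing to its subnet become the same thing, which is what allows the spec to sample at the subnet level rather than per peer.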

configs/minimal.yaml

Lines changed: 2 additions & 3 deletions

```diff
@@ -161,11 +161,10 @@ WHISK_PROPOSER_SELECTION_GAP: 1
 # EIP7594
 NUMBER_OF_COLUMNS: 128
 MAX_CELLS_IN_EXTENDED_MATRIX: 768
-DATA_COLUMN_SIDECAR_SUBNET_COUNT: 32
+DATA_COLUMN_SIDECAR_SUBNET_COUNT: 128
 MAX_REQUEST_DATA_COLUMN_SIDECARS: 16384
 SAMPLES_PER_SLOT: 8
-CUSTODY_REQUIREMENT: 1
-TARGET_NUMBER_OF_PEERS: 70
+CUSTODY_REQUIREMENT: 4

 # [New in Electra:EIP7251]
 MIN_PER_EPOCH_CHURN_LIMIT_ELECTRA: 64000000000 # 2**6 * 10**9 (= 64,000,000,000)
```

pysetup/spec_builders/eip7594.py

Lines changed: 8 additions & 0 deletions

```diff
@@ -12,6 +12,14 @@ def imports(cls, preset_name: str):
         return f'''
 from eth2spec.deneb import {preset_name} as deneb
 '''
+
+
+    @classmethod
+    def sundry_functions(cls) -> str:
+        return """
+def retrieve_column_sidecars(beacon_block_root: Root) -> Sequence[DataColumnSidecar]:
+    return []
+"""

     @classmethod
     def hardcoded_custom_type_dep_constants(cls, spec_object) -> str:
```

specs/_features/eip7594/das-core.md

Lines changed: 15 additions & 123 deletions

```diff
@@ -18,26 +18,19 @@
 - [Containers](#containers)
   - [`DataColumnSidecar`](#datacolumnsidecar)
   - [`MatrixEntry`](#matrixentry)
-  - [Helper functions](#helper-functions)
-    - [`get_custody_columns`](#get_custody_columns)
-    - [`compute_extended_matrix`](#compute_extended_matrix)
-    - [`recover_matrix`](#recover_matrix)
-    - [`get_data_column_sidecars`](#get_data_column_sidecars)
-    - [`get_extended_sample_count`](#get_extended_sample_count)
+- [Helper functions](#helper-functions)
+  - [`get_custody_columns`](#get_custody_columns)
+  - [`compute_extended_matrix`](#compute_extended_matrix)
+  - [`recover_matrix`](#recover_matrix)
+  - [`get_data_column_sidecars`](#get_data_column_sidecars)
 - [Custody](#custody)
   - [Custody requirement](#custody-requirement)
   - [Public, deterministic selection](#public-deterministic-selection)
-- [Peer discovery](#peer-discovery)
+- [Subnet sampling](#subnet-sampling)
 - [Extended data](#extended-data)
 - [Column gossip](#column-gossip)
   - [Parameters](#parameters)
-- [Peer sampling](#peer-sampling)
-  - [Sample selection](#sample-selection)
-  - [Sample queries](#sample-queries)
-- [Peer scoring](#peer-scoring)
 - [Reconstruction and cross-seeding](#reconstruction-and-cross-seeding)
-- [DAS providers](#das-providers)
-- [A note on fork choice](#a-note-on-fork-choice)
 - [FAQs](#faqs)
   - [Row (blob) custody](#row-blob-custody)
   - [Subnet stability](#subnet-stability)
@@ -75,15 +68,14 @@ The following values are (non-configurable) constants used throughout the specif

 | Name | Value | Description |
 | - | - | - |
-| `DATA_COLUMN_SIDECAR_SUBNET_COUNT` | `32` | The number of data column sidecar subnets used in the gossipsub protocol |
+| `DATA_COLUMN_SIDECAR_SUBNET_COUNT` | `128` | The number of data column sidecar subnets used in the gossipsub protocol |

 ### Custody setting

 | Name | Value | Description |
 | - | - | - |
 | `SAMPLES_PER_SLOT` | `8` | Number of `DataColumnSidecar` random samples a node queries per slot |
-| `CUSTODY_REQUIREMENT` | `1` | Minimum number of subnets an honest node custodies and serves samples from |
-| `TARGET_NUMBER_OF_PEERS` | `70` | Suggested minimum peer count |
+| `CUSTODY_REQUIREMENT` | `4` | Minimum number of subnets an honest node custodies and serves samples from |

 ### Containers

@@ -109,9 +101,9 @@ class MatrixEntry(Container):
     row_index: RowIndex
 ```

-### Helper functions
+## Helper functions

-#### `get_custody_columns`
+### `get_custody_columns`

 ```python
 def get_custody_columns(node_id: NodeID, custody_subnet_count: uint64) -> Sequence[ColumnIndex]:
@@ -141,7 +133,7 @@ def get_custody_columns(node_id: NodeID, custody_subnet_count: uint64) -> Sequen
     ])
 ```

-#### `compute_extended_matrix`
+### `compute_extended_matrix`

 ```python
 def compute_extended_matrix(blobs: Sequence[Blob]) -> List[MatrixEntry, MAX_CELLS_IN_EXTENDED_MATRIX]:
@@ -164,7 +156,7 @@ def compute_extended_matrix(blobs: Sequence[Blob]) -> List[MatrixEntry, MAX_CELL
     return extended_matrix
 ```

-#### `recover_matrix`
+### `recover_matrix`

 ```python
 def recover_matrix(partial_matrix: Sequence[MatrixEntry],
@@ -191,7 +183,7 @@ def recover_matrix(partial_matrix: Sequence[MatrixEntry],
     return extended_matrix
 ```

-#### `get_data_column_sidecars`
+### `get_data_column_sidecars`

 ```python
 def get_data_column_sidecars(signed_block: SignedBeaconBlock,
@@ -227,48 +219,6 @@ def get_data_column_sidecars(signed_block: SignedBeaconBlock,
     return sidecars
 ```

-#### `get_extended_sample_count`
-
-```python
-def get_extended_sample_count(allowed_failures: uint64) -> uint64:
-    assert 0 <= allowed_failures <= NUMBER_OF_COLUMNS // 2
-    """
-    Return the sample count if allowing failures.
-
-    This helper demonstrates how to calculate the number of columns to query per slot when
-    allowing given number of failures, assuming uniform random selection without replacement.
-    Nested functions are direct replacements of Python library functions math.comb and
-    scipy.stats.hypergeom.cdf, with the same signatures.
-    """
-
-    def math_comb(n: int, k: int) -> int:
-        if not 0 <= k <= n:
-            return 0
-        r = 1
-        for i in range(min(k, n - k)):
-            r = r * (n - i) // (i + 1)
-        return r
-
-    def hypergeom_cdf(k: uint64, M: uint64, n: uint64, N: uint64) -> float:
-        # NOTE: It contains float-point computations.
-        # Convert uint64 to Python integers before computations.
-        k = int(k)
-        M = int(M)
-        n = int(n)
-        N = int(N)
-        return sum([math_comb(n, i) * math_comb(M - n, N - i) / math_comb(M, N)
-                    for i in range(k + 1)])
-
-    worst_case_missing = NUMBER_OF_COLUMNS // 2 + 1
-    false_positive_threshold = hypergeom_cdf(0, NUMBER_OF_COLUMNS,
-                                             worst_case_missing, SAMPLES_PER_SLOT)
-    for sample_count in range(SAMPLES_PER_SLOT, NUMBER_OF_COLUMNS + 1):
-        if hypergeom_cdf(allowed_failures, NUMBER_OF_COLUMNS,
-                         worst_case_missing, sample_count) <= false_positive_threshold:
-            break
-    return sample_count
-```
-
 ## Custody

 ### Custody requirement
@@ -285,17 +235,9 @@ The particular columns that a node custodies are selected pseudo-randomly as a f

 *Note*: increasing the `custody_size` parameter for a given `node_id` extends the returned list (rather than being an entirely new shuffle) such that if `custody_size` is unknown, the default `CUSTODY_REQUIREMENT` will be correct for a subset of the node's custody.

-## Peer discovery
-
-At each slot, a node needs to be able to readily sample from *any* set of columns. To this end, a node SHOULD find and maintain a set of diverse and reliable peers that can regularly satisfy their sampling demands.
-
-A node runs a background peer discovery process, maintaining at least `TARGET_NUMBER_OF_PEERS` of various custody distributions (both `custody_size` and column assignments). The combination of advertised `custody_size` size and public node-id make this readily and publicly accessible.
-
-`TARGET_NUMBER_OF_PEERS` should be tuned upward in the event of failed sampling.
-
-*Note*: while high-capacity and super-full nodes are high value with respect to satisfying sampling requirements, a node SHOULD maintain a distribution across node capacities as to not centralize the p2p graph too much (in the extreme becomes hub/spoke) and to distribute sampling load better across all nodes.
-
-*Note*: A DHT-based peer discovery mechanism is expected to be utilized in the above. The beacon-chain network currently utilizes discv5 in a similar method as described for finding peers of particular distributions of attestation subnets. Additional peer discovery methods are valuable to integrate (e.g., latent peer discovery via libp2p gossipsub) to add a defense in breadth against one of the discovery methods being attacked.
+## Subnet sampling
+
+At each slot, a node advertising `custody_subnet_count` downloads a minimum of `subnet_sampling_size = max(SAMPLES_PER_SLOT, custody_subnet_count)` total subnets. The corresponding set of columns is selected by `get_custody_columns(node_id, subnet_sampling_size)`, so that in particular the subset of columns to custody is consistent with the output of `get_custody_columns(node_id, custody_subnet_count)`. Sampling is considered successful if the node manages to retrieve all selected columns.

 ## Extended data

@@ -309,36 +251,6 @@ For each column -- use `data_column_sidecar_{subnet_id}` subnets, where `subnet_

 Verifiable samples from their respective column are distributed on the assigned subnet. To custody a particular column, a node joins the respective gossipsub subnet. If a node fails to get a column on the column subnet, a node can also utilize the Req/Resp protocol to query the missing column from other peers.

-## Peer sampling
-
-### Sample selection
-
-At each slot, a node SHOULD select at least `SAMPLES_PER_SLOT` column IDs for sampling. It is recommended to use uniform random selection without replacement based on local randomness. Sampling is considered successful if the node manages to retrieve all selected columns.
-
-Alternatively, a node MAY use a method that selects more than `SAMPLES_PER_SLOT` columns while allowing some missing, respecting the same target false positive threshold (the probability of successful sampling of an unavailable block) as dictated by the `SAMPLES_PER_SLOT` parameter. If using uniform random selection without replacement, a node can use the `get_extended_sample_count(allowed_failures) -> sample_count` helper function to determine the sample count (number of unique column IDs) for any selected number of allowed failures. Sampling is then considered successful if any `sample_count - allowed_failures` columns are retrieved successfully.
-
-For reference, the table below shows the number of samples and the number of allowed missing columns assuming `NUMBER_OF_COLUMNS = 128` and `SAMPLES_PER_SLOT = 16`.
-
-| Allowed missing | 0| 1| 2| 3| 4| 5| 6| 7| 8|
-|-----------------|--|--|--|--|--|--|--|--|--|
-| Sample count |16|20|24|27|29|32|35|37|40|
-
-### Sample queries
-
-A node SHOULD maintain a diverse set of peers for each column and each slot by verifying responsiveness to sample queries.
-
-A node SHOULD query for samples from selected peers via `DataColumnSidecarsByRoot` request. A node utilizes `get_custody_columns` helper to determine which peer(s) it could request from, identifying a list of candidate peers for each selected column.
-
-If more than one candidate peer is found for a given column, a node SHOULD randomize its peer selection to distribute sample query load in the network. Nodes MAY use peer scoring to tune this selection (for example, by using weighted selection or by using a cut-off threshold). If possible, it is also recommended to avoid requesting many columns from the same peer in order to avoid relying on and exposing the sample selection to a single peer.
-
-If a node already has a column because of custody, it is not required to send out queries for that column.
-
-If a node has enough good/honest peers across all columns, and the data is being made available, the above procedure has a high chance of success.
-
-## Peer scoring
-
-Due to the deterministic custody functions, a node knows exactly what a peer should be able to respond to. In the event that a peer does not respond to samples of their custodied rows/columns, a node may downscore or disconnect from a peer.
-
 ## Reconstruction and cross-seeding

 If the node obtains 50%+ of all the columns, it SHOULD reconstruct the full data matrix via `recover_matrix` helper. Nodes MAY delay this reconstruction allowing time for other columns to arrive over the network. If delaying reconstruction, nodes may use a random delay in order to desynchronize reconstruction among nodes, thus reducing overall CPU load.
@@ -351,26 +263,6 @@ Once the node obtains a column through reconstruction, the node MUST expose the

 *Note*: There may be anti-DoS and quality-of-service considerations around how to send samples and consider samples -- is each individual sample a message or are they sent in aggregate forms.

-## DAS providers
-
-A DAS provider is a consistently-available-for-DAS-queries, super-full (or high capacity) node. To the p2p, these look just like other nodes but with high advertised capacity, and they should generally be able to be latently found via normal discovery.
-
-DAS providers can also be found out-of-band and configured into a node to connect to directly and prioritize. Nodes can add some set of these to their local configuration for persistent connection to bolster their DAS quality of service.
-
-Such direct peering utilizes a feature supported out of the box today on all nodes and can complement (and reduce attackability and increase quality-of-service) alternative peer discovery mechanisms.
-
-## A note on fork choice
-
-*Fork choice spec TBD, but it will just be a replacement of `is_data_available()` call in Deneb with column sampling instead of full download. Note the `is_data_available(slot_N)` will likely do a `-1` follow distance so that you just need to check the availability of slot `N-1` for slot `N` (starting with the block proposer of `N`).*
-
-The fork choice rule (essentially a DA filter) is *orthogonal to a given DAS design*, other than the efficiency of a particular design impacting it.
-
-In any DAS design, there are probably a few degrees of freedom around timing, acceptability of short-term re-orgs, etc.
-
-For example, the fork choice rule might require validators to do successful DAS on slot `N` to be able to include block of slot `N` in its fork choice. That's the tightest DA filter. But trailing filters are also probably acceptable, knowing that there might be some failures/short re-orgs but that they don't hurt the aggregate security. For example, the rule could be — DAS must be completed for slot N-1 for a child block in N to be included in the fork choice.
-
-Such trailing techniques and their analysis will be valuable for any DAS construction. The question is — can you relax how quickly you need to do DA and in the worst case not confirm unavailable data via attestations/finality, and what impact does it have on short-term re-orgs and fast confirmation rules.
-
 ## FAQs

 ### Row (blob) custody
```
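The new "Subnet sampling" section in this diff replaces per-peer sampling with a single subnet-level download rule. A minimal illustrative sketch of that rule (Python; not spec code, constants taken from the updated configs):

```python
# Illustrative sketch (not spec code) of the subnet sampling rule added in
# das-core.md: a node samples at least SAMPLES_PER_SLOT subnets per slot,
# and at least as many subnets as it custodies.
SAMPLES_PER_SLOT = 8     # from the updated configs
CUSTODY_REQUIREMENT = 4  # from the updated configs

def subnet_sampling_size(custody_subnet_count: int) -> int:
    return max(SAMPLES_PER_SLOT, custody_subnet_count)

# A minimal-custody node (4 subnets) still samples 8 subnets; a node
# custodying more than SAMPLES_PER_SLOT subnets samples all of them.
minimal = subnet_sampling_size(CUSTODY_REQUIREMENT)  # 8
heavy = subnet_sampling_size(32)                     # 32
```

Because `get_custody_columns(node_id, n)` extends its output as `n` grows, the sampled column set always contains the custodied column set, so custody and sampling stay consistent.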
Lines changed: 97 additions & 0 deletions (new file)

```diff
@@ -0,0 +1,97 @@
+# EIP-7594 -- Fork Choice
+
+## Table of contents
+<!-- TOC -->
+<!-- START doctoc generated TOC please keep comment here to allow auto update -->
+<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
+
+- [Introduction](#introduction)
+- [Helpers](#helpers)
+  - [Modified `is_data_available`](#modified-is_data_available)
+- [Updated fork-choice handlers](#updated-fork-choice-handlers)
+  - [Modified `on_block`](#modified-on_block)
+
+<!-- END doctoc generated TOC please keep comment here to allow auto update -->
+<!-- /TOC -->
+
+## Introduction
+
+This is the modification of the fork choice accompanying EIP-7594.
+
+## Helpers
+
+### Modified `is_data_available`
+
+```python
+def is_data_available(beacon_block_root: Root) -> bool:
+    # `retrieve_column_sidecars` is implementation and context dependent, replacing
+    # `retrieve_blobs_and_proofs`. For the given block root, it returns all column
+    # sidecars to sample, or raises an exception if they are not available.
+    # The p2p network does not guarantee sidecar retrieval outside of
+    # `MIN_EPOCHS_FOR_DATA_COLUMN_SIDECARS_REQUESTS` epochs.
+    column_sidecars = retrieve_column_sidecars(beacon_block_root)
+    return all(
+        verify_data_column_sidecar_kzg_proofs(column_sidecar)
+        for column_sidecar in column_sidecars
+    )
+```
+
+## Updated fork-choice handlers
+
+### Modified `on_block`
+
+*Note*: The only modification is that `is_data_available` does not take `blob_kzg_commitments` as input.
+
+```python
+def on_block(store: Store, signed_block: SignedBeaconBlock) -> None:
+    """
+    Run ``on_block`` upon receiving a new block.
+    """
+    block = signed_block.message
+    # Parent block must be known
+    assert block.parent_root in store.block_states
+    # Make a copy of the state to avoid mutability issues
+    state = copy(store.block_states[block.parent_root])
+    # Blocks cannot be in the future. If they are, their consideration must be delayed until they are in the past.
+    assert get_current_slot(store) >= block.slot
+
+    # Check that block is later than the finalized epoch slot (optimization to reduce calls to get_ancestor)
+    finalized_slot = compute_start_slot_at_epoch(store.finalized_checkpoint.epoch)
+    assert block.slot > finalized_slot
+    # Check block is a descendant of the finalized block at the checkpoint finalized slot
+    finalized_checkpoint_block = get_checkpoint_block(
+        store,
+        block.parent_root,
+        store.finalized_checkpoint.epoch,
+    )
+    assert store.finalized_checkpoint.root == finalized_checkpoint_block
+
+    # [Modified in EIP7594]
+    assert is_data_available(hash_tree_root(block))
+
+    # Check the block is valid and compute the post-state
+    block_root = hash_tree_root(block)
+    state_transition(state, signed_block, True)
+
+    # Add new block to the store
+    store.blocks[block_root] = block
+    # Add new state for this block to the store
+    store.block_states[block_root] = state
+
+    # Add block timeliness to the store
+    time_into_slot = (store.time - store.genesis_time) % SECONDS_PER_SLOT
+    is_before_attesting_interval = time_into_slot < SECONDS_PER_SLOT // INTERVALS_PER_SLOT
+    is_timely = get_current_slot(store) == block.slot and is_before_attesting_interval
+    store.block_timeliness[hash_tree_root(block)] = is_timely
+
+    # Add proposer score boost if the block is timely and not conflicting with an existing block
+    is_first_block = store.proposer_boost_root == Root()
+    if is_timely and is_first_block:
+        store.proposer_boost_root = hash_tree_root(block)
+
+    # Update checkpoints in store if necessary
+    update_checkpoints(store, state.current_justified_checkpoint, state.finalized_checkpoint)
+
+    # Eagerly compute unrealized justification and finality.
+    compute_pulled_up_tip(store, block_root)
+```
```
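For intuition, the shape of the modified `is_data_available` can be exercised with stubbed dependencies, mirroring the test stub this commit adds in `pysetup/spec_builders/eip7594.py`, where `retrieve_column_sidecars` returns an empty list. This is an illustrative harness with assumed stand-in types, not spec code:

```python
# Illustrative harness (not spec code): stand-in types and stubbed
# dependencies are assumptions made for demonstration only.
from typing import Sequence

Root = bytes              # stand-in for the SSZ Root type
DataColumnSidecar = dict  # stand-in container

def retrieve_column_sidecars(beacon_block_root: Root) -> Sequence[DataColumnSidecar]:
    # Test stub, as in the spec builder's sundry_functions: no sidecars.
    return []

def verify_data_column_sidecar_kzg_proofs(column_sidecar: DataColumnSidecar) -> bool:
    # Assumed always-valid stub; the real helper checks KZG proofs.
    return True

def is_data_available(beacon_block_root: Root) -> bool:
    column_sidecars = retrieve_column_sidecars(beacon_block_root)
    return all(
        verify_data_column_sidecar_kzg_proofs(column_sidecar)
        for column_sidecar in column_sidecars
    )

# `all(...)` over an empty sequence is vacuously True, which is why the
# empty stub makes spec tests treat data as available by default.
available = is_data_available(b"\x00" * 32)
```

A real client would instead have `retrieve_column_sidecars` raise when sidecars cannot be fetched, causing `on_block` to reject the block.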
