
Conversation

@philknows
Member

Motivation

This marks the release of v1.39.0 RC.4. Supersedes #8766.

twoeths and others added 30 commits December 11, 2025 14:19
**Motivation**

- once we have `state-transition-z`, we will no longer be able to get
`index2pubkey` from a light view of the BeaconState in `beacon-node`

**Description**

- in `beacon-node`, use the `index2pubkey` cache of BeaconChain instead, as
preparation for working with `state-transition-z`
- it's fine to use `state.epochCtx.index2pubkey` in `state-transition`
since the full state is accessible there

part of #8652
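The idea above can be sketched as a chain-level cache that block processing code queries instead of the state's `epochCtx`. This is a minimal illustration with hypothetical names (`ChainPubkeyCache`, `addPubkey`, `getPubkey`), not Lodestar's actual API:

```typescript
// Minimal sketch (hypothetical names): a chain-level index2pubkey cache
// that beacon-node code can use without touching the state's epochCtx.
type ValidatorIndex = number;
type PublicKey = Uint8Array;

class ChainPubkeyCache {
  private index2pubkey: PublicKey[] = [];

  // Called whenever new validators are onboarded.
  addPubkey(index: ValidatorIndex, pubkey: PublicKey): void {
    this.index2pubkey[index] = pubkey;
  }

  // Beacon-node code asks the chain, not the state, for the mapping.
  getPubkey(index: ValidatorIndex): PublicKey {
    const pubkey = this.index2pubkey[index];
    if (pubkey === undefined) {
      throw new Error(`Unknown validator index ${index}`);
    }
    return pubkey;
  }
}

const cache = new ChainPubkeyCache();
cache.addPubkey(0, new Uint8Array(48).fill(1));
console.log(cache.getPubkey(0).length); // 48
```

A light state view then only needs validator indices; pubkey resolution stays with the chain.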

---------

Co-authored-by: Tuyen Nguyen <twoeths@users.noreply.github.com>
**Motivation**

All networks have completed the merge transition, and most execution
clients no longer support pre-merge, so it's no longer possible to
run a network from a genesis before bellatrix, unless you keep it to
phase0/altair only, which still works after this PR is merged.

This code is effectively tech debt: it is no longer exercised and just gets
in the way when doing refactors.

**Description**

Removes all code related to performing the merge transition. Running the
node pre-merge (CL only mode) is still possible and syncing still works.
Also removed a few CLI flags we added for the merge specifically, those
shouldn't be used anymore. Spec constants like
`TERMINAL_TOTAL_DIFFICULTY` are kept for spec compliance and ssz types
(like `PowBlock`) as well. I had to disable a few spec tests related to
handling the merge block since those code paths are removed.

Closes #8661
**Motivation**

- as a preparation for lodestar-z integration, we should not access
pubkey2index from CachedBeaconState

**Description**

- use that from BeaconChain instead

part of #8652

Co-authored-by: Tuyen Nguyen <twoeths@users.noreply.github.com>
**Motivation**

- improve memory usage by transferring gossipsub message data from the
network thread to the main thread
- for snappy decompression in #8647 we had to use `Buffer.alloc()` instead
of `Buffer.allocUnsafe()`. We don't have to feel bad about that because
`Buffer.allocUnsafe()` would not work with this PR anyway, and we don't waste
any memory.

**Description**

- use the `transferList` param when posting messages from the network thread
to the main thread

part of #8629
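The mechanism can be sketched with Node's `worker_threads` message ports: listing a buffer in the transfer list moves ownership to the receiving thread instead of copying the bytes. This is a minimal standalone sketch, not Lodestar's actual network-thread wiring:

```typescript
// Sketch: transferring an ArrayBuffer via postMessage's transferList moves
// it to the receiver instead of copying it; the sender's view is detached.
import { MessageChannel } from "node:worker_threads";

const { port1, port2 } = new MessageChannel();

const data = new Uint8Array(1024).fill(7);

port2.on("message", (msg: Uint8Array) => {
  // The receiver now owns the underlying memory.
  console.log(msg.length); // 1024
  port1.close();
  port2.close();
});

// Listing data.buffer in the transferList detaches it from the sender.
port1.postMessage(data, [data.buffer]);
console.log(data.byteLength); // 0: the sender's view is detached
```

This is why `Buffer.allocUnsafe()` is incompatible here: unsafe allocations come from a shared pool whose backing buffer cannot be transferred independently.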

**Testing**
I've tested this on `feat2` for 3 days; the previous branch was #8671, so
it's basically the current stable. There is no significant improvement
overall, but some good data across different nodes:
- no change on 1k or `novc`
- on the hoodi `sas` node we have better memory on the main thread with the
same mesh peers, and the same memory on the network thread

<img width="851" height="511" alt="Screenshot 2025-12-12 at 11 05 27"
src="https://github.com/user-attachments/assets/8d7b2c2f-8213-4f89-87e0-437d016bc24a"
/>

- on the mainnet `sas` node, we have better memory on the network thread, and
a little bit worse on the main thread
<img width="854" height="504" alt="Screenshot 2025-12-12 at 11 08 42"
src="https://github.com/user-attachments/assets/7e638149-2dbe-4c7e-849c-ef78f6ff4d6f"
/>

- but for this mainnet node, the most interesting metric is `forward msg
avg peers`: we're faster than the majority of peers

<img width="1378" height="379" alt="Screenshot 2025-12-12 at 11 11 00"
src="https://github.com/user-attachments/assets/3ba5eeaa-5a11-4cad-adfa-1e0f68a81f16"
/>

---------

Co-authored-by: Tuyen Nguyen <twoeths@users.noreply.github.com>
**Motivation**

All networks are post-electra now and the transition period is complete,
which means that due to [EIP-6110](https://eips.ethereum.org/EIPS/eip-6110)
we no longer need to process deposits via the eth1 bridge, as they are now
processed by the execution layer.

This code is effectively tech debt: it is no longer exercised and just gets
in the way when doing refactors.

**Description**

Removes all code related to the eth1 bridge mechanism for including new
deposits:

- removed all eth1-related code; we can no longer produce blocks with
deposits pre-electra (syncing such blocks still works)
- building a genesis state from eth1 is no longer supported (kept only for
testing)
- removed various db repositories related to deposits/eth1 data
- removed various `lodestar_eth1_*` metrics and dashboard panels
- deprecated all `--eth1.*` flags (but kept for backward compatibility)
- moved shared utility functions from eth1 to execution engine module

Closes #7682
Closes #8654
`yarn build:watch` and `yarn build:ifchanged` no longer work since
#8675 because `lerna exec`
requires installing a separate package, `@lerna-lite/exec`, to work
properly.
**Motivation**

- as a preparation for lodestar-z integration, we should not access
config from any cached BeaconState

**Description**

- use chain.config instead

part of #8652

---------

Co-authored-by: Tuyen Nguyen <twoeths@users.noreply.github.com>
**Motivation**

<!-- Why does this PR exist? What are the goals of the pull request? -->

This PR fixes eth-clients/mainnet#13.

Also created eth-clients/mainnet#14.

**Motivation**

As noted in
#8680 (comment),
we can no longer sync through bellatrix. While I don't think that's a big
deal, it's simple enough to keep the functionality: the code is pretty
isolated, won't get in our way during refactors, and with gloas it won't be
part of the block processing pipeline anymore due to block/payload
separation.


**Description**

Restore the code required to sync through bellatrix:
- re-added `isExecutionEnabled()` and `isMergeTransitionComplete()`
checks during block processing
- re-enabled some spec tests that were previously skipped
- mostly copied the original code removed in
[#8680](#8680) but cleaned up
some comments and simplified it a bit
…8669)

**Motivation**

Closes #8606

**Description**

This updates our implementation to be compliant with the latest spec,
ethereum/beacon-APIs#368.

For sync committee aggregation selection (unchanged):
- we call `submitSyncCommitteeSelections` at the start of the slot
- the timeout is still based on `CONTRIBUTION_DUE_BPS` into the slot (8
seconds)
- we call the endpoint for all duties of this slot
- logic has been moved to the duties service


For attestation aggregation selection:
- we call `submitBeaconCommitteeSelections` at the start of the epoch
for the current and next epoch (2 separate calls)
- the timeout uses the default, which is based on `SLOT_DURATION_MS` (12
seconds)
- we only call `prepareBeaconCommitteeSubnet` once the above call has either
resolved or failed; this should be fine as it's not that time sensitive
(one epoch lookahead)
- if duties are reorged, we will call `submitBeaconCommitteeSelections`
with the duties of the affected epoch
- logic has been moved to the duties service


Previous PR #5344
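The two timeouts above can be sketched as follows. The constant values here are assumptions for illustration (`CONTRIBUTION_DUE_BPS = 6667` basis points of a 12s slot, which matches the ~8s mentioned above); the spec defines the real ones:

```typescript
// Sketch of the timeout derivation described above. Constant values are
// assumptions for illustration; the consensus spec defines the real ones.
const SLOT_DURATION_MS = 12_000; // 12s slots (mainnet preset)
const CONTRIBUTION_DUE_BPS = 6_667; // assumed: basis points into the slot

// Timeout for sync committee selections: a fraction of the slot.
function contributionTimeoutMs(slotMs: number, dueBps: number): number {
  return Math.floor((slotMs * dueBps) / 10_000);
}

// Timeout for attestation selections: the default, one full slot.
const attestationSelectionTimeoutMs = SLOT_DURATION_MS;

console.log(contributionTimeoutMs(SLOT_DURATION_MS, CONTRIBUTION_DUE_BPS)); // 8000
console.log(attestationSelectionTimeoutMs); // 12000
```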
…8708)

Since #8669 we might call the
committee selection APIs even if we don't have any duties, which is
unnecessary, and charon doesn't like it.

```
lodestar-1  | Dec-19 16:16:47.001[]                error: Error on sync committee aggregation selection slot=13278082 - JSON is not an array
lodestar-1  | Error: JSON is not an array
lodestar-1  |     at value_fromJsonArray (file:///usr/app/node_modules/@chainsafe/ssz/src/type/arrayBasic.ts:162:11)
lodestar-1  |     at ListCompositeType.fromJson (file:///usr/app/node_modules/@chainsafe/ssz/src/type/array.ts:121:12)
lodestar-1  |     at ApiResponse.value (file:///usr/app/packages/api/src/utils/client/response.ts:115:51)
lodestar-1  |     at SyncCommitteeDutiesService.runDistributedAggregationSelectionTasks (file:///usr/app/packages/validator/src/services/syncCommitteeDuties.ts:385:36)
lodestar-1  |     at processTicksAndRejections (node:internal/process/task_queues:103:5)
```
These errors aren't really critical and might be common right now
because we moved from per-slot to per-epoch in
#8669; if not all validator
clients do the same, calls will time out when the signature threshold in the
DVT middleware is not reached.
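The fix amounts to a guard before the API call. A minimal sketch with hypothetical names (`submitSelections`, `Duty`), not the actual service code:

```typescript
// Sketch (hypothetical names) of the guard described above: skip the
// distributed committee selection API call entirely when there are no duties.
interface Duty {
  slot: number;
  validatorIndex: number;
}

async function submitSelections(
  duties: Duty[],
  callApi: (duties: Duty[]) => Promise<void>
): Promise<boolean> {
  if (duties.length === 0) {
    // Nothing to select: avoid an unnecessary call that DVT middleware
    // like charon would reject or let time out.
    return false;
  }
  await callApi(duties);
  return true;
}

(async () => {
  let called = false;
  const submitted = await submitSelections([], async () => {
    called = true;
  });
  console.log(submitted, called); // false false
})();
```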
**Motivation**

- we will not be able to access `pubkey2index` or `index2pubkey` once we
switch to a native state transition, so we need to be prepared for that

**Description**

- pass `pubkey2index` and `index2pubkey` from the cli instead
- in the future, we should find a way to extract them given a
BeaconState so that we don't have to depend on any implementations of
BeaconStateView, see
#8706 (comment)

Closes #8652

---------

Co-authored-by: Tuyen Nguyen <twoeths@users.noreply.github.com>
**Motivation**

- we use the whole CachedBeaconStateAllForks to get all block
signatures; it turns out we only need the validator indices of the current
SyncCommittee

**Description**

given this api on the config:
```typescript
getDomain(domainSlot: Slot, domainType: DomainType, messageSlot?: Slot): Uint8Array
```

we currently pass `state.slot` as the 1st param. However, it's the same
as `block.slot` in `state-transition`, and the same epoch when we verify
blocks in batch in
[beacon-node](https://github.com/ChainSafe/lodestar/blob/b255111a2013d43d5f65889274294e2740493c28/packages/beacon-node/src/chain/blocks/verifyBlock.ts#L62)

- so we can just use `block.slot` instead of passing the whole
CachedBeaconStateAllForks to the `getBlockSignatureSets()` api
- we still have to pass in `currentSyncCommitteeIndexed` explicitly

part of #8650
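The narrowing described above can be sketched like this. All shapes here are hypothetical simplifications (`BeaconConfig`, `SignatureSetInput`, the `"DOMAIN_SYNC_COMMITTEE"` tag), not Lodestar's real types:

```typescript
// Sketch (hypothetical shapes): the signature-set helper needs only the
// config, the block slot and the indexed sync committee, not the whole
// cached state.
type Slot = number;
type DomainType = string;

interface BeaconConfig {
  getDomain(domainSlot: Slot, domainType: DomainType, messageSlot?: Slot): Uint8Array;
}

interface SignatureSetInput {
  config: BeaconConfig;
  blockSlot: Slot; // used where we previously read state.slot
  currentSyncCommitteeIndexed: number[]; // now passed in explicitly
}

function getSyncCommitteeSigningDomain(input: SignatureSetInput): Uint8Array {
  // state.slot and block.slot resolve to the same epoch during batch
  // verification, so the block slot is a safe substitute for the state slot.
  return input.config.getDomain(input.blockSlot, "DOMAIN_SYNC_COMMITTEE");
}

// Stub config for illustration: returns a fixed 32-byte domain.
const config: BeaconConfig = { getDomain: () => new Uint8Array(32) };
console.log(
  getSyncCommitteeSigningDomain({ config, blockSlot: 123, currentSyncCommitteeIndexed: [] }).length
); // 32
```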

---------

Co-authored-by: Tuyen Nguyen <twoeths@users.noreply.github.com>
…ition (#8716)

**Motivation**


#8711 (review)

**Description**

Prevent duplicate aggregates from passing gossip validation due to a race
condition by checking again whether we've seen the aggregate before inserting
it into the op pool. This is required since we run multiple async operations
between the first check and the insertion into the op pool.


<img width="942" height="301" alt="image"
src="https://github.com/user-attachments/assets/2701a92e-7733-4de3-bf4a-ac853fd5c0b7"
/>

`AlreadyKnown` disappears since we now filter those out properly during
gossip validation, which is important because we don't want to re-gossip
those aggregates.
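The check-then-act race and its fix can be sketched in isolation. Names here (`seenAggregates`, `onAggregate`, `asyncValidation`) are hypothetical stand-ins for the real gossip handler:

```typescript
// Sketch of the check-then-act race described above: we await async work
// between the first "seen" check and the op pool insert, so a concurrent
// duplicate can slip through unless we re-check at insertion time.
const seenAggregates = new Set<string>();
const opPool: string[] = [];

async function asyncValidation(_root: string): Promise<void> {
  await new Promise((resolve) => setTimeout(resolve, 10));
}

async function onAggregate(root: string): Promise<void> {
  if (seenAggregates.has(root)) return; // gossip validation: first check
  await asyncValidation(root); // duplicates may arrive while we await
  if (seenAggregates.has(root)) return; // re-check before inserting
  seenAggregates.add(root);
  opPool.push(root);
}

(async () => {
  // Two copies of the same aggregate arriving concurrently:
  await Promise.all([onAggregate("0xabc"), onAggregate("0xabc")]);
  console.log(opPool.length); // 1 (without the re-check it would be 2)
})();
```

Because JavaScript resumes the continuations one at a time, the second copy sees the updated set and is dropped before it reaches the pool.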
**Motivation**

- the reward apis are tightly coupled to state-transition functions like
`beforeProcessEpoch()`, `processBlock()` and `processAttestationAltair()`,
so they need to be moved there

**Description**

- move api type definitions to the `types` package so that they can be used
everywhere
- move the reward apis implementation to the `state-transition` package

Closes #8690

---------

Co-authored-by: Tuyen Nguyen <twoeths@users.noreply.github.com>
**Motivation**

A recent internal infrastructure change includes renaming default host
instance on our dashboards.

**Description**

This PR allows our dashboards to easily default to the new
infrastructure host instance for display.
**Motivation**

- @philknows wants

**Description**

- allow multiple index ranges in the validator
- this should only be used in testing, not in production
**Motivation**

Use the latest package manager, which is more aligned with multi-runtime
support.

**Description**

- Migrate yarn.lock file to pnpm-lock.yaml (`pnpm import`)
- Update the scripts to use pnpm 
- Update the workflows to use pnpm

**Steps to test or reproduce**

- Run all tests

**Useful commands migration**

| Yarn 1 | pnpm |
|---|---|
| yarn | pnpm install |
| yarn add dep | pnpm add dep |
| yarn workspace "@lodestar/config" add dep | pnpm add dep --filter "@lodestar/config" |
| yarn workspace foreach run build | pnpm -r build |
Updated Dockerfile to streamline installation and build process.

**Motivation**

After the last PR, the Dockerfile wasn't building anymore. This PR fixes
it.

Fixing
[this](https://github.com/ethpandaops/ethereum-package/actions/runs/20928843912/job/60134147175#step:5:482)
```sh
  Error [ERR_MODULE_NOT_FOUND]: Cannot find package 'triple-beam' imported from /usr/app/packages/logger/lib/interface.js
      at Object.getPackageJSONURL (node:internal/modules/package_json_reader:316:9)
      at packageResolve (node:internal/modules/esm/resolve:768:81)
      at moduleResolve (node:internal/modules/esm/resolve:858:18)
      at defaultResolve (node:internal/modules/esm/resolve:990:11)
      at #cachedDefaultResolve (node:internal/modules/esm/loader:718:20)
      at #resolveAndMaybeBlockOnLoaderThread (node:internal/modules/esm/loader:735:38)
      at ModuleLoader.resolveSync (node:internal/modules/esm/loader:764:52)
      at #resolve (node:internal/modules/esm/loader:700:17)
      at ModuleLoader.getOrCreateModuleJob (node:internal/modules/esm/loader:620:35)
      at ModuleJob.syncLink (node:internal/modules/esm/module_job:143:33) {
    code: 'ERR_MODULE_NOT_FOUND'
  }
  
  Node.js v24.12.0
```

**Motivation**

Add `semver` dependency to the workflow.

**Description**

- Add `semver` dependency to the package.json

Used here 


https://github.com/ChainSafe/lodestar/blob/b79f41e05427f9ce5f5371a93ad0a9f6b89d6f2b/.github/workflows/publish-dev.yml#L57
**Motivation**

This PR enforces using patched Node.js builds due to the [January 13,
2026 Security
Releases](https://nodejs.org/en/blog/vulnerability/december-2025-security-releases).

**Description**

This PR ensures that source builds will only use patched versions of
Node.js that are not vulnerable to the issues disclosed above.

---------

Co-authored-by: Nazar Hussain <nazarhussain@gmail.com>
**Motivation**

Make the CI publish workflow work. 

**Description**

- Add a missing dependency

Used here 


https://github.com/ChainSafe/lodestar/blob/7ac2136122a9bbe20b2c068acd55c48652740054/scripts/release/utils.mjs#L5
**Motivation**

Fix the root level binary path

**Description**

- Add `@chainsafe/lodestar` as a root-level dependency
**Motivation**

Use `packages/cli/bin/lodestar.js` as the execution path for the binary.

**Description**

- Use the binary path from the cli package

Closes #8755
twoeths and others added 8 commits January 16, 2026 15:24
**Motivation**

- to get the proposer's SignatureSet, we pass CachedBeaconStateAllForks
just to get the config and state slot

**Description**

- pass the config and state slot instead of `CachedBeaconStateAllForks`

part of #8657

Co-authored-by: Tuyen Nguyen <twoeths@users.noreply.github.com>
Refactor so our codebase is more aligned with the recent spec change. The
majority of the refactor is in `getExpectedWithdrawals`.

See ethereum/consensus-specs#4766 and ethereum/consensus-specs#4765 for
context.

- Logic for each of gloas's builder withdrawals, electra's pending
partial withdrawals, and capella's sweep withdrawals now lives in its
own respective function
- `withdrawnBalances` is replaced by `balanceAfterWithdrawals`. Instead
of tracking the amounts being withdrawn, it tracks the remaining balance of
each validator after withdrawals.

Note the builder pending withdrawal logic is outdated. Updating it is out of
scope for this refactor PR and will be done in a follow-up PR.
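The `balanceAfterWithdrawals` bookkeeping can be sketched in isolation. Shapes and names here (`queueWithdrawal`, a plain `Map` keyed by validator index) are hypothetical simplifications of the actual state-transition code:

```typescript
// Sketch (hypothetical shapes) of the bookkeeping change described above:
// instead of accumulating withdrawn amounts per validator, track the
// balance remaining after withdrawals.
const balances = new Map<number, bigint>([
  [0, 32_000_000_000n], // Gwei
  [1, 40_000_000_000n],
]);

// Seeded from current balances; decremented as withdrawals are queued.
const balanceAfterWithdrawals = new Map<number, bigint>(balances);

function queueWithdrawal(validatorIndex: number, amount: bigint): void {
  const remaining = balanceAfterWithdrawals.get(validatorIndex) ?? 0n;
  const withdrawn = amount < remaining ? amount : remaining; // cap at balance
  balanceAfterWithdrawals.set(validatorIndex, remaining - withdrawn);
}

queueWithdrawal(1, 8_000_000_000n); // partial withdrawal of validator 1
console.log(balanceAfterWithdrawals.get(1)); // 32000000000n
```

Tracking the remaining balance directly makes capping subsequent withdrawals (e.g. the sweep after pending partials) a simple lookup rather than a subtraction against a separate accumulator.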

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Both eth-docker and rocketpool rely on `node_modules/.bin/lodestar`, and we
want to avoid breaking existing deployments.

Follow-up to #8761: we run
`pnpm install --frozen-lockfile --prod` during the docker build, and right
now we don't create `.bin/lodestar` because the cli package is a dev
dependency. Our docker image and binary size doubled since we switched to
pnpm, but this fixes it by correctly pruning dev dependencies.

<img width="1854" height="210" alt="image"
src="https://github.com/user-attachments/assets/6f651370-20ca-4492-b520-ad692c936b87"
/>


<img width="1826" height="203" alt="image"
src="https://github.com/user-attachments/assets/52871f34-191e-4705-9f9f-00292c02b463"
/>


After the fix

<img width="1843" height="198" alt="image"
src="https://github.com/user-attachments/assets/f3c3a8ea-8656-454e-bb80-84b10f91d8ce"
/>
Using `pnpm prune --prod` causes strange dependency errors:

```
/home/runner/work/lodestar/lodestar/lodestar dev
Unpacking Lodestar binary, please wait...
node:internal/modules/package_json_reader:316
  throw new ERR_MODULE_NOT_FOUND(packageName, fileURLToPath(base), null);
        ^

Error [ERR_MODULE_NOT_FOUND]: Cannot find package 'bigint-buffer' imported from /tmp/caxa/applications/lodestar/aegznqzqps/0/packages/utils/lib/bytes/browser.js
    at Object.getPackageJSONURL (node:internal/modules/package_json_reader:316:9)
    at packageResolve (node:internal/modules/esm/resolve:768:81)
    at moduleResolve (node:internal/modules/esm/resolve:858:18)
    at defaultResolve (node:internal/modules/esm/resolve:990:11)
    at #cachedDefaultResolve (node:internal/modules/esm/loader:718:20)
    at #resolveAndMaybeBlockOnLoaderThread (node:internal/modules/esm/loader:735:38)
    at ModuleLoader.resolveSync (node:internal/modules/esm/loader:764:52)
    at #resolve (node:internal/modules/esm/loader:700:17)
    at ModuleLoader.getOrCreateModuleJob (node:internal/modules/esm/loader:620:35)
    at ModuleJob.syncLink (node:internal/modules/esm/module_job:143:33) {
  code: 'ERR_MODULE_NOT_FOUND'
```

we can just clean out all modules and reinstall only the prod dependencies,
which seems to be very fast with negligible impact on build times
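The workaround amounts to the following, assuming the repo root of a pnpm workspace (`--prod` and `--frozen-lockfile` are standard `pnpm install` flags):

```shell
# Instead of `pnpm prune --prod`, remove node_modules entirely and
# reinstall only production dependencies from the lockfile.
rm -rf node_modules
pnpm install --prod --frozen-lockfile
```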
@gemini-code-assist
Contributor

Summary of Changes

Hello @philknows, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request streamlines the project's infrastructure by migrating to pnpm and removing outdated Eth1-related functionalities. These changes enhance build efficiency, simplify the codebase by eliminating deprecated modules, and centralize execution layer utilities, aligning the project with the current state of the Ethereum network.

Highlights

  • Yarn to pnpm Migration: The project has fully migrated from Yarn to pnpm for dependency management and build orchestration. This includes updates to configuration files (.devcontainer/devcontainer.json, package.json, lerna.json), GitHub Actions workflows, Dockerfiles, and documentation (CONTRIBUTING.md, README.md, docs/).
  • Removal of Eth1 Deposit and Merge Tracking: All Eth1 deposit tracking and merge block tracking logic has been removed. This functionality is no longer necessary as all Ethereum networks have successfully transitioned to Proof-of-Stake. Related files, database repositories, API endpoints, and configuration parameters have been deprecated or eliminated.
  • Execution Layer Utilities Refactoring: Utility functions and types related to the Execution Layer (EL) JSON-RPC client, hex encoding, and JWT handling have been centralized within packages/beacon-node/src/execution/engine/utils.ts, jsonRpcHttpClient.ts, and jwt.ts respectively. These were previously scattered across eth1 related modules.
  • Global Pubkey/Index Caches: The pubkey2index and index2pubkey caches are now globally managed within the BeaconChain instance, rather than being tied to epochCtx. This streamlines access to validator public key and index mappings across various chain operations.
  • Grafana Dashboard Updates: Grafana dashboards have been updated to reflect the removal of Eth1 tracking, with 'Eth1 Stats' and 'Merge Tracking' panels being removed, pluginVersion bumps, and instance name changes.
  • Node.js Engine Version Update: The required Node.js engine version has been updated to ^22.22.0 || ^24.13.0 in package.json.


Ignored Files
  • Ignored by pattern: .github/workflows/** (13)
    • .github/workflows/benchmark.yml
    • .github/workflows/binaries.yml
    • .github/workflows/docs-check.yml
    • .github/workflows/docs.yml
    • .github/workflows/publish-dev.yml
    • .github/workflows/publish-nextfork.yml
    • .github/workflows/publish-rc.yml
    • .github/workflows/publish-stable.yml
    • .github/workflows/scripts/reject_pnpm_lock_changes.sh
    • .github/workflows/test-bun.yml
    • .github/workflows/test-sim-merge.yml
    • .github/workflows/test-sim.yml
    • .github/workflows/test.yml

@github-actions
Contributor

Performance Report

✔️ no performance regression detected

Full benchmark results
Benchmark suite Current: f6978c0 Previous: null Ratio
getPubkeys - index2pubkey - req 1000 vs - 250000 vc 1.3205 ms/op
getPubkeys - validatorsArr - req 1000 vs - 250000 vc 40.788 us/op
BLS verify - blst 815.26 us/op
BLS verifyMultipleSignatures 3 - blst 1.2357 ms/op
BLS verifyMultipleSignatures 8 - blst 1.6788 ms/op
BLS verifyMultipleSignatures 32 - blst 5.0018 ms/op
BLS verifyMultipleSignatures 64 - blst 9.2836 ms/op
BLS verifyMultipleSignatures 128 - blst 17.374 ms/op
BLS deserializing 10000 signatures 688.60 ms/op
BLS deserializing 100000 signatures 6.9818 s/op
BLS verifyMultipleSignatures - same message - 3 - blst 933.01 us/op
BLS verifyMultipleSignatures - same message - 8 - blst 1.0766 ms/op
BLS verifyMultipleSignatures - same message - 32 - blst 1.7790 ms/op
BLS verifyMultipleSignatures - same message - 64 - blst 2.6586 ms/op
BLS verifyMultipleSignatures - same message - 128 - blst 4.4231 ms/op
BLS aggregatePubkeys 32 - blst 19.591 us/op
BLS aggregatePubkeys 128 - blst 69.145 us/op
getSlashingsAndExits - default max 73.198 us/op
getSlashingsAndExits - 2k 316.16 us/op
isKnown best case - 1 super set check 207.00 ns/op
isKnown normal case - 2 super set checks 207.00 ns/op
isKnown worse case - 16 super set checks 208.00 ns/op
InMemoryCheckpointStateCache - add get delete 2.4000 us/op
validate api signedAggregateAndProof - struct 1.3415 ms/op
validate gossip signedAggregateAndProof - struct 1.3486 ms/op
batch validate gossip attestation - vc 640000 - chunk 32 114.52 us/op
batch validate gossip attestation - vc 640000 - chunk 64 103.21 us/op
batch validate gossip attestation - vc 640000 - chunk 128 95.166 us/op
batch validate gossip attestation - vc 640000 - chunk 256 89.642 us/op
bytes32 toHexString 353.00 ns/op
bytes32 Buffer.toString(hex) 250.00 ns/op
bytes32 Buffer.toString(hex) from Uint8Array 324.00 ns/op
bytes32 Buffer.toString(hex) + 0x 246.00 ns/op
Object access 1 prop 0.11500 ns/op
Map access 1 prop 0.11300 ns/op
Object get x1000 5.2260 ns/op
Map get x1000 0.37000 ns/op
Object set x1000 28.217 ns/op
Map set x1000 19.812 ns/op
Return object 10000 times 0.22670 ns/op
Throw Error 10000 times 3.9759 us/op
toHex 134.80 ns/op
Buffer.from 130.87 ns/op
shared Buffer 85.321 ns/op
fastMsgIdFn sha256 / 200 bytes 1.7690 us/op
fastMsgIdFn h32 xxhash / 200 bytes 189.00 ns/op
fastMsgIdFn h64 xxhash / 200 bytes 263.00 ns/op
fastMsgIdFn sha256 / 1000 bytes 5.8120 us/op
fastMsgIdFn h32 xxhash / 1000 bytes 284.00 ns/op
fastMsgIdFn h64 xxhash / 1000 bytes 307.00 ns/op
fastMsgIdFn sha256 / 10000 bytes 50.539 us/op
fastMsgIdFn h32 xxhash / 10000 bytes 1.3590 us/op
fastMsgIdFn h64 xxhash / 10000 bytes 906.00 ns/op
100 bytes - compress - snappyjs 1.0869 us/op
100 bytes - compress - snappy 1.1522 us/op
100 bytes - compress - snappy-wasm 727.42 ns/op
100 bytes - compress - snappy-wasm - prealloc 1.2559 us/op
200 bytes - compress - snappyjs 1.8597 us/op
200 bytes - compress - snappy 1.7188 us/op
200 bytes - compress - snappy-wasm 1.4200 us/op
200 bytes - compress - snappy-wasm - prealloc 1.5419 us/op
300 bytes - compress - snappyjs 1.8988 us/op
300 bytes - compress - snappy 1.2697 us/op
300 bytes - compress - snappy-wasm 726.08 ns/op
300 bytes - compress - snappy-wasm - prealloc 2.0013 us/op
400 bytes - compress - snappyjs 2.0447 us/op
400 bytes - compress - snappy 1.4107 us/op
400 bytes - compress - snappy-wasm 1.1517 us/op
400 bytes - compress - snappy-wasm - prealloc 1.9685 us/op
500 bytes - compress - snappyjs 2.9133 us/op
500 bytes - compress - snappy 1.5144 us/op
500 bytes - compress - snappy-wasm 997.95 ns/op
500 bytes - compress - snappy-wasm - prealloc 1.2303 us/op
1000 bytes - compress - snappyjs 4.3486 us/op
1000 bytes - compress - snappy 1.6374 us/op
1000 bytes - compress - snappy-wasm 1.8287 us/op
1000 bytes - compress - snappy-wasm - prealloc 1.9687 us/op
10000 bytes - compress - snappyjs 24.447 us/op
10000 bytes - compress - snappy 19.595 us/op
10000 bytes - compress - snappy-wasm 18.951 us/op
10000 bytes - compress - snappy-wasm - prealloc 34.684 us/op
100 bytes - uncompress - snappyjs 739.54 ns/op
100 bytes - uncompress - snappy 1.1759 us/op
100 bytes - uncompress - snappy-wasm 893.25 ns/op
100 bytes - uncompress - snappy-wasm - prealloc 812.54 ns/op
200 bytes - uncompress - snappyjs 869.23 ns/op
200 bytes - uncompress - snappy 1.2914 us/op
200 bytes - uncompress - snappy-wasm 1.2563 us/op
200 bytes - uncompress - snappy-wasm - prealloc 1.4065 us/op
300 bytes - uncompress - snappyjs 988.36 ns/op
300 bytes - uncompress - snappy 1.2874 us/op
300 bytes - uncompress - snappy-wasm 1.6371 us/op
300 bytes - uncompress - snappy-wasm - prealloc 1.7082 us/op
400 bytes - uncompress - snappyjs 1.2112 us/op
400 bytes - uncompress - snappy 1.3245 us/op
400 bytes - uncompress - snappy-wasm 1.2393 us/op
400 bytes - uncompress - snappy-wasm - prealloc 1.3142 us/op
500 bytes - uncompress - snappyjs 1.1760 us/op
500 bytes - uncompress - snappy 2.2904 us/op
500 bytes - uncompress - snappy-wasm 1.0383 us/op
500 bytes - uncompress - snappy-wasm - prealloc 1.2099 us/op
1000 bytes - uncompress - snappyjs 3.0671 us/op
1000 bytes - uncompress - snappy 1.6298 us/op
1000 bytes - uncompress - snappy-wasm 1.2857 us/op
1000 bytes - uncompress - snappy-wasm - prealloc 1.5879 us/op
10000 bytes - uncompress - snappyjs 18.703 us/op
10000 bytes - uncompress - snappy 32.169 us/op
10000 bytes - uncompress - snappy-wasm 22.157 us/op
10000 bytes - uncompress - snappy-wasm - prealloc 24.416 us/op
send data - 1000 256B messages 13.326 ms/op
send data - 1000 512B messages 17.639 ms/op
send data - 1000 1024B messages 21.973 ms/op
send data - 1000 1200B messages 23.032 ms/op
send data - 1000 2048B messages 23.069 ms/op
send data - 1000 4096B messages 29.118 ms/op
send data - 1000 16384B messages 103.18 ms/op
send data - 1000 65536B messages 292.13 ms/op
enrSubnets - fastDeserialize 64 bits 883.00 ns/op
enrSubnets - ssz BitVector 64 bits 340.00 ns/op
enrSubnets - fastDeserialize 4 bits 127.00 ns/op
enrSubnets - ssz BitVector 4 bits 336.00 ns/op
prioritizePeers score -10:0 att 32-0.1 sync 2-0 229.36 us/op
prioritizePeers score 0:0 att 32-0.25 sync 2-0.25 253.50 us/op
prioritizePeers score 0:0 att 32-0.5 sync 2-0.5 364.21 us/op
prioritizePeers score 0:0 att 64-0.75 sync 4-0.75 678.08 us/op
prioritizePeers score 0:0 att 64-1 sync 4-1 827.80 us/op
array of 16000 items push then shift 1.5711 us/op
LinkedList of 16000 items push then shift 7.3670 ns/op
array of 16000 items push then pop 74.878 ns/op
LinkedList of 16000 items push then pop 7.0890 ns/op
array of 24000 items push then shift 2.3305 us/op
LinkedList of 24000 items push then shift 7.4330 ns/op
array of 24000 items push then pop 105.93 ns/op
LinkedList of 24000 items push then pop 7.2100 ns/op
intersect bitArray bitLen 8 5.7580 ns/op
intersect array and set length 8 33.169 ns/op
intersect bitArray bitLen 128 28.138 ns/op
intersect array and set length 128 541.11 ns/op
bitArray.getTrueBitIndexes() bitLen 128 949.00 ns/op
bitArray.getTrueBitIndexes() bitLen 248 1.7050 us/op
bitArray.getTrueBitIndexes() bitLen 512 3.5100 us/op
Full columns - reconstruct all 6 blobs 237.24 us/op
Full columns - reconstruct half of the blobs out of 6 90.127 us/op
Full columns - reconstruct single blob out of 6 39.873 us/op
Half columns - reconstruct all 6 blobs 263.93 ms/op
Half columns - reconstruct half of the blobs out of 6 131.41 ms/op
Half columns - reconstruct single blob out of 6 49.172 ms/op
Full columns - reconstruct all 10 blobs 315.96 us/op
Full columns - reconstruct half of the blobs out of 10 170.55 us/op
Full columns - reconstruct single blob out of 10 30.868 us/op
Half columns - reconstruct all 10 blobs 439.26 ms/op
Half columns - reconstruct half of the blobs out of 10 221.83 ms/op
Half columns - reconstruct single blob out of 10 47.924 ms/op
Full columns - reconstruct all 20 blobs 956.24 us/op
Full columns - reconstruct half of the blobs out of 20 316.83 us/op
Full columns - reconstruct single blob out of 20 30.858 us/op
Half columns - reconstruct all 20 blobs 866.13 ms/op
Half columns - reconstruct half of the blobs out of 20 434.66 ms/op
Half columns - reconstruct single blob out of 20 48.643 ms/op
Set add up to 64 items then delete first 2.0229 us/op
OrderedSet add up to 64 items then delete first 2.9729 us/op
Set add up to 64 items then delete last 2.2945 us/op
OrderedSet add up to 64 items then delete last 3.3575 us/op
Set add up to 64 items then delete middle 2.2640 us/op
OrderedSet add up to 64 items then delete middle 4.8571 us/op
Set add up to 128 items then delete first 4.5500 us/op
OrderedSet add up to 128 items then delete first 6.6696 us/op
Set add up to 128 items then delete last 4.6003 us/op
OrderedSet add up to 128 items then delete last 6.6943 us/op
Set add up to 128 items then delete middle 4.4645 us/op
OrderedSet add up to 128 items then delete middle 12.878 us/op
Set add up to 256 items then delete first 9.6590 us/op
OrderedSet add up to 256 items then delete first 13.863 us/op
Set add up to 256 items then delete last 9.1574 us/op
OrderedSet add up to 256 items then delete last 13.713 us/op
Set add up to 256 items then delete middle 9.3921 us/op
OrderedSet add up to 256 items then delete middle 40.466 us/op
pass gossip attestations to forkchoice per slot 2.4235 ms/op
forkChoice updateHead vc 100000 bc 64 eq 0 490.75 us/op
forkChoice updateHead vc 600000 bc 64 eq 0 2.9067 ms/op
forkChoice updateHead vc 1000000 bc 64 eq 0 4.8581 ms/op
forkChoice updateHead vc 600000 bc 320 eq 0 2.9766 ms/op
forkChoice updateHead vc 600000 bc 1200 eq 0 2.9953 ms/op
forkChoice updateHead vc 600000 bc 7200 eq 0 3.2712 ms/op
forkChoice updateHead vc 600000 bc 64 eq 1000 3.3907 ms/op
forkChoice updateHead vc 600000 bc 64 eq 10000 3.4989 ms/op
forkChoice updateHead vc 600000 bc 64 eq 300000 8.9877 ms/op
computeDeltas 1400000 validators 0% inactive 14.429 ms/op
computeDeltas 1400000 validators 10% inactive 13.479 ms/op
computeDeltas 1400000 validators 20% inactive 12.504 ms/op
computeDeltas 1400000 validators 50% inactive 9.8251 ms/op
computeDeltas 2100000 validators 0% inactive 21.688 ms/op
computeDeltas 2100000 validators 10% inactive 20.236 ms/op
computeDeltas 2100000 validators 20% inactive 18.893 ms/op
computeDeltas 2100000 validators 50% inactive 14.786 ms/op
altair processAttestation - 250000 vs - 7PWei normalcase 1.8479 ms/op
altair processAttestation - 250000 vs - 7PWei worstcase 2.6613 ms/op
altair processAttestation - setStatus - 1/6 committees join 115.45 us/op
altair processAttestation - setStatus - 1/3 committees join 227.67 us/op
altair processAttestation - setStatus - 1/2 committees join 319.02 us/op
altair processAttestation - setStatus - 2/3 committees join 405.08 us/op
altair processAttestation - setStatus - 4/5 committees join 558.28 us/op
altair processAttestation - setStatus - 100% committees join 679.97 us/op
altair processBlock - 250000 vs - 7PWei normalcase 3.3763 ms/op
altair processBlock - 250000 vs - 7PWei normalcase hashState 16.037 ms/op
altair processBlock - 250000 vs - 7PWei worstcase 23.915 ms/op
altair processBlock - 250000 vs - 7PWei worstcase hashState 53.380 ms/op
phase0 processBlock - 250000 vs - 7PWei normalcase 1.5344 ms/op
phase0 processBlock - 250000 vs - 7PWei worstcase 18.995 ms/op
altair processEth1Data - 250000 vs - 7PWei normalcase 385.29 us/op
getExpectedWithdrawals 250000 eb:1,eth1:1,we:0,wn:0,smpl:16 6.1300 us/op
getExpectedWithdrawals 250000 eb:0.95,eth1:0.1,we:0.05,wn:0,smpl:220 38.856 us/op
getExpectedWithdrawals 250000 eb:0.95,eth1:0.3,we:0.05,wn:0,smpl:43 10.522 us/op
getExpectedWithdrawals 250000 eb:0.95,eth1:0.7,we:0.05,wn:0,smpl:19 6.3280 us/op
getExpectedWithdrawals 250000 eb:0.1,eth1:0.1,we:0,wn:0,smpl:1021 140.45 us/op
getExpectedWithdrawals 250000 eb:0.03,eth1:0.03,we:0,wn:0,smpl:11778 1.8384 ms/op
getExpectedWithdrawals 250000 eb:0.01,eth1:0.01,we:0,wn:0,smpl:16384 2.3357 ms/op
getExpectedWithdrawals 250000 eb:0,eth1:0,we:0,wn:0,smpl:16384 2.4853 ms/op
getExpectedWithdrawals 250000 eb:0,eth1:0,we:0,wn:0,nocache,smpl:16384 4.6909 ms/op
getExpectedWithdrawals 250000 eb:0,eth1:1,we:0,wn:0,smpl:16384 2.5678 ms/op
getExpectedWithdrawals 250000 eb:0,eth1:1,we:0,wn:0,nocache,smpl:16384 4.4622 ms/op
Tree 40 250000 create 360.57 ms/op
Tree 40 250000 get(125000) 121.22 ns/op
Tree 40 250000 set(125000) 1.2010 us/op
Tree 40 250000 toArray() 14.511 ms/op
Tree 40 250000 iterate all - toArray() + loop 12.644 ms/op
Tree 40 250000 iterate all - get(i) 42.364 ms/op
Array 250000 create 2.4246 ms/op
Array 250000 clone - spread 788.00 us/op
Array 250000 get(125000) 0.34200 ns/op
Array 250000 set(125000) 0.42000 ns/op
Array 250000 iterate all - loop 61.406 us/op
phase0 afterProcessEpoch - 250000 vs - 7PWei 40.258 ms/op
Array.fill - length 1000000 2.7506 ms/op
Array push - length 1000000 9.4676 ms/op
Array.get 0.21136 ns/op
Uint8Array.get 0.21713 ns/op
phase0 beforeProcessEpoch - 250000 vs - 7PWei 15.919 ms/op
altair processEpoch - mainnet_e81889 225.53 ms/op
mainnet_e81889 - altair beforeProcessEpoch 15.323 ms/op
mainnet_e81889 - altair processJustificationAndFinalization 5.9290 us/op
mainnet_e81889 - altair processInactivityUpdates 3.7325 ms/op
mainnet_e81889 - altair processRewardsAndPenalties 17.702 ms/op
mainnet_e81889 - altair processRegistryUpdates 627.00 ns/op
mainnet_e81889 - altair processSlashings 165.00 ns/op
mainnet_e81889 - altair processEth1DataReset 204.00 ns/op
mainnet_e81889 - altair processEffectiveBalanceUpdates 1.8341 ms/op
mainnet_e81889 - altair processSlashingsReset 789.00 ns/op
mainnet_e81889 - altair processRandaoMixesReset 996.00 ns/op
mainnet_e81889 - altair processHistoricalRootsUpdate 160.00 ns/op
mainnet_e81889 - altair processParticipationFlagUpdates 494.00 ns/op
mainnet_e81889 - altair processSyncCommitteeUpdates 125.00 ns/op
mainnet_e81889 - altair afterProcessEpoch 43.134 ms/op
capella processEpoch - mainnet_e217614 831.54 ms/op
mainnet_e217614 - capella beforeProcessEpoch 71.323 ms/op
mainnet_e217614 - capella processJustificationAndFinalization 5.3010 us/op
mainnet_e217614 - capella processInactivityUpdates 15.848 ms/op
mainnet_e217614 - capella processRewardsAndPenalties 97.340 ms/op
mainnet_e217614 - capella processRegistryUpdates 5.5700 us/op
mainnet_e217614 - capella processSlashings 165.00 ns/op
mainnet_e217614 - capella processEth1DataReset 175.00 ns/op
mainnet_e217614 - capella processEffectiveBalanceUpdates 19.864 ms/op
mainnet_e217614 - capella processSlashingsReset 813.00 ns/op
mainnet_e217614 - capella processRandaoMixesReset 1.0520 us/op
mainnet_e217614 - capella processHistoricalRootsUpdate 173.00 ns/op
mainnet_e217614 - capella processParticipationFlagUpdates 501.00 ns/op
mainnet_e217614 - capella afterProcessEpoch 112.83 ms/op
phase0 processEpoch - mainnet_e58758 229.36 ms/op
mainnet_e58758 - phase0 beforeProcessEpoch 46.429 ms/op
mainnet_e58758 - phase0 processJustificationAndFinalization 5.1070 us/op
mainnet_e58758 - phase0 processRewardsAndPenalties 17.281 ms/op
mainnet_e58758 - phase0 processRegistryUpdates 2.6670 us/op
mainnet_e58758 - phase0 processSlashings 171.00 ns/op
mainnet_e58758 - phase0 processEth1DataReset 167.00 ns/op
mainnet_e58758 - phase0 processEffectiveBalanceUpdates 1.0733 ms/op
mainnet_e58758 - phase0 processSlashingsReset 912.00 ns/op
mainnet_e58758 - phase0 processRandaoMixesReset 1.0580 us/op
mainnet_e58758 - phase0 processHistoricalRootsUpdate 167.00 ns/op
mainnet_e58758 - phase0 processParticipationRecordUpdates 938.00 ns/op
mainnet_e58758 - phase0 afterProcessEpoch 34.420 ms/op
phase0 processEffectiveBalanceUpdates - 250000 normalcase 1.7470 ms/op
phase0 processEffectiveBalanceUpdates - 250000 worstcase 0.5 8.9677 ms/op
altair processInactivityUpdates - 250000 normalcase 12.017 ms/op
altair processInactivityUpdates - 250000 worstcase 12.055 ms/op
phase0 processRegistryUpdates - 250000 normalcase 4.8420 us/op
phase0 processRegistryUpdates - 250000 badcase_full_deposits 221.00 us/op
phase0 processRegistryUpdates - 250000 worstcase 0.5 70.196 ms/op
altair processRewardsAndPenalties - 250000 normalcase 16.519 ms/op
altair processRewardsAndPenalties - 250000 worstcase 16.040 ms/op
phase0 getAttestationDeltas - 250000 normalcase 6.8066 ms/op
phase0 getAttestationDeltas - 250000 worstcase 5.9204 ms/op
phase0 processSlashings - 250000 worstcase 76.561 us/op
altair processSyncCommitteeUpdates - 250000 10.956 ms/op
BeaconState.hashTreeRoot - No change 204.00 ns/op
BeaconState.hashTreeRoot - 1 full validator 70.331 us/op
BeaconState.hashTreeRoot - 32 full validator 1.1641 ms/op
BeaconState.hashTreeRoot - 512 full validator 8.4271 ms/op
BeaconState.hashTreeRoot - 1 validator.effectiveBalance 124.94 us/op
BeaconState.hashTreeRoot - 32 validator.effectiveBalance 1.6534 ms/op
BeaconState.hashTreeRoot - 512 validator.effectiveBalance 16.327 ms/op
BeaconState.hashTreeRoot - 1 balances 69.672 us/op
BeaconState.hashTreeRoot - 32 balances 874.37 us/op
BeaconState.hashTreeRoot - 512 balances 5.8877 ms/op
BeaconState.hashTreeRoot - 250000 balances 130.61 ms/op
aggregationBits - 2048 els - zipIndexesInBitList 20.734 us/op
regular array get 100000 times 24.521 us/op
wrappedArray get 100000 times 24.615 us/op
arrayWithProxy get 100000 times 18.508 ms/op
ssz.Root.equals 23.986 ns/op
byteArrayEquals 23.322 ns/op
Buffer.compare 10.082 ns/op
processSlot - 1 slots 10.163 us/op
processSlot - 32 slots 1.9775 ms/op
getEffectiveBalanceIncrementsZeroInactive - 250000 vs - 7PWei 2.9709 ms/op
getCommitteeAssignments - req 1 vs - 250000 vc 1.8637 ms/op
getCommitteeAssignments - req 100 vs - 250000 vc 3.6268 ms/op
getCommitteeAssignments - req 1000 vs - 250000 vc 3.8714 ms/op
findModifiedValidators - 10000 modified validators 461.06 ms/op
findModifiedValidators - 1000 modified validators 528.08 ms/op
findModifiedValidators - 100 modified validators 293.36 ms/op
findModifiedValidators - 10 modified validators 173.52 ms/op
findModifiedValidators - 1 modified validators 164.09 ms/op
findModifiedValidators - no difference 158.83 ms/op
migrate state 1500000 validators, 3400 modified, 2000 new 1.1281 s/op
RootCache.getBlockRootAtSlot - 250000 vs - 7PWei 4.2500 ns/op
state getBlockRootAtSlot - 250000 vs - 7PWei 627.58 ns/op
computeProposerIndex 100000 validators 1.5307 ms/op
getNextSyncCommitteeIndices 1000 validators 116.47 ms/op
getNextSyncCommitteeIndices 10000 validators 116.34 ms/op
getNextSyncCommitteeIndices 100000 validators 116.67 ms/op
computeProposers - vc 250000 655.35 us/op
computeEpochShuffling - vc 250000 40.745 ms/op
getNextSyncCommittee - vc 250000 10.411 ms/op
nodejs block root to RootHex using toHex 136.32 ns/op
nodejs block root to RootHex using toRootHex 94.762 ns/op
nodejs fromHex(blob) 195.28 us/op
nodejs fromHexInto(blob) 721.03 us/op
nodejs block root to RootHex using the deprecated toHexString 206.90 ns/op
browser block root to RootHex using toHex 161.37 ns/op
browser block root to RootHex using toRootHex 151.46 ns/op
browser fromHex(blob) 1.2133 ms/op
browser fromHexInto(blob) 693.81 us/op
browser block root to RootHex using the deprecated toHexString 558.26 ns/op

by benchmarkbot/action
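Among the hex-conversion rows above, `toRootHex` beats the generic `toHex` and the deprecated `toHexString` largely because a fixed 32-byte root allows a tight loop over a precomputed byte-to-hex table. A hedged sketch of that table-lookup technique (the general idea, not Lodestar's exact implementation):

```typescript
// Precompute the two-character hex string for every possible byte value once;
// encoding a root is then just one table lookup per byte, with no per-call
// formatting work.
const byteToHex: string[] = new Array(256);
for (let i = 0; i < 256; i++) byteToHex[i] = i.toString(16).padStart(2, "0");

function rootToHex(root: Uint8Array): string {
  // Roots are always 32 bytes; a fixed, small length keeps this loop cheap.
  let out = "0x";
  for (let i = 0; i < root.length; i++) out += byteToHex[root[i]];
  return out;
}

const root = new Uint8Array(32).fill(0xab);
console.log(rootToHex(root).slice(0, 6)); // "0xabab"
```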


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces significant changes, primarily migrating the project's package manager from Yarn to pnpm and completely removing the Eth1 deposit tracking and merge block tracking logic from the beacon node. This aligns with the current state of Ethereum, where all networks are post-merge, simplifying the codebase. Additionally, there's a substantial refactoring of configuration and pubkey cache passing, making these dependencies more explicit across various functions. The changes also include updates to GitHub Actions workflows, documentation, and Grafana dashboards to reflect these architectural shifts. Overall, these are positive changes that streamline the project and remove deprecated functionality.

Comment on lines 9 to +10
} from "@lodestar/fork-choice";
-import {ForkSeq, SAFE_SLOTS_TO_IMPORT_OPTIMISTICALLY} from "@lodestar/params";
+import {ForkSeq} from "@lodestar/params";

high

The removal of assertValidTerminalPowBlock and SAFE_SLOTS_TO_IMPORT_OPTIMISTICALLY imports signifies a major architectural change, as the beacon node no longer performs these Eth1-related merge checks. This simplifies the execution payload verification process, assuming all networks are post-merge.

Comment on lines +208 to 211
// Post-merge, we're always safe to optimistically import
case ExecutionPayloadStatus.ACCEPTED:
case ExecutionPayloadStatus.SYNCING: {
// Check if the entire segment was deemed safe or, this block specifically itself if not in
// the safeSlotsToImportOptimistically window of current slot, then we can import else
// we need to throw and not import his block
const safeSlotsToImportOptimistically =
opts.safeSlotsToImportOptimistically ?? SAFE_SLOTS_TO_IMPORT_OPTIMISTICALLY;
if (!isOptimisticallySafe && blockInput.slot + safeSlotsToImportOptimistically >= currentSlot) {
const execError = new BlockError(block, {
code: BlockErrorCode.EXECUTION_ENGINE_ERROR,
execStatus: ExecutionPayloadStatus.UNSAFE_OPTIMISTIC_STATUS,
errorMessage: `not safe to import ${execResult.status} payload within ${opts.safeSlotsToImportOptimistically} of currentSlot`,
});
return {executionStatus: null, execError} as VerifyBlockExecutionResponse;
}

case ExecutionPayloadStatus.SYNCING:
return {executionStatus: ExecutionStatus.Syncing, execError: null};

high

The removal of the optimistic import check for ACCEPTED and SYNCING statuses, along with the UNSAFE_OPTIMISTIC_STATUS error, is a direct consequence of assuming all networks are post-merge. This simplifies the logic and removes unnecessary complexity.

Comment on lines 90 to +103
// will either validate or prune invalid blocks
//
// We need to track and keep updating if its safe to optimistically import these blocks.
// The following is how we determine for a block if its safe:
//
// (but we need to modify this check for this segment of blocks because it checks if the
// parent of any block imported in forkchoice is post-merge and currently we could only
// have blocks[0]'s parent imported in the chain as this is no longer one by one verify +
// import.)
//
//
// When to import such blocks:
// From: https://github.com/ethereum/consensus-specs/pull/2844
// A block MUST NOT be optimistically imported, unless either of the following
// conditions are met:
//
// 1. Parent of the block has execution
//
// Since with the sync optimizations, the previous block might not have been in the
// forkChoice yet, so the below check could fail for safeSlotsToImportOptimistically
//
// Luckily, we can depend on the preState0 to see if we are already post merge w.r.t
// the blocks we are importing.
//
// Or in other words if
// - block status is syncing
// - and we are not in a post merge world and is parent is not optimistically safe
// - and we are syncing close to the chain head i.e. clock slot
// - and parent is optimistically safe
//
// then throw error
//
//
// - if we haven't yet imported a post merge ancestor in forkchoice i.e.
// - and we are syncing close to the clockSlot, i.e. merge Transition could be underway
//
//
// 2. The current slot (as per the system clock) is at least
// SAFE_SLOTS_TO_IMPORT_OPTIMISTICALLY ahead of the slot of the block being
// imported.
// This means that the merge transition could be underway and we can't afford to import
// a block which is not fully validated as it could affect liveliness of the network.
//
//
// For this segment of blocks:
// We are optimistically safe with respect to this entire block segment if:
// - all the blocks are way behind the current slot
// - or we have already imported a post-merge parent of first block of this chain in forkchoice
const currentSlot = chain.clock.currentSlot;
const safeSlotsToImportOptimistically = opts.safeSlotsToImportOptimistically ?? SAFE_SLOTS_TO_IMPORT_OPTIMISTICALLY;
let isOptimisticallySafe =
parentBlock.executionStatus !== ExecutionStatus.PreMerge ||
lastBlock.slot + safeSlotsToImportOptimistically < currentSlot;

for (let blockIndex = 0; blockIndex < blockInputs.length; blockIndex++) {
const blockInput = blockInputs[blockIndex];
// If blocks are invalid in consensus the main promise could resolve before this loop ends.
// In that case stop sending blocks to execution engine
if (signal.aborted) {
throw new ErrorAborted("verifyBlockExecutionPayloads");
}
const verifyResponse = await verifyBlockExecutionPayload(
chain,
blockInput,
preState0,
opts,
isOptimisticallySafe,
currentSlot
);
const verifyResponse = await verifyBlockExecutionPayload(chain, blockInput, preState0);

high

The removal of complex logic related to isOptimisticallySafe and SAFE_SLOTS_TO_IMPORT_OPTIMISTICALLY simplifies the execution payload verification process. This change reflects the assumption that all networks are now post-merge, making these optimistic import conditions unnecessary.

import {bellatrix, electra} from "@lodestar/types";
import {ErrorAborted, Logger, toRootHex} from "@lodestar/utils";
-import {IEth1ForBlockProduction} from "../../eth1/index.js";
import {ExecutionPayloadStatus, IExecutionEngine} from "../../execution/engine/interface.js";

high

Removing the IEth1ForBlockProduction import and eth1 property from VerifyBlockExecutionPayloadModules confirms that the execution payload verification process is now entirely decoupled from Eth1 interaction.

Comment on lines 18 to 19
import {CAPELLA_OWL_BANNER} from "./utils/ownBanner.js";
-import {POS_PANDA_MERGE_TRANSITION_BANNER} from "./utils/pandaMergeTransitionBanner.js";
import {FULU_ZEBRA_BANNER} from "./utils/zebraBanner.js";

medium

The removal of POS_PANDA_MERGE_TRANSITION_BANNER and logOnPowBlock function indicates a complete removal of Eth1 merge transition logging, which is consistent with the overall deprecation of Eth1 merge tracking.

"wideLayout": true
},
-"pluginVersion": "9.3.2",
+"pluginVersion": "10.4.1",

medium

Updating the pluginVersion from 9.3.2 to 10.4.1 indicates a significant Grafana version upgrade. Ensure all dashboard features and queries are compatible with the new version.

Comment on lines 136 to 139
execAborted: null,
executionStatuses,
executionTime,
-mergeBlockFound,
};

medium

The removal of mergeBlockFound from the returned SegmentExecStatus is consistent with the overall deprecation of Eth1 merge tracking. This streamlines the output of the execution payload verification.

"mode": "palette-classic"
},
"custom": {
+"axisBorderShow": false,

medium

Adding axisBorderShow: false to custom options for Grafana panels is a good visual consistency improvement, especially if it aligns with the overall dashboard design.

Comment on lines +19 to 21
+config: BeaconConfig,
+index2pubkey: Index2PubkeyCache,
bls: IBlsVerifier,

medium

Adding config and index2pubkey as explicit parameters to verifyBlocksSignatures is a good refactoring. It makes the function's dependencies clear and improves testability by allowing these to be easily mocked or provided.

export async function verifyBlocksSignatures(
  config: BeaconConfig,
  index2pubkey: Index2PubkeyCache,
  bls: IBlsVerifier,
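As the review note says, threading `config` and `index2pubkey` through as parameters (rather than reaching into the cached state) makes every dependency mockable. A generic sketch of the pattern, using hypothetical stand-in types rather than Lodestar's real `BeaconConfig` / `Index2PubkeyCache` / `IBlsVerifier`:

```typescript
// Hypothetical stand-ins for the real types -- only their shape matters for
// illustrating the dependency-injection pattern.
type Config = {forkName: (slot: number) => string};
type Index2Pubkey = Map<number, Uint8Array>;
interface BlsVerifier {
  verifySignatureSets(sets: unknown[]): Promise<boolean>;
}

// Because every dependency is a parameter, a test can pass trivial fakes
// instead of constructing a full beacon chain or cached beacon state.
async function verifyBlocksSignatures(
  config: Config,
  index2pubkey: Index2Pubkey,
  bls: BlsVerifier,
  blocks: {slot: number; proposerIndex: number}[]
): Promise<boolean> {
  for (const block of blocks) {
    // Proposer pubkey must be known before attempting signature verification.
    if (!index2pubkey.has(block.proposerIndex)) return false;
  }
  // One batched verification call covering all blocks' signature sets.
  return bls.verifySignatureSets(blocks);
}

// Usage in a test: fakes stand in for chain-level caches and the BLS pool.
const fakeBls: BlsVerifier = {verifySignatureSets: async () => true};
const cache: Index2Pubkey = new Map([[0, new Uint8Array(48)]]);
verifyBlocksSignatures({forkName: () => "electra"}, cache, fakeBls, [
  {slot: 1, proposerIndex: 0},
]).then((ok) => console.log(ok)); // true
```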

Comment on lines +46 to +54
config,
index2pubkey,
currentSyncCommitteeIndexed,
block,
indexedAttestationsByBlock[i],
{
skipProposerSignature: opts.validProposerSignature,
}
)

medium

Passing config, index2pubkey, and currentSyncCommitteeIndexed explicitly to getBlockSignatureSets improves the function's purity and makes its dependencies explicit. This is a good step towards better modularity.

        bls.verifySignatureSets(
          getBlockSignatureSets(
            config,
            index2pubkey,
            currentSyncCommitteeIndexed,
            block,
            indexedAttestationsByBlock[i],
            {
              skipProposerSignature: opts.validProposerSignature,
            }
