feat: add flat file storage for blobs and data columns #8899

Draft
wemeetagain wants to merge 8 commits into unstable from research-flat-file-storage

Conversation


@wemeetagain wemeetagain commented Feb 12, 2026

Move blob sidecar and data column storage from LevelDB to the filesystem, inspired by Prysm's approach. Files are stored as:

blob_sidecars/<padSlot>/0x<rootHex>.ssz
data_columns/<padSlot>/0x<rootHex>.dcol

Key design decisions

  • Atomic writes via .part temp file + datasync + rename for crash safety
  • .dcol binary format with 149-byte header, 128-bit bitmap, per-column Snappy compression, and offset table for O(1) random column access
  • Targeted fd.read() for column reads — only reads header (149B) + offset table (~516B) + requested columns (~5-8 KB each) instead of the full ~1 MB file
  • In-memory ExistenceCache tracks (slot, root) presence for blobs and 128-bit column bitmaps, doubling as the slot→root index for finalized canonical lookups
  • Per-root write locking via promise chaining for concurrent column merges
  • Feature flag --chain.flatFileStorage (default: true)
  • No hot/cold migration when enabled — data is written to its final location; pruning via directory-level deletion
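
The atomic write pattern in the first bullet can be sketched roughly as follows (function and variable names here are illustrative, not the PR's actual API):

```typescript
import {open, rename} from "node:fs/promises";

// Crash-safe write: stage bytes in a `.part` file, flush data to disk
// with fdatasync, then rename into the final location. rename() is atomic
// on POSIX filesystems, so readers observe either the old file or the
// complete new file, never a partial write.
async function atomicWrite(finalPath: string, data: Uint8Array): Promise<void> {
  const partPath = `${finalPath}.part`;
  const fd = await open(partPath, "w");
  try {
    await fd.write(data);
    // datasync flushes file contents without forcing a full metadata sync
    await fd.datasync();
  } finally {
    await fd.close();
  }
  await rename(partPath, finalPath);
}
```

On startup, any leftover `.part` files can simply be deleted: they are, by construction, incomplete writes that were never made visible.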

.dcol format layout

Header (149 bytes):
  [version: 1B = 0x01][_reserved: 4B][bitmap: 16B][block_root: 32B][slot: 8B BE][reserved: 88B]

Offset table ((N+1) * 4 bytes, N = popcount(bitmap)):
  [offset_0: 4B BE] ... [offset_N: 4B BE]  (sentinel = total data region size)

Data region:
  [snappy(col_0)][snappy(col_1)]...[snappy(col_N-1)]

Each column is independently Snappy block-compressed (~40-50% size reduction). Reading a single column decompresses only that column's ~5-8 KB, not the full file.
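
The O(1) random access works by ranking set bits in the bitmap: a column's slot in the offset table is the number of present columns with a lower index. A minimal sketch (assuming LSB-first bit order within each byte; the actual format may order bits differently):

```typescript
// Count set bits strictly below `bit` within one byte (Kernighan popcount).
function popcountBelow(byteVal: number, bit: number): number {
  let v = byteVal & ((1 << bit) - 1);
  let count = 0;
  while (v) { v &= v - 1; count++; }
  return count;
}

// Given the 16-byte presence bitmap, return the offset-table slot holding
// column `index`, or null if the column is absent. The column's compressed
// bytes then span [offsets[rank], offsets[rank + 1]) in the data region.
function columnRank(bitmap: Uint8Array, index: number): number | null {
  const byte = index >> 3;
  const bit = index & 7;
  if ((bitmap[byte] & (1 << bit)) === 0) return null; // column not present
  let rank = 0;
  for (let b = 0; b < byte; b++) {
    let v = bitmap[b];
    while (v) { v &= v - 1; rank++; }
  }
  return rank + popcountBelow(bitmap[byte], bit);
}
```

With the rank in hand, a read needs only the header, the two bracketing offsets, and the column's own compressed bytes, which is what makes the targeted `fd.read()` strategy above possible.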

Disk usage impact

| Metric | LevelDB | Flat file |
| --- | --- | --- |
| Per block (128 columns) | ~1.69 MB | ~0.85-1.1 MB (Snappy compressed) |
| Daily (7,200 slots) | ~11.9 GB | ~7 GB |
| 18-day retention | ~216 GB | ~125 GB |
| Write amplification | ~15-50x (LSM compaction) | ~1x |
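
The daily and retention figures follow directly from per-block size times slot count; a back-of-the-envelope check (illustrative arithmetic only, using the midpoint of the compressed range):

```typescript
// 86,400 seconds per day at 12-second slots
const slotsPerDay = 7200;
const levelDbPerBlockMB = 1.69;
const flatFilePerBlockMB = 1.0; // midpoint of the ~0.85-1.1 MB range

const levelDbDailyGB = (slotsPerDay * levelDbPerBlockMB) / 1024;   // ~11.9 GB
const flatFileDailyGB = (slotsPerDay * flatFilePerBlockMB) / 1024; // ~7.0 GB
const flatFile18DayGB = flatFileDailyGB * 18;                      // ~126 GB
```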

Files

  • packages/beacon-node/src/db/flatFileStore/ — all flat file storage implementation
  • packages/beacon-node/test/unit/db/flatFileStore/ — unit tests
  • docs/design-flat-file-storage.md — full design document

🤖 Generated with Claude Code

@wemeetagain wemeetagain requested a review from a team as a code owner February 12, 2026 22:25
@wemeetagain wemeetagain marked this pull request as draft February 12, 2026 22:25
@gemini-code-assist
Contributor

Summary of Changes

Hello @wemeetagain, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a fundamental shift in how blob sidecars and data columns are persisted, moving from a LevelDB-based key-value store to a direct filesystem-based flat file storage. This architectural change is designed to address performance bottlenecks, particularly around pruning and write amplification, and to streamline data handling for large, ephemeral data types. By leveraging the filesystem directly, the system gains efficiency in storage, retrieval, and lifecycle management, paving the way for improved node stability and responsiveness.

Highlights

  • Filesystem Storage for Blobs and Data Columns: Blob sidecars and data columns are now stored directly on the filesystem instead of in LevelDB, inspired by Prysm's approach. This change aims to improve performance and simplify data management for large, write-once, read-rarely data.
  • Custom .dcol Binary Format: A new custom binary format (.dcol) has been introduced for data columns. It features a fixed-size 149-byte header and a 128-bit bitmap, enabling O(1) random access to individual columns and supporting incremental writes.
  • Atomic Writes and Crash Safety: All file writes for blobs and data columns now employ an atomic write pattern using .part temporary files, datasync, and rename operations. This ensures crash-safe writes, preventing data corruption during unexpected process termination.
  • In-Memory Existence Cache: An ExistenceCache has been implemented to track the presence of blobs and column bitmaps in memory. This significantly reduces filesystem stat() and open() calls, preventing thousands of unnecessary syscalls per second and improving read performance.
  • Performance Improvements: The new flat file storage system offers substantial performance gains, including 10-100x faster pruning, a drastic reduction in write amplification (from ~15-50x to ~1x), lower disk usage overhead, and faster read latencies compared to LevelDB.
  • Simplified Archive Pipeline and Migration Path: The archive pipeline has been simplified as data is written directly to its final location, eliminating the need for hot/cold migration. A feature flag (--chain.flatFileStorage) and a phased migration plan are included for a safe transition from LevelDB.
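
The ExistenceCache highlighted above can be sketched as a two-level map that OR-accumulates column bitmaps per (slot, root) — class and method names here are illustrative, not the PR's actual API:

```typescript
// In-memory presence tracker: answers "do we have column i for this
// block?" without touching the filesystem, and doubles as the
// slot -> root index since keys are grouped by slot.
class ExistenceCacheSketch {
  // slot -> rootHex -> 16-byte (128-bit) column presence bitmap
  private columns = new Map<number, Map<string, Uint8Array>>();

  addColumns(slot: number, rootHex: string, bitmap: Uint8Array): void {
    let bySlot = this.columns.get(slot);
    if (!bySlot) this.columns.set(slot, (bySlot = new Map()));
    const existing = bySlot.get(rootHex);
    if (existing) {
      // incremental merge: OR newly stored columns into the known set
      for (let i = 0; i < existing.length; i++) existing[i] |= bitmap[i];
    } else {
      bySlot.set(rootHex, bitmap.slice());
    }
  }

  hasColumn(slot: number, rootHex: string, index: number): boolean {
    const bm = this.columns.get(slot)?.get(rootHex);
    return bm !== undefined && (bm[index >> 3] & (1 << (index & 7))) !== 0;
  }

  // Pruning a finalized slot drops all of its entries at once, mirroring
  // the directory-level deletion on disk.
  pruneSlot(slot: number): void {
    this.columns.delete(slot);
  }
}
```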


Changelog
  • docs/design-flat-file-storage.md
    • Added a comprehensive design document detailing the rationale, architecture, file formats, write/read paths, caching, pruning, migration, and performance analysis of the new flat file storage system.
  • packages/beacon-node/src/chain/archiveStore/utils/archiveBlocks.ts
    • Modified archive block logic to conditionally use the new flat file store for blob and data column pruning and non-canonical deletion, bypassing LevelDB operations when the feature is enabled.
  • packages/beacon-node/src/chain/blocks/writeBlockInputToDb.ts
    • Updated block input writing to conditionally store blob sidecars and data columns in the flat file store, utilizing binary passthrough for efficiency, or falling back to LevelDB if the flat file store is not active.
  • packages/beacon-node/src/chain/chain.ts
    • Modified BeaconChain methods to retrieve blob sidecars and data columns from the new flat file store if it is enabled, otherwise falling back to LevelDB repositories.
  • packages/beacon-node/src/chain/options.ts
    • Added a flatFileStorage boolean option to IChainOptions with a default value of true, enabling the new storage system by default.
  • packages/beacon-node/src/db/beacon.ts
    • Introduced flatFileStore property to BeaconDb and added initFlatFileStore and modified close methods to manage its lifecycle. Updated pruneHotDb to use the flat file store's pruning mechanism if active.
  • packages/beacon-node/src/db/flatFileStore/atomicWrite.ts
    • Added utility functions for atomicWrite (ensuring crash-safe file operations), padSlot (for consistent directory naming), and cleanupPartFiles (to remove incomplete writes on startup).
  • packages/beacon-node/src/db/flatFileStore/blobStore.ts
    • Implemented BlobStore class to manage filesystem storage for blob sidecars, including methods for put, get, delete, streaming entries, and pruning, utilizing an internal write lock and the existence cache.
  • packages/beacon-node/src/db/flatFileStore/columnStore.ts
    • Implemented ColumnStore class for managing data column files, supporting binary put/get, incremental merging of columns, deletion, and pruning, with per-root write locking and cache integration.
  • packages/beacon-node/src/db/flatFileStore/dcolFormat.ts
    • Defined the .dcol binary format for data columns, including header structure, bitmap helpers, and functions for encoding, decoding, and merging column data within the file.
  • packages/beacon-node/src/db/flatFileStore/existenceCache.ts
    • Created ExistenceCache to maintain an in-memory record of blob and column presence, reducing disk I/O. Includes methods for setting/getting/removing entries and rebuilding the cache from disk.
  • packages/beacon-node/src/db/flatFileStore/flatFileStore.ts
    • Implemented FlatFileStore as the main entry point for the new storage system, coordinating BlobStore, ColumnStore, and ExistenceCache, and providing the IFlatFileStore interface.
  • packages/beacon-node/src/db/flatFileStore/index.ts
    • Exported FlatFileStore and IFlatFileStore for external use.
  • packages/beacon-node/src/db/flatFileStore/interface.ts
    • Defined the IFlatFileStore interface, outlining the contract for the new flat file storage system.
  • packages/beacon-node/src/db/index.ts
    • Exported the new FlatFileStore and IFlatFileStore from the main db module.
  • packages/beacon-node/src/db/interface.ts
    • Updated IBeaconDb to include an optional flatFileStore property and an initFlatFileStore method.
  • packages/beacon-node/src/network/reqresp/handlers/blobSidecarsByRange.ts
    • Modified the onBlobSidecarsByRange handler to fetch blob sidecars from the flatFileStore if enabled, optimizing data retrieval for both finalized and unfinalized ranges.
  • packages/beacon-node/src/network/reqresp/handlers/dataColumnSidecarsByRange.ts
    • Modified the onDataColumnSidecarsByRange handler to fetch data columns from the flatFileStore if enabled, improving efficiency for range requests.
  • packages/beacon-node/src/node/nodejs.ts
    • Added logic to initialize the flatFileStore during beacon node startup if the flatFileStorage option is enabled.
  • packages/beacon-node/test/mocks/mockedBeaconDb.ts
    • Updated the mocked BeaconDb to include a flatFileStore property for testing purposes.
  • packages/beacon-node/test/unit/chain/archiveStore/blockArchiver.test.ts
    • Added a new test case to verify that the blockArchiver correctly prunes blobs and columns using the flatFileStore's pruning methods.
  • packages/beacon-node/test/unit/db/flatFileStore/atomicWrite.test.ts
    • Added unit tests for the atomicWrite, cleanupPartFiles, and padSlot utility functions, ensuring their correctness and crash-safety.
  • packages/beacon-node/test/unit/db/flatFileStore/dcolFormat.test.ts
    • Added comprehensive unit tests for the .dcol binary format, covering bitmap operations, header encoding/decoding, column offset calculation, and full file encoding/merging.
  • packages/beacon-node/test/unit/db/flatFileStore/existenceCache.test.ts
    • Added unit tests for the ExistenceCache class, verifying its ability to track blob and column presence, accumulate column bitmaps, and evict old entries.
  • packages/beacon-node/test/unit/db/flatFileStore/flatFileStore.test.ts
    • Added integration tests for the FlatFileStore, covering the put/get/delete operations for blobs and columns, incremental column merging, pruning, non-canonical deletion, and cache rebuilding after a simulated restart.
  • packages/beacon-node/test/unit/db/flatFileStore/flatFileStoreHandlers.test.ts
    • Added integration tests to ensure that the onBlobSidecarsByRange and onDataColumnSidecarsByRange network handlers correctly interact with the new FlatFileStore for serving data.
  • packages/cli/src/options/beaconNodeOptions/chain.ts
    • Added a new CLI option --chain.flatFileStorage to enable or disable the flat file storage feature, with a default value of true.
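
The per-root write locking mentioned for columnStore.ts can be sketched as promise chaining — each key's writes are serialized by chaining onto the previous write's promise (names are illustrative, not the PR's actual code):

```typescript
// Serialize async writes per key: concurrent column merges for the same
// block root never interleave, while writes for different roots proceed
// in parallel.
class WriteLockSketch {
  private tails = new Map<string, Promise<unknown>>();

  withLock<T>(key: string, fn: () => Promise<T>): Promise<T> {
    const prev = this.tails.get(key) ?? Promise.resolve();
    // run fn only after the previous write for this key settles,
    // regardless of whether it succeeded or failed
    const next = prev.then(fn, fn);
    // store a rejection-swallowing tail so one failed write
    // does not wedge the chain for subsequent writers
    this.tails.set(key, next.catch(() => {}));
    return next;
  }
}
```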
Activity
  • The initial implementation of this feature was generated using Claude Code.

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request introduces a major feature: flat file storage for blob sidecars and data columns, moving away from LevelDB for this data. The changes are extensive, including a detailed design document, the core implementation of the flat file store with atomic writes and a custom binary format for data columns, and integration into the existing chain logic, database layer, and network handlers. The implementation is well-structured, modular, and includes a comprehensive set of unit and integration tests. My review found one area for optimization in the existence cache rebuilding logic. Overall, this is a high-quality contribution.

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 4865fb1d93


Comment on lines +806 to +809
if (wrapper) {
  return wrapper.slice(BLOB_SIDECARS_IN_WRAPPER_INDEX);
}
return null;


P1: Add LevelDB fallback when flat file lookup misses

When flatFileStore is enabled, this path returns early and never falls back to blobSidecars/blobSidecarsArchive (and the same pattern is used for data columns), so upgraded nodes with pre-existing sidecars in LevelDB but no flat-file copies will suddenly serve null for historical data. Because flatFileStorage is now default-on, this creates an upgrade regression unless a migration or fallback read path is added.
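
A hedged sketch of the fallback read path this comment suggests (all names and signatures here are hypothetical, not the PR's actual interfaces):

```typescript
// Try the flat file store first; on a miss, fall back to the LevelDB
// repositories so data written before the flat-file migration is still
// served to peers.
interface BinaryReader {
  getBinary(blockRoot: Uint8Array): Promise<Uint8Array | null>;
}

async function getBlobSidecarsBytesWithFallback(
  flatFileStore: BinaryReader | undefined,
  levelDbBlobSidecars: BinaryReader,
  blockRoot: Uint8Array
): Promise<Uint8Array | null> {
  if (flatFileStore) {
    const fromFlat = await flatFileStore.getBinary(blockRoot);
    if (fromFlat !== null) return fromFlat;
    // miss: the data may predate the flat-file migration
  }
  return levelDbBlobSidecars.getBinary(blockRoot);
}
```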


Comment on lines +29 to +30
if (blobRoots) {
  for (const root of blobRoots) return root;


P2: Resolve slot lookups to canonical root, not arbitrary root

getAnyRootForSlot() returns the first root in an in-memory set/map, which is insertion-order dependent and not canonical-aware. Finalized by-slot handlers use this helper via getBinaryBySlot() / getColumnsBinaryBySlot(), so if a slot temporarily contains both canonical and orphan files (e.g., after reorgs or restart before cleanup), the node can serve sidecars for the wrong block root.


@github-actions
Contributor

github-actions bot commented Feb 12, 2026

Performance Report

✔️ no performance regression detected

Full benchmark results
Benchmark suite Current: 27d9443 Previous: 4e115ed Ratio
getPubkeys - index2pubkey - req 1000 vs - 250000 vc 809.38 us/op 1.0599 ms/op 0.76
getPubkeys - validatorsArr - req 1000 vs - 250000 vc 31.100 us/op 37.275 us/op 0.83
BLS verify - blst 861.35 us/op 1.0959 ms/op 0.79
BLS verifyMultipleSignatures 3 - blst 1.2615 ms/op 1.2095 ms/op 1.04
BLS verifyMultipleSignatures 8 - blst 2.1853 ms/op 1.8487 ms/op 1.18
BLS verifyMultipleSignatures 32 - blst 4.3915 ms/op 5.5028 ms/op 0.80
BLS verifyMultipleSignatures 64 - blst 8.1453 ms/op 10.514 ms/op 0.77
BLS verifyMultipleSignatures 128 - blst 15.635 ms/op 17.119 ms/op 0.91
BLS deserializing 10000 signatures 614.74 ms/op 680.29 ms/op 0.90
BLS deserializing 100000 signatures 6.1554 s/op 6.7922 s/op 0.91
BLS verifyMultipleSignatures - same message - 3 - blst 959.76 us/op 1.3436 ms/op 0.71
BLS verifyMultipleSignatures - same message - 8 - blst 1.0851 ms/op 1.6022 ms/op 0.68
BLS verifyMultipleSignatures - same message - 32 - blst 1.6504 ms/op 1.6891 ms/op 0.98
BLS verifyMultipleSignatures - same message - 64 - blst 2.4797 ms/op 2.5604 ms/op 0.97
BLS verifyMultipleSignatures - same message - 128 - blst 4.1611 ms/op 4.3054 ms/op 0.97
BLS aggregatePubkeys 32 - blst 17.548 us/op 18.959 us/op 0.93
BLS aggregatePubkeys 128 - blst 62.101 us/op 67.731 us/op 0.92
getSlashingsAndExits - default max 40.063 us/op 64.378 us/op 0.62
getSlashingsAndExits - 2k 307.14 us/op 323.84 us/op 0.95
isKnown best case - 1 super set check 405.00 ns/op 206.00 ns/op 1.97
isKnown normal case - 2 super set checks 410.00 ns/op 202.00 ns/op 2.03
isKnown worse case - 16 super set checks 401.00 ns/op 217.00 ns/op 1.85
validate api signedAggregateAndProof - struct 1.9139 ms/op 2.1545 ms/op 0.89
validate gossip signedAggregateAndProof - struct 1.9059 ms/op 2.8872 ms/op 0.66
batch validate gossip attestation - vc 640000 - chunk 32 106.30 us/op 120.33 us/op 0.88
batch validate gossip attestation - vc 640000 - chunk 64 90.555 us/op 105.28 us/op 0.86
batch validate gossip attestation - vc 640000 - chunk 128 84.752 us/op 98.699 us/op 0.86
batch validate gossip attestation - vc 640000 - chunk 256 81.264 us/op 98.257 us/op 0.83
bytes32 toHexString 497.00 ns/op 407.00 ns/op 1.22
bytes32 Buffer.toString(hex) 392.00 ns/op 256.00 ns/op 1.53
bytes32 Buffer.toString(hex) from Uint8Array 451.00 ns/op 338.00 ns/op 1.33
bytes32 Buffer.toString(hex) + 0x 388.00 ns/op 286.00 ns/op 1.36
Return object 10000 times 0.22170 ns/op 0.25380 ns/op 0.87
Throw Error 10000 times 3.2745 us/op 4.2396 us/op 0.77
toHex 100.74 ns/op 140.69 ns/op 0.72
Buffer.from 92.853 ns/op 128.93 ns/op 0.72
shared Buffer 62.457 ns/op 75.957 ns/op 0.82
fastMsgIdFn sha256 / 200 bytes 1.7230 us/op 1.8720 us/op 0.92
fastMsgIdFn h32 xxhash / 200 bytes 369.00 ns/op 205.00 ns/op 1.80
fastMsgIdFn h64 xxhash / 200 bytes 414.00 ns/op 275.00 ns/op 1.51
fastMsgIdFn sha256 / 1000 bytes 4.7900 us/op 5.9350 us/op 0.81
fastMsgIdFn h32 xxhash / 1000 bytes 467.00 ns/op 324.00 ns/op 1.44
fastMsgIdFn h64 xxhash / 1000 bytes 466.00 ns/op 349.00 ns/op 1.34
fastMsgIdFn sha256 / 10000 bytes 41.284 us/op 52.802 us/op 0.78
fastMsgIdFn h32 xxhash / 10000 bytes 1.3800 us/op 1.5380 us/op 0.90
fastMsgIdFn h64 xxhash / 10000 bytes 972.00 ns/op 1.0300 us/op 0.94
send data - 1000 256B messages 4.0841 ms/op 4.8010 ms/op 0.85
send data - 1000 512B messages 4.3978 ms/op 4.8134 ms/op 0.91
send data - 1000 1024B messages 4.7621 ms/op 5.5005 ms/op 0.87
send data - 1000 1200B messages 4.2705 ms/op 5.5348 ms/op 0.77
send data - 1000 2048B messages 4.5170 ms/op 5.7689 ms/op 0.78
send data - 1000 4096B messages 5.4481 ms/op 7.2585 ms/op 0.75
send data - 1000 16384B messages 42.381 ms/op 30.074 ms/op 1.41
send data - 1000 65536B messages 85.618 ms/op 147.77 ms/op 0.58
enrSubnets - fastDeserialize 64 bits 1.0310 us/op 970.00 ns/op 1.06
enrSubnets - ssz BitVector 64 bits 489.00 ns/op 391.00 ns/op 1.25
enrSubnets - fastDeserialize 4 bits 324.00 ns/op 150.00 ns/op 2.16
enrSubnets - ssz BitVector 4 bits 487.00 ns/op 394.00 ns/op 1.24
prioritizePeers score -10:0 att 32-0.1 sync 2-0 271.49 us/op 235.12 us/op 1.15
prioritizePeers score 0:0 att 32-0.25 sync 2-0.25 240.67 us/op 297.49 us/op 0.81
prioritizePeers score 0:0 att 32-0.5 sync 2-0.5 336.17 us/op 403.19 us/op 0.83
prioritizePeers score 0:0 att 64-0.75 sync 4-0.75 586.64 us/op 852.21 us/op 0.69
prioritizePeers score 0:0 att 64-1 sync 4-1 718.41 us/op 945.52 us/op 0.76
array of 16000 items push then shift 1.2698 us/op 1.6241 us/op 0.78
LinkedList of 16000 items push then shift 7.0980 ns/op 8.7700 ns/op 0.81
array of 16000 items push then pop 64.114 ns/op 77.673 ns/op 0.83
LinkedList of 16000 items push then pop 6.3630 ns/op 7.1790 ns/op 0.89
array of 24000 items push then shift 1.8765 us/op 2.3905 us/op 0.78
LinkedList of 24000 items push then shift 6.6150 ns/op 8.4610 ns/op 0.78
array of 24000 items push then pop 89.543 ns/op 107.63 ns/op 0.83
LinkedList of 24000 items push then pop 6.2870 ns/op 7.3380 ns/op 0.86
intersect bitArray bitLen 8 4.8770 ns/op 5.7100 ns/op 0.85
intersect array and set length 8 30.664 ns/op 33.490 ns/op 0.92
intersect bitArray bitLen 128 25.742 ns/op 28.681 ns/op 0.90
intersect array and set length 128 491.43 ns/op 547.18 ns/op 0.90
bitArray.getTrueBitIndexes() bitLen 128 1.1950 us/op 1.1670 us/op 1.02
bitArray.getTrueBitIndexes() bitLen 248 1.9130 us/op 1.9230 us/op 0.99
bitArray.getTrueBitIndexes() bitLen 512 3.6940 us/op 3.9860 us/op 0.93
Full columns - reconstruct all 6 blobs 252.00 us/op 222.34 us/op 1.13
Full columns - reconstruct half of the blobs out of 6 90.132 us/op 111.13 us/op 0.81
Full columns - reconstruct single blob out of 6 27.612 us/op 33.948 us/op 0.81
Half columns - reconstruct all 6 blobs 226.52 ms/op 278.99 ms/op 0.81
Half columns - reconstruct half of the blobs out of 6 115.33 ms/op 136.02 ms/op 0.85
Half columns - reconstruct single blob out of 6 43.522 ms/op 51.521 ms/op 0.84
Full columns - reconstruct all 10 blobs 268.73 us/op 379.93 us/op 0.71
Full columns - reconstruct half of the blobs out of 10 146.47 us/op 162.25 us/op 0.90
Full columns - reconstruct single blob out of 10 27.185 us/op 33.212 us/op 0.82
Half columns - reconstruct all 10 blobs 379.54 ms/op 453.81 ms/op 0.84
Half columns - reconstruct half of the blobs out of 10 193.34 ms/op 233.97 ms/op 0.83
Half columns - reconstruct single blob out of 10 43.379 ms/op 51.232 ms/op 0.85
Full columns - reconstruct all 20 blobs 621.58 us/op 839.73 us/op 0.74
Full columns - reconstruct half of the blobs out of 20 249.86 us/op 324.35 us/op 0.77
Full columns - reconstruct single blob out of 20 27.930 us/op 31.532 us/op 0.89
Half columns - reconstruct all 20 blobs 760.08 ms/op 894.96 ms/op 0.85
Half columns - reconstruct half of the blobs out of 20 387.20 ms/op 441.74 ms/op 0.88
Half columns - reconstruct single blob out of 20 44.007 ms/op 49.972 ms/op 0.88
Set add up to 64 items then delete first 1.5125 us/op 2.0982 us/op 0.72
OrderedSet add up to 64 items then delete first 2.3251 us/op 3.1171 us/op 0.75
Set add up to 64 items then delete last 1.7485 us/op 2.3723 us/op 0.74
OrderedSet add up to 64 items then delete last 2.7764 us/op 3.3156 us/op 0.84
Set add up to 64 items then delete middle 1.9093 us/op 2.3046 us/op 0.83
OrderedSet add up to 64 items then delete middle 4.8302 us/op 4.8311 us/op 1.00
Set add up to 128 items then delete first 3.6009 us/op 4.7536 us/op 0.76
OrderedSet add up to 128 items then delete first 6.5710 us/op 7.0783 us/op 0.93
Set add up to 128 items then delete last 3.3906 us/op 4.5687 us/op 0.74
OrderedSet add up to 128 items then delete last 5.1185 us/op 6.6880 us/op 0.77
Set add up to 128 items then delete middle 3.4206 us/op 4.5089 us/op 0.76
OrderedSet add up to 128 items then delete middle 10.631 us/op 13.026 us/op 0.82
Set add up to 256 items then delete first 6.9724 us/op 10.385 us/op 0.67
OrderedSet add up to 256 items then delete first 11.178 us/op 15.476 us/op 0.72
Set add up to 256 items then delete last 6.8066 us/op 9.2669 us/op 0.73
OrderedSet add up to 256 items then delete last 10.542 us/op 14.346 us/op 0.73
Set add up to 256 items then delete middle 6.7307 us/op 9.5011 us/op 0.71
OrderedSet add up to 256 items then delete middle 33.167 us/op 40.563 us/op 0.82
pass gossip attestations to forkchoice per slot 2.0814 ms/op 2.5246 ms/op 0.82
forkChoice updateHead vc 100000 bc 64 eq 0 370.86 us/op 502.01 us/op 0.74
forkChoice updateHead vc 600000 bc 64 eq 0 2.2237 ms/op 2.9921 ms/op 0.74
forkChoice updateHead vc 1000000 bc 64 eq 0 4.0215 ms/op 4.9829 ms/op 0.81
forkChoice updateHead vc 600000 bc 320 eq 0 2.2279 ms/op 3.0005 ms/op 0.74
forkChoice updateHead vc 600000 bc 1200 eq 0 2.2607 ms/op 3.0527 ms/op 0.74
forkChoice updateHead vc 600000 bc 7200 eq 0 2.4986 ms/op 3.4788 ms/op 0.72
forkChoice updateHead vc 600000 bc 64 eq 1000 2.3340 ms/op 3.5224 ms/op 0.66
forkChoice updateHead vc 600000 bc 64 eq 10000 2.4230 ms/op 3.6507 ms/op 0.66
forkChoice updateHead vc 600000 bc 64 eq 300000 6.4683 ms/op 9.1608 ms/op 0.71
computeDeltas 1400000 validators 0% inactive 11.880 ms/op 14.565 ms/op 0.82
computeDeltas 1400000 validators 10% inactive 12.989 ms/op 15.130 ms/op 0.86
computeDeltas 1400000 validators 20% inactive 10.043 ms/op 12.745 ms/op 0.79
computeDeltas 1400000 validators 50% inactive 7.7137 ms/op 10.007 ms/op 0.77
computeDeltas 2100000 validators 0% inactive 18.031 ms/op 21.938 ms/op 0.82
computeDeltas 2100000 validators 10% inactive 16.574 ms/op 20.566 ms/op 0.81
computeDeltas 2100000 validators 20% inactive 15.070 ms/op 19.097 ms/op 0.79
computeDeltas 2100000 validators 50% inactive 11.822 ms/op 15.021 ms/op 0.79
altair processAttestation - 250000 vs - 7PWei normalcase 1.5401 ms/op 1.9697 ms/op 0.78
altair processAttestation - 250000 vs - 7PWei worstcase 2.3648 ms/op 2.7563 ms/op 0.86
altair processAttestation - setStatus - 1/6 committees join 87.494 us/op 120.82 us/op 0.72
altair processAttestation - setStatus - 1/3 committees join 176.81 us/op 235.57 us/op 0.75
altair processAttestation - setStatus - 1/2 committees join 264.49 us/op 328.36 us/op 0.81
altair processAttestation - setStatus - 2/3 committees join 343.47 us/op 428.75 us/op 0.80
altair processAttestation - setStatus - 4/5 committees join 491.66 us/op 592.16 us/op 0.83
altair processAttestation - setStatus - 100% committees join 582.02 us/op 694.07 us/op 0.84
altair processBlock - 250000 vs - 7PWei normalcase 4.2697 ms/op 3.7448 ms/op 1.14
altair processBlock - 250000 vs - 7PWei normalcase hashState 14.248 ms/op 18.695 ms/op 0.76
altair processBlock - 250000 vs - 7PWei worstcase 24.893 ms/op 26.795 ms/op 0.93
altair processBlock - 250000 vs - 7PWei worstcase hashState 55.788 ms/op 60.156 ms/op 0.93
phase0 processBlock - 250000 vs - 7PWei normalcase 1.2497 ms/op 1.6780 ms/op 0.74
phase0 processBlock - 250000 vs - 7PWei worstcase 20.809 ms/op 21.075 ms/op 0.99
altair processEth1Data - 250000 vs - 7PWei normalcase 278.14 us/op 388.94 us/op 0.72
getExpectedWithdrawals 250000 eb:1,eth1:1,we:0,wn:0,smpl:16 5.4310 us/op 7.1720 us/op 0.76
getExpectedWithdrawals 250000 eb:0.95,eth1:0.1,we:0.05,wn:0,smpl:220 34.394 us/op 56.095 us/op 0.61
getExpectedWithdrawals 250000 eb:0.95,eth1:0.3,we:0.05,wn:0,smpl:43 10.060 us/op 18.121 us/op 0.56
getExpectedWithdrawals 250000 eb:0.95,eth1:0.7,we:0.05,wn:0,smpl:19 7.8740 us/op 10.577 us/op 0.74
getExpectedWithdrawals 250000 eb:0.1,eth1:0.1,we:0,wn:0,smpl:1021 160.28 us/op 170.90 us/op 0.94
getExpectedWithdrawals 250000 eb:0.03,eth1:0.03,we:0,wn:0,smpl:11778 1.3981 ms/op 2.8891 ms/op 0.48
getExpectedWithdrawals 250000 eb:0.01,eth1:0.01,we:0,wn:0,smpl:16384 1.8044 ms/op 2.3663 ms/op 0.76
getExpectedWithdrawals 250000 eb:0,eth1:0,we:0,wn:0,smpl:16384 1.7641 ms/op 2.8588 ms/op 0.62
getExpectedWithdrawals 250000 eb:0,eth1:0,we:0,wn:0,nocache,smpl:16384 4.9658 ms/op 4.5633 ms/op 1.09
getExpectedWithdrawals 250000 eb:0,eth1:1,we:0,wn:0,smpl:16384 2.0452 ms/op 2.6466 ms/op 0.77
getExpectedWithdrawals 250000 eb:0,eth1:1,we:0,wn:0,nocache,smpl:16384 4.3092 ms/op 4.9716 ms/op 0.87
Tree 40 250000 create 330.69 ms/op 377.33 ms/op 0.88
Tree 40 250000 get(125000) 96.132 ns/op 130.55 ns/op 0.74
Tree 40 250000 set(125000) 1.0052 us/op 1.2826 us/op 0.78
Tree 40 250000 toArray() 16.111 ms/op 16.734 ms/op 0.96
Tree 40 250000 iterate all - toArray() + loop 16.801 ms/op 13.716 ms/op 1.22
Tree 40 250000 iterate all - get(i) 38.693 ms/op 44.332 ms/op 0.87
Array 250000 create 2.1813 ms/op 2.4627 ms/op 0.89
Array 250000 clone - spread 659.11 us/op 817.99 us/op 0.81
Array 250000 get(125000) 0.49300 ns/op 0.35200 ns/op 1.40
Array 250000 set(125000) 0.49600 ns/op 0.49200 ns/op 1.01
Array 250000 iterate all - loop 56.327 us/op 61.488 us/op 0.92
phase0 afterProcessEpoch - 250000 vs - 7PWei 38.245 ms/op 41.789 ms/op 0.92
Array.fill - length 1000000 2.0380 ms/op 2.8843 ms/op 0.71
Array push - length 1000000 8.5809 ms/op 13.081 ms/op 0.66
Array.get 0.19677 ns/op 0.21946 ns/op 0.90
Uint8Array.get 0.20350 ns/op 0.22265 ns/op 0.91
phase0 beforeProcessEpoch - 250000 vs - 7PWei 12.845 ms/op 14.858 ms/op 0.86
altair processEpoch - mainnet_e81889 269.12 ms/op 270.78 ms/op 0.99
mainnet_e81889 - altair beforeProcessEpoch 13.992 ms/op 22.804 ms/op 0.61
mainnet_e81889 - altair processJustificationAndFinalization 5.8550 us/op 6.3160 us/op 0.93
mainnet_e81889 - altair processInactivityUpdates 3.0845 ms/op 3.7754 ms/op 0.82
mainnet_e81889 - altair processRewardsAndPenalties 14.952 ms/op 19.181 ms/op 0.78
mainnet_e81889 - altair processRegistryUpdates 788.00 ns/op 954.00 ns/op 0.83
mainnet_e81889 - altair processSlashings 358.00 ns/op 183.00 ns/op 1.96
mainnet_e81889 - altair processEth1DataReset 345.00 ns/op 179.00 ns/op 1.93
mainnet_e81889 - altair processEffectiveBalanceUpdates 1.3638 ms/op 4.7959 ms/op 0.28
mainnet_e81889 - altair processSlashingsReset 973.00 ns/op 872.00 ns/op 1.12
mainnet_e81889 - altair processRandaoMixesReset 1.4360 us/op 1.5520 us/op 0.93
mainnet_e81889 - altair processHistoricalRootsUpdate 351.00 ns/op 156.00 ns/op 2.25
mainnet_e81889 - altair processParticipationFlagUpdates 758.00 ns/op 530.00 ns/op 1.43
mainnet_e81889 - altair processSyncCommitteeUpdates 316.00 ns/op 150.00 ns/op 2.11
mainnet_e81889 - altair afterProcessEpoch 38.290 ms/op 44.256 ms/op 0.87
capella processEpoch - mainnet_e217614 759.21 ms/op 797.83 ms/op 0.95
mainnet_e217614 - capella beforeProcessEpoch 50.605 ms/op 63.475 ms/op 0.80
mainnet_e217614 - capella processJustificationAndFinalization 5.3380 us/op 5.6840 us/op 0.94
mainnet_e217614 - capella processInactivityUpdates 11.199 ms/op 17.462 ms/op 0.64
mainnet_e217614 - capella processRewardsAndPenalties 106.17 ms/op 104.69 ms/op 1.01
mainnet_e217614 - capella processRegistryUpdates 4.9370 us/op 5.8310 us/op 0.85
mainnet_e217614 - capella processSlashings 355.00 ns/op 171.00 ns/op 2.08
mainnet_e217614 - capella processEth1DataReset 346.00 ns/op 188.00 ns/op 1.84
mainnet_e217614 - capella processEffectiveBalanceUpdates 5.2401 ms/op 15.161 ms/op 0.35
mainnet_e217614 - capella processSlashingsReset 955.00 ns/op 850.00 ns/op 1.12
mainnet_e217614 - capella processRandaoMixesReset 1.1790 us/op 1.2150 us/op 0.97
mainnet_e217614 - capella processHistoricalRootsUpdate 458.00 ns/op 182.00 ns/op 2.52
mainnet_e217614 - capella processParticipationFlagUpdates 669.00 ns/op 552.00 ns/op 1.21
mainnet_e217614 - capella afterProcessEpoch 104.01 ms/op 117.67 ms/op 0.88
phase0 processEpoch - mainnet_e58758 234.72 ms/op 244.77 ms/op 0.96
mainnet_e58758 - phase0 beforeProcessEpoch 41.228 ms/op 55.767 ms/op 0.74
mainnet_e58758 - phase0 processJustificationAndFinalization 5.2180 us/op 6.1270 us/op 0.85
mainnet_e58758 - phase0 processRewardsAndPenalties 15.439 ms/op 19.805 ms/op 0.78
mainnet_e58758 - phase0 processRegistryUpdates 2.4540 us/op 3.2110 us/op 0.76
mainnet_e58758 - phase0 processSlashings 360.00 ns/op 187.00 ns/op 1.93
mainnet_e58758 - phase0 processEth1DataReset 369.00 ns/op 190.00 ns/op 1.94
mainnet_e58758 - phase0 processEffectiveBalanceUpdates 761.70 us/op 960.65 us/op 0.79
mainnet_e58758 - phase0 processSlashingsReset 1.0710 us/op 1.0290 us/op 1.04
mainnet_e58758 - phase0 processRandaoMixesReset 1.3830 us/op 1.1220 us/op 1.23
mainnet_e58758 - phase0 processHistoricalRootsUpdate 466.00 ns/op 196.00 ns/op 2.38
mainnet_e58758 - phase0 processParticipationRecordUpdates 1.1310 us/op 950.00 ns/op 1.19
mainnet_e58758 - phase0 afterProcessEpoch 32.130 ms/op 37.115 ms/op 0.87
phase0 processEffectiveBalanceUpdates - 250000 normalcase 977.20 us/op 2.2270 ms/op 0.44
phase0 processEffectiveBalanceUpdates - 250000 worstcase 0.5 1.1204 ms/op 2.0591 ms/op 0.54
altair processInactivityUpdates - 250000 normalcase 10.093 ms/op 17.327 ms/op 0.58
altair processInactivityUpdates - 250000 worstcase 10.086 ms/op 16.585 ms/op 0.61
phase0 processRegistryUpdates - 250000 normalcase 2.3360 us/op 5.1970 us/op 0.45
phase0 processRegistryUpdates - 250000 badcase_full_deposits 240.73 us/op 298.51 us/op 0.81
phase0 processRegistryUpdates - 250000 worstcase 0.5 62.790 ms/op 72.077 ms/op 0.87
altair processRewardsAndPenalties - 250000 normalcase 13.577 ms/op 21.315 ms/op 0.64
altair processRewardsAndPenalties - 250000 worstcase 13.108 ms/op 21.317 ms/op 0.61
phase0 getAttestationDeltas - 250000 normalcase 4.4028 ms/op 7.4706 ms/op 0.59
phase0 getAttestationDeltas - 250000 worstcase 4.4431 ms/op 7.4144 ms/op 0.60
phase0 processSlashings - 250000 worstcase 89.322 us/op 109.15 us/op 0.82
altair processSyncCommitteeUpdates - 250000 9.6719 ms/op 12.700 ms/op 0.76
BeaconState.hashTreeRoot - No change 395.00 ns/op 206.00 ns/op 1.92
BeaconState.hashTreeRoot - 1 full validator 75.507 us/op 94.095 us/op 0.80
BeaconState.hashTreeRoot - 32 full validator 857.98 us/op 1.0871 ms/op 0.79
BeaconState.hashTreeRoot - 512 full validator 6.2912 ms/op 10.195 ms/op 0.62
BeaconState.hashTreeRoot - 1 validator.effectiveBalance 99.394 us/op 122.04 us/op 0.81
BeaconState.hashTreeRoot - 32 validator.effectiveBalance 1.2583 ms/op 2.3511 ms/op 0.54
BeaconState.hashTreeRoot - 512 validator.effectiveBalance 17.047 ms/op 21.851 ms/op 0.78
BeaconState.hashTreeRoot - 1 balances 78.312 us/op 83.792 us/op 0.93
BeaconState.hashTreeRoot - 32 balances 693.97 us/op 1.2783 ms/op 0.54
BeaconState.hashTreeRoot - 512 balances 4.9397 ms/op 7.4413 ms/op 0.66
BeaconState.hashTreeRoot - 250000 balances 150.24 ms/op 173.69 ms/op 0.86
aggregationBits - 2048 els - zipIndexesInBitList 18.322 us/op 22.844 us/op 0.80
regular array get 100000 times 21.443 us/op 25.721 us/op 0.83
wrappedArray get 100000 times 21.440 us/op 25.471 us/op 0.84
arrayWithProxy get 100000 times 9.3695 ms/op 15.295 ms/op 0.61
ssz.Root.equals 20.030 ns/op 24.158 ns/op 0.83
byteArrayEquals 19.845 ns/op 24.639 ns/op 0.81
Buffer.compare 8.4650 ns/op 10.144 ns/op 0.83
processSlot - 1 slots 9.5620 us/op 11.875 us/op 0.81
processSlot - 32 slots 2.3870 ms/op 2.4259 ms/op 0.98
getEffectiveBalanceIncrementsZeroInactive - 250000 vs - 7PWei 3.6194 ms/op 6.1976 ms/op 0.58
getCommitteeAssignments - req 1 vs - 250000 vc 1.6468 ms/op 1.9308 ms/op 0.85
getCommitteeAssignments - req 100 vs - 250000 vc 3.3280 ms/op 3.8945 ms/op 0.85
getCommitteeAssignments - req 1000 vs - 250000 vc 3.5843 ms/op 4.0324 ms/op 0.89
findModifiedValidators - 10000 modified validators 584.26 ms/op 684.75 ms/op 0.85
findModifiedValidators - 1000 modified validators 478.34 ms/op 471.99 ms/op 1.01
findModifiedValidators - 100 modified validators 290.80 ms/op 325.37 ms/op 0.89
findModifiedValidators - 10 modified validators 157.09 ms/op 193.75 ms/op 0.81
findModifiedValidators - 1 modified validators 153.09 ms/op 168.86 ms/op 0.91
findModifiedValidators - no difference 158.88 ms/op 207.71 ms/op 0.76
migrate state 1500000 validators, 3400 modified, 2000 new 936.98 ms/op 1.0587 s/op 0.88
RootCache.getBlockRootAtSlot - 250000 vs - 7PWei 5.6700 ns/op 4.8600 ns/op 1.17
state getBlockRootAtSlot - 250000 vs - 7PWei 384.74 ns/op 521.15 ns/op 0.74
computeProposerIndex 100000 validators 1.3060 ms/op 1.5621 ms/op 0.84
getNextSyncCommitteeIndices 1000 validators 97.784 ms/op 131.98 ms/op 0.74
getNextSyncCommitteeIndices 10000 validators 97.689 ms/op 134.35 ms/op 0.73
getNextSyncCommitteeIndices 100000 validators 92.480 ms/op 135.85 ms/op 0.68
computeProposers - vc 250000 516.29 us/op 638.52 us/op 0.81
computeEpochShuffling - vc 250000 36.619 ms/op 43.278 ms/op 0.85
getNextSyncCommittee - vc 250000 8.8520 ms/op 11.614 ms/op 0.76
nodejs block root to RootHex using toHex 95.817 ns/op 143.99 ns/op 0.67
nodejs block root to RootHex using toRootHex 62.567 ns/op 91.741 ns/op 0.68
nodejs fromHex(blob) 212.55 us/op 518.73 us/op 0.41
nodejs fromHexInto(blob) 575.42 us/op 739.15 us/op 0.78
nodejs block root to RootHex using the deprecated toHexString 459.67 ns/op 579.70 ns/op 0.79
nodejs byteArrayEquals 32 bytes (block root) 24.299 ns/op 29.866 ns/op 0.81
nodejs byteArrayEquals 48 bytes (pubkey) 34.906 ns/op 42.441 ns/op 0.82
nodejs byteArrayEquals 96 bytes (signature) 30.715 ns/op 42.188 ns/op 0.73
nodejs byteArrayEquals 1024 bytes 36.078 ns/op 48.235 ns/op 0.75
nodejs byteArrayEquals 131072 bytes (blob) 1.6182 us/op 1.9755 us/op 0.82
browser block root to RootHex using toHex 131.37 ns/op 170.25 ns/op 0.77
browser block root to RootHex using toRootHex 119.93 ns/op 161.01 ns/op 0.74
browser fromHex(blob) 993.72 us/op 1.2244 ms/op 0.81
browser fromHexInto(blob) 660.93 us/op 726.26 us/op 0.91
browser block root to RootHex using the deprecated toHexString 525.72 ns/op 610.66 ns/op 0.86
browser byteArrayEquals 32 bytes (block root) 27.773 ns/op 32.804 ns/op 0.85
browser byteArrayEquals 48 bytes (pubkey) 39.009 ns/op 45.369 ns/op 0.86
browser byteArrayEquals 96 bytes (signature) 72.819 ns/op 89.411 ns/op 0.81
browser byteArrayEquals 1024 bytes 739.85 ns/op 840.21 ns/op 0.88
browser byteArrayEquals 131072 bytes (blob) 92.982 us/op 105.65 us/op 0.88

by benchmarkbot/action

Contributor

@twoeths twoeths left a comment


With the current approach, we store all DataColumnSidecars per block in a single file.
This goes against #8114, so we need to reconsider whether to store each DataColumnSidecar separately vs. storing all DataColumnSidecars per file as in this approach.

  • Store all DataColumnSidecars per block
    • pros:
      • archiving is fast
      • almost never runs out of file handles
    • cons:
      • to read a single DataColumnSidecar, we need to load the whole block file
      • writes per block are slower (sequential), which may affect range sync
  • Store each DataColumnSidecar separately, under the same root hex/slot
    • pros:
      • each DataColumnSidecar can be read separately, consistent with the current approach
      • DataColumnSidecars can be written in parallel
    • cons:
      • archiving is a bit slower because we need to prune the whole folder, but it should still be much faster than LevelDB
      • when peers ask for a lot of DataColumnSidecars, we may run out of file handles (not sure how often that happens, but @nflaig said it used to be an issue in Prysm)

It seems to me the current approach focuses too much on archiveBlocks while neglecting the read/write flow, which happens much more frequently. I'm curious what others think? I tend to favor storing each DataColumnSidecar separately, consistent with the current approach.

A mitigation for the current approach would be to avoid loading the whole file every time and parse the header first.
A mitigation for storing each DataColumnSidecar separately would be to put the getColumn() operation in a job queue if too many peers request columns.
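The "parse header first" mitigation can be sketched against the .dcol layout described in the PR: read only the 149-byte header (version byte, 4 reserved bytes, then a 16-byte presence bitmap) plus the (N+1)*4-byte big-endian offset table, and compute the byte range of the requested column without touching the data region. This is a hypothetical sketch, not the PR's implementation; the LSB-first bit order within each bitmap byte and the helper names are assumptions.

```typescript
// Layout assumptions from the PR description:
// [version: 1B][_reserved: 4B][bitmap: 16B][block_root: 32B][slot: 8B BE][reserved: 88B]
const HEADER_SIZE = 149;
const BITMAP_OFFSET = 5;
const BITMAP_SIZE = 16;

/** Count set bits in the presence bitmap (N = number of stored columns). */
function popcount(bitmap: Uint8Array): number {
  let n = 0;
  for (const b of bitmap) {
    let x = b;
    while (x) {
      n += x & 1;
      x >>= 1;
    }
  }
  return n;
}

/**
 * Position of columnIndex among the stored columns (prefix popcount),
 * or null if the column is absent. Assumes LSB-first bit order.
 */
function columnPosition(bitmap: Uint8Array, columnIndex: number): number | null {
  let position = 0;
  for (let i = 0; i < columnIndex; i++) {
    if (bitmap[i >> 3] & (1 << (i & 7))) position++;
  }
  if (!(bitmap[columnIndex >> 3] & (1 << (columnIndex & 7)))) return null;
  return position;
}

/**
 * Given the header + offset-table prefix of a .dcol file, return the absolute
 * byte range [start, end) of one compressed column, or null if absent.
 */
function getColumnByteRange(prefix: Uint8Array, columnIndex: number): [number, number] | null {
  const bitmap = prefix.subarray(BITMAP_OFFSET, BITMAP_OFFSET + BITMAP_SIZE);
  const pos = columnPosition(bitmap, columnIndex);
  if (pos === null) return null;
  const n = popcount(bitmap);
  const view = new DataView(prefix.buffer, prefix.byteOffset, prefix.byteLength);
  // Offset table starts right after the header; data region after the table
  const tableStart = HEADER_SIZE;
  const dataStart = HEADER_SIZE + (n + 1) * 4;
  const start = view.getUint32(tableStart + pos * 4, false); // big-endian
  const end = view.getUint32(tableStart + (pos + 1) * 4, false);
  return [dataStart + start, dataStart + end];
}
```

With this, a read only needs the first `HEADER_SIZE + (n + 1) * 4` bytes of the file before issuing one targeted read for the column itself.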

@nflaig
Member

nflaig commented Feb 25, 2026

Store all DataColumnSidecars per block

Leaning towards this, and mitigating the cons by implementing fd.read() with offset/length.

Why are the writes slower in that case? Sure, it's one large file, but it's only one file vs. 128 files per block with the other approach (for supernodes).
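The fd.read() with offset/length idea can be sketched as follows (hypothetical helper, not the PR's code): Node's FileHandle.read accepts a position, so only the requested byte range is pulled from disk instead of the whole file.

```typescript
import {open} from "node:fs/promises";

/**
 * Read only bytes [start, end) of a file via a positioned read.
 * Sketch of the targeted-read pattern; the helper name is made up.
 */
async function readByteRange(path: string, start: number, end: number): Promise<Uint8Array> {
  const fd = await open(path, "r");
  try {
    const buf = new Uint8Array(end - start);
    // read(buffer, offsetInBuffer, length, positionInFile)
    const {bytesRead} = await fd.read(buf, 0, buf.length, start);
    return buf.subarray(0, bytesRead);
  } finally {
    await fd.close();
  }
}
```

For a .dcol lookup this would be called twice: once for the header + offset table, once for the column's byte range.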

wemeetagain and others added 5 commits February 25, 2026 12:42
Each column is independently Snappy block-compressed with an offset
table for O(1) random access. Targeted fd.read() with position offsets
reads only header + offset table + requested columns instead of the
entire file.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@twoeths
Contributor

twoeths commented Feb 26, 2026

Store all DataColumnSidecars per block

leaning towards this and mitigate cons by implementing fd.read() with offset/length

why are the writes slower in that case? sure it's one large file but it's only one vs. 128 files with the other approach per block (for supernodes)

I misread the code as writing each column one by one; for each block we actually write in batch here.

As also clarified offline, we'll go with storing one data columns file per block.


// The slot is stored as an 8-byte big-endian integer; read it as two 32-bit
// halves since DataView has no 64-bit read that returns a number
const hi = view.getUint32(SLOT_OFFSET, false); // high 32 bits
const lo = view.getUint32(SLOT_OFFSET + 4, false); // low 32 bits
// Safe as a JS number while the slot stays below 2^53
const slot = hi * 0x100000000 + lo;
We always read the slot as 4 bytes in sszBytes.ts; maybe make it consistent everywhere?

/**
 * Read only the first 4 bytes of Slot, max value is 4,294,967,295 will be reached 1634 years after genesis
 *
 * If the high bytes are not zero, return null
 */
function getSlotFromOffset(data: Uint8Array, offset: number): Slot | null {
  return checkSlotHighBytes(data, offset) ? getSlotFromOffsetTrusted(data, offset) : null;
}
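For reference, the sszBytes.ts helpers that getSlotFromOffset calls might look like the following minimal sketch. The helper bodies are assumptions (only getSlotFromOffset's shape comes from the snippet above), based on SSZ's little-endian uint64 encoding.

```typescript
type Slot = number;

/** Read the low 4 bytes of a little-endian uint64 as a JS number. */
function getSlotFromOffsetTrusted(data: Uint8Array, offset: number): Slot {
  // SSZ uint64 is little-endian, so the low 4 bytes come first
  return (
    data[offset] +
    data[offset + 1] * 2 ** 8 +
    data[offset + 2] * 2 ** 16 +
    data[offset + 3] * 2 ** 24
  );
}

/** True if the high 4 bytes are all zero, i.e. the slot fits in 32 bits. */
function checkSlotHighBytes(data: Uint8Array, offset: number): boolean {
  return (
    data[offset + 4] === 0 &&
    data[offset + 5] === 0 &&
    data[offset + 6] === 0 &&
    data[offset + 7] === 0
  );
}

function getSlotFromOffset(data: Uint8Array, offset: number): Slot | null {
  return checkSlotHighBytes(data, offset) ? getSlotFromOffsetTrusted(data, offset) : null;
}
```

Note the byte-order difference: this 4-byte read assumes SSZ little-endian, while the .dcol header above stores the slot as 8 bytes big-endian, which is exactly the inconsistency the comment raises.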

wemeetagain and others added 2 commits March 10, 2026 10:30
Address PR review feedback:
- Replace inline (N+1)*4 with offsetTableSize(N) in readColumn,
  readAllColumns, getColumnByteRange, and encodeDcolFile
- Document readColumn as a test/merge convenience (production uses
  getColumnByteRange + fd.read for targeted I/O)
- Add comment explaining popcount semantics in getColumnByteRange

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…e-storage

Resolve conflicts:
- writeBlockInputToDb.ts: take upstream's refactored structure
  (writeBlockAndBlobsToDb + writeDataColumnsToDb), re-add flat file
  store paths for both blobs and columns
- blobSidecarsByRange.ts, dataColumnSidecarsByRange.ts: take
  upstream's getHead() + payloadStatus API, keep flat file store paths
- blockArchiver.test.ts: add root + payloadStatus to checkpoint
- flatFileStoreHandlers.test.ts: update mock forkChoice to use
  getHead() instead of getHeadRoot()

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>