Description
Problem
The node currently hardcodes a single worker per validator (worker_id = 0). The EpochManager, network layer, and several initialization paths assume exactly one worker. This blocks the ability to run independent fee markets, specialized transaction pools, or any form of worker-level parallelism.
Goal
Refactor the node to support N independent workers per validator. Each worker operates as a standalone unit with its own:
- libp2p swarm (dedicated gossip topics, listen address, network key)
- RPC server (unique port)
- Transaction pool
- Batch builder + batch validator
- `LocalNetwork` instance for primary communication
Workers share only the Primary (consensus) and the execution engine (block production). The num_workers count is a consensus-level parameter — all validators must agree on it.
Why
The immediate motivation is multiple fee markets. Once multi-worker is in place, a follow-up (Phase 2) spawns 2 workers by default:
- Worker 0 (General): accepts all transactions, standard EIP-1559 fee market
- Worker 1 (Whitelisted Transfers): accepts only whitelisted ERC-20 `transfer`/`transferFrom` calls, operates with a reduced base fee
This architecture also enables future process separation — workers can be extracted into standalone processes communicating with the primary over RPC.
Design Constraints
- Workers are fully independent — no cross-worker shared state. Each worker has its own network identity, pool, and gossip topics.
- Per-worker gossip topics — `tn-worker-{id}` and `tn-txn-{id}` replace the current global `tn-worker` and `tn-txn` topics. This provides network-level isolation.
- Per-worker `LocalNetwork` — each worker gets its own `LocalNetwork` instance for primary communication. The primary registers as the handler on every worker's `LocalNetwork`. This is the seam for future process separation.
- `num_workers` is a consensus parameter — changing it requires a coordinated upgrade across all validators. Defaults to `1` for backward compatibility.
- Execution engine is shared — batches from all workers are processed sequentially by the same engine. Worker ID is already encoded in the block `difficulty` field.
- Faucet on worker 0 only — the testnet faucet attaches to the general-purpose worker.
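The per-worker topic scheme in the constraints above can be sketched as plain string formatting. The helper names `worker_topic` and `txn_topic` are hypothetical; only the `tn-worker-{id}` / `tn-txn-{id}` formats come from this issue:

```rust
// Hypothetical helpers for the per-worker gossip topic scheme.
// Only the topic name formats are specified by the issue.
type WorkerId = u16;

fn worker_topic(id: WorkerId) -> String {
    // Replaces the old global "tn-worker" topic.
    format!("tn-worker-{id}")
}

fn txn_topic(id: WorkerId) -> String {
    // Replaces the old global "tn-txn" topic.
    format!("tn-txn-{id}")
}

fn main() {
    // With num_workers = 1, worker 0's topics are the only ones in use.
    assert_eq!(worker_topic(0), "tn-worker-0");
    assert_eq!(txn_topic(1), "tn-txn-1");
}
```

Because each worker subscribes only to its own pair of topics, batches and transactions gossiped for one worker never reach another worker's swarm.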
Current State
Much of the infrastructure already supports N workers but is only called with worker_id = 0:
- `ExecutionNodeInner.workers: Vec<WorkerComponents>` — vec exists, only 1 element
- `GasAccumulator` — supports N workers internally, but initialized with `new(1)`
- `BatchValidator` — already stores `worker_id` and rejects mismatched batches
- `adjust_base_fees()` — loops over `num_workers()` but is a no-op
- Block `difficulty` field — already encodes `batch_index << 16 | worker_id`
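The `difficulty` encoding above packs the worker ID into the low 16 bits. A minimal sketch of that packing (the function names here are hypothetical; only the `batch_index << 16 | worker_id` layout comes from the issue):

```rust
// Sketch of the existing block-difficulty encoding:
// difficulty = batch_index << 16 | worker_id
type WorkerId = u16;

fn encode_difficulty(batch_index: u64, worker_id: WorkerId) -> u64 {
    (batch_index << 16) | worker_id as u64
}

fn decode_difficulty(difficulty: u64) -> (u64, WorkerId) {
    // Low 16 bits hold the worker ID, the rest is the batch index.
    (difficulty >> 16, (difficulty & 0xFFFF) as WorkerId)
}

fn main() {
    let d = encode_difficulty(42, 1);
    assert_eq!(decode_difficulty(d), (42, 1));
    // Worker 0 with batch index 0 yields difficulty 0, matching today's
    // single-worker blocks.
    assert_eq!(encode_difficulty(0, 0), 0);
}
```

Since the field already reserves 16 bits for the worker ID, multi-worker block attribution needs no format change.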
Hardcoded locations that block multi-worker:
| Location | Current | Fix |
|---|---|---|
| `manager.rs` `spawn_worker_node_components()` | `let worker_id = 0;` | Loop over `0..num_workers` |
| `manager.rs` `GasAccumulator::new(1)` | Hardcoded 1 worker | Use `num_workers` |
| `manager.rs` `catchup_accumulator()` | `gas_accumulator.base_fee(0)` | Restore per-worker base fees |
| `manager.rs` `EpochManager` struct | Singular `worker_network_handle` | `Vec<WorkerNetworkHandle>` |
| `manager.rs` `create_consensus()` | Returns `(PrimaryNode, WorkerNode)` | Returns `(PrimaryNode, Vec<WorkerNode>)` |
| `config/genesis.rs` `NodeP2pInfo` | Single `worker: NetworkInfo` | `workers: Vec<NetworkInfo>` |
| `config/node.rs` `Parameters` | No `num_workers` field | Add `num_workers: u16` (default 1) |
| `config/network.rs` | Global topics `tn-worker`, `tn-txn` | Per-worker `tn-worker-{id}`, `tn-txn-{id}` |
| `config/consensus.rs` | Single `LocalNetwork` | `Vec<LocalNetwork>` |
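The `config/node.rs` row above can be sketched as a default-to-1 field. This is a hypothetical shape, not the actual `Parameters` struct, which has other fields:

```rust
// Hypothetical sketch of the Parameters addition from the table above:
// a consensus-level worker count defaulting to 1 for backward compatibility.
pub struct Parameters {
    /// Number of workers per validator; all validators must agree on this.
    pub num_workers: u16,
}

impl Default for Parameters {
    fn default() -> Self {
        Self { num_workers: 1 }
    }
}

fn main() {
    // Existing single-worker deployments keep working with the default.
    assert_eq!(Parameters::default().num_workers, 1);
}
```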
This PR: Per-Worker LocalNetwork
Each worker gets its own LocalNetwork instance for communicating with the primary. This is the seam for future process separation — today these are in-process channels, but they can be replaced with remote RPC clients without changing the worker or primary code. With num_workers = 1, there is exactly one LocalNetwork and behavior is identical to today.
Scope
`crates/config/src/consensus.rs` — `ConsensusConfig`:
- Change the `LocalNetwork` field from a single instance to `local_networks: Vec<LocalNetwork>` (indexed by `WorkerId`)
- Add a `local_network(worker_id: WorkerId) -> &LocalNetwork` accessor
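A minimal sketch of the two `ConsensusConfig` changes listed above, with `LocalNetwork` stubbed out (the real type lives in `crates/network-types`; this stub exists only to make the example self-contained):

```rust
// Stub standing in for the real LocalNetwork type.
type WorkerId = u16;

struct LocalNetwork {
    worker_id: WorkerId,
}

struct ConsensusConfig {
    /// One LocalNetwork per worker, indexed by WorkerId.
    local_networks: Vec<LocalNetwork>,
}

impl ConsensusConfig {
    /// Accessor matching the scope item above; panics on an out-of-range
    /// worker_id, since num_workers is fixed at epoch start.
    fn local_network(&self, worker_id: WorkerId) -> &LocalNetwork {
        &self.local_networks[worker_id as usize]
    }
}

fn main() {
    let config = ConsensusConfig {
        local_networks: (0..2).map(|id| LocalNetwork { worker_id: id }).collect(),
    };
    assert_eq!(config.local_network(1).worker_id, 1);
}
```

With `num_workers = 1` the vec holds a single element and `local_network(0)` returns it, preserving today's behavior.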
`crates/network-types/src/local.rs` — `LocalNetwork`:
- No structural changes needed — each instance is already self-contained
- Each worker creates its own `LocalNetwork` and registers its own handlers
- The primary-side handler (`WorkerReceiverHandler`) accepts `WorkerOwnBatchMessage`, which already contains `worker_id`
`crates/consensus/primary/src/` — Primary receiver:
- `WorkerReceiverHandler` already handles `WorkerOwnBatchMessage` with `worker_id` — no handler change needed
- The primary must register as the `worker_to_primary_handler` on each worker's `LocalNetwork` instance
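The registration step above amounts to installing the same primary-side handler on every worker's `LocalNetwork`. A sketch under heavy simplification — the trait, struct fields, and method names here are illustrative stand-ins, not the real APIs:

```rust
use std::sync::Arc;

type WorkerId = u16;

// Illustrative stand-in for the primary-side handler interface.
trait WorkerToPrimary: Send + Sync {
    fn handle_own_batch(&self, worker_id: WorkerId);
}

#[derive(Default)]
struct LocalNetwork {
    worker_to_primary_handler: Option<Arc<dyn WorkerToPrimary>>,
}

impl LocalNetwork {
    fn set_worker_to_primary_handler(&mut self, handler: Arc<dyn WorkerToPrimary>) {
        self.worker_to_primary_handler = Some(handler);
    }
}

struct PrimaryHandler;
impl WorkerToPrimary for PrimaryHandler {
    fn handle_own_batch(&self, _worker_id: WorkerId) {
        // The real handler forwards the batch into the primary; the message
        // already carries worker_id, so one handler serves all workers.
    }
}

fn main() {
    let primary: Arc<PrimaryHandler> = Arc::new(PrimaryHandler);
    let mut local_networks: Vec<LocalNetwork> =
        (0..2).map(|_| LocalNetwork::default()).collect();
    // The same primary handler is registered on each worker's LocalNetwork.
    for net in &mut local_networks {
        net.set_worker_to_primary_handler(primary.clone());
    }
    assert!(local_networks
        .iter()
        .all(|n| n.worker_to_primary_handler.is_some()));
}
```

Because each worker talks to the primary only through its own `LocalNetwork`, swapping these in-process channels for remote RPC clients later would not touch worker or primary code.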
`crates/consensus/worker/src/worker.rs` — Worker:
- Currently receives the `LocalNetwork` from `ConsensusConfig`
- Change to receive its specific `LocalNetwork` instance by `worker_id` index
blocked by #554