diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml index 869326a8..66cf9a2e 100644 --- a/.github/workflows/ci.yml +++ b/.github/workflows/ci.yml @@ -57,3 +57,23 @@ jobs: - name: Run tests via tox run: uvx tox -e pytest + + fill-tests: + name: Fill test fixtures - Python 3.14 + runs-on: ubuntu-latest + steps: + - name: Checkout leanSpec + uses: actions/checkout@v4 + + - name: Install uv and Python 3.14 + uses: astral-sh/setup-uv@v4 + with: + enable-cache: true + cache-dependency-glob: "pyproject.toml" + python-version: "3.14" + + - name: Sync dependencies + run: uv sync --all-packages --no-progress + + - name: Fill test fixtures + run: uv run fill --fork=Devnet --clean diff --git a/CLAUDE.md b/CLAUDE.md index 5966a23d..48832f7e 100644 --- a/CLAUDE.md +++ b/CLAUDE.md @@ -17,89 +17,96 @@ subspecifications that the Lean Ethereum protocol relies on. ### Running Tests ```bash -# Install and sync project and dev dependencies -uv sync - -# Run all tests -uv run pytest - -# Run tests with coverage -uv run pytest --cov=src/lean_spec --cov-report=html +uv sync --all-packages # Install dependencies +uv run pytest # Run unit tests +uv run fill --fork=Devnet --clean # Generate test vectors +# Note: execution layer support is planned; the infrastructure is already in place. +# For now, `--layer=consensus` is the default and the only supported value. ``` -### Code Quality Checks +### Code Quality ```bash -# Format code -uv run ruff format src tests - -# Check linting -uv run ruff check src tests - -# Fix fixable linting errors -uv run ruff check --fix src tests - -# Type checking -uv run mypy src tests - -# Run all quality checks (lint, typecheck, spellcheck) -uvx tox -e all-checks - -# Run everything (all checks + tests + docs) -uvx tox +uv run ruff format src tests # Format code +uv run ruff check --fix src tests packages # Lint and fix +uvx tox -e typecheck # Type check +uvx tox -e all-checks # All quality checks +uvx tox # Everything (checks + tests + docs) ``` ### Common Tasks +- **Main specs**: `src/lean_spec/` +- **Subspecs**: `src/lean_spec/subspecs/{subspec}/` +- **Unit tests**: `tests/lean_spec/` (mirrors source structure) +- **Consensus spec tests**: `tests/consensus/` (generates test vectors) +- **Execution spec tests**: `tests/execution/` (future - infrastructure ready) -1. **Adding to main specs**: Located in `src/lean_spec/` -2. **Adding to subspecs**: Located in `src/lean_spec/subspecs/` - - Create a new subdirectory for each subspec (e.g., `src/lean_spec/subspecs/poseidon2/`) - - Tests for subspecs should be in `tests/subspecs/{subspec}/`, mirroring the source structure +## Code Style +- Line length: 100 characters; type hints everywhere +- Google docstring style (no docstrings for `__init__`) +- Test files/functions must start with `test_` -## Important Patterns +## Test Framework Structure -### Test Patterns -- Tests should be placed in `tests/` and follow the same structure as the source code. -- Use `pytest.fixture`, in `conftest.py` or test files, for reusable test setup. -- Use `pytest.mark.parametrize` to parametrize tests with multiple inputs -- Use `pytest.raises(...)` with specific exceptions to test error cases -- Use `@pytest.mark.slow` for long-running tests +**Two types of tests:** -## Code Style +1. **Unit tests** (`tests/lean_spec/`) - Standard pytest tests for implementation +2. 
**Spec tests** (`tests/consensus/`) - Generate JSON test vectors via fillers + - *Note: `tests/execution/` infrastructure is ready for future execution layer work* + +**Test Filling Framework:** +- Layer-agnostic pytest plugin in `packages/testing/src/framework/pytest_plugins/filler.py` +- Layer-specific packages: `consensus_testing` (active) and `execution_testing` (future) +- Write consensus spec tests using `state_transition_test` or `fork_choice_test` fixtures +- These fixtures are type aliases that create test vectors when called +- Run `uv run fill --fork=Devnet --clean` to generate consensus fixtures +- Use `--layer=execution` flag when execution layer is implemented +- Output goes to `fixtures/{layer}/{format}/{test_path}/...` + +**Example spec test:** +```python +def test_block(state_transition_test: StateTransitionTestFiller) -> None: + state_transition_test( + pre=genesis_state, + blocks=[block], + post=StateExpectation(slot=Slot(1)) # Only check what matters + ) +``` -- Line length: 79 characters -- Use type hints everywhere -- Follow Google docstring style -- No docstrings needed for `__init__` methods -- Imports are automatically sorted by `isort` and `ruff` - -## Testing Philosophy - -- Tests should be simple and clear -- Test file names must start with `test_` -- Test function names must start with `test_` -- Use descriptive test names that explain what's being tested - -## Common Commands Reference - -| Task | Command | -|-----------------------------------------------|----------------------------------| -| Install and sync project and dev dependencies | `uv sync` | -| Run tests | `uv run pytest` | -| Format code | `uv run ruff format src tests` | -| Lint code | `uv run ruff check src tests` | -| Fix lint errors | `uv run ruff check --fix src tests` | -| Type check | `uv run mypy src tests` | -| Build docs | `uv run mkdocs build` | -| Serve docs | `uv run mkdocs serve` | -| Run all quality checks (no tests/docs) | `uvx tox -e all-checks` | -| Run everything (checks + tests + docs) | `uvx tox` | +**How it works:** +1. Test function receives a fixture class (not instance) as parameter +2. Calling it creates a `FixtureWrapper` that runs `make_fixture()` +3. `make_fixture()` executes the spec code (state transitions, fork choice steps) +4. Validates output against expectations (`StateExpectation`, `StoreChecks`) +5. Serializes to JSON via Pydantic's `model_dump(mode="json")` +6. Writes fixtures at session end to `fixtures/{layer}/{format}/{test_path}/...` + +**Layer-specific architecture:** +- `framework/` - Shared infrastructure (base classes, pytest plugin, CLI) +- `consensus_testing/` - Consensus layer fixtures, forks, builders +- `execution_testing/` - Execution layer fixtures, forks, builders +- Regular pytest runs (`uv run pytest`) ignore spec tests - they only run via `fill` command + +**Serialization requirements:** +- All spec types (State, Block, Uint64, etc.) 
must be Pydantic models +- Custom types need `@field_serializer` or `model_serializer` for JSON output +- SSZ types typically serialize to hex strings (e.g., `"0x1234..."`) +- Fixture models inherit from layer-specific base classes: + - Consensus: `BaseConsensusFixture` (in `consensus_testing/test_fixtures/base.py`) + - Execution: `BaseExecutionFixture` (in `execution_testing/test_fixtures/base.py`) + - Both use `CamelModel` for camelCase JSON output +- Test the serialization: `fixture.model_dump(mode="json")` must produce valid JSON + +**Key fixture types:** +- `StateTransitionTest` - Tests state transitions with blocks +- `ForkChoiceTest` - Tests fork choice with steps (tick/block/attestation) +- Selective validation via `StateExpectation` and `StoreChecks` (only validates fields you specify) ## Important Notes -1. This repository uses Python 3.12+ features -2. All models should use Pydantic for automatic validation. -3. Keep things simple, readable, and clear. These are meant to be clear specifications. -4. The repository is `leanSpec` not `lean-spec`. +- Python 3.12+ required +- Use Pydantic models for validation +- Keep specs simple, readable, and clear +- Repository is `leanSpec` not `lean-spec` ## SSZ Type Design Patterns diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index d04d50d2..793b5656 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -3,10 +3,10 @@ ## Quick Start 1. Fork and clone the repository -2. Install dependencies: `uv sync` +2. Install dependencies: `uv sync --all-packages` 3. Make your changes 4. Run checks: `uvx tox -e all-checks` 5. Run tests: `uv run pytest` 6. Submit a pull request ## Pull Request Guidelines diff --git a/README.md b/README.md index 51f50b35..14cd12f4 100644 --- a/README.md +++ b/README.md @@ -177,17 +177,16 @@ def test_withdrawal_amount_above_uint64_max(): | Task | Command | |-----------------------------------------------|------------------------------------| -| Install and sync project and dev dependencies | `uv sync` | -| Run tests | `uv run pytest` | +| Install and sync project and dev dependencies | `uv sync --all-packages` | +| Run tests | `uv run pytest ...` | | Format code | `uv run ruff format src tests` | | Lint code | `uv run ruff check src tests` | | Fix lint errors | `uv run ruff check --fix src tests` | | Type check | `uv run mypy src tests` | | Build docs | `uv run mkdocs build` | | Serve docs | `uv run mkdocs serve` | -| Run all quality checks (no tests/docs) | `uvx tox -e all-checks` | | Run everything (checks + tests + docs) | `uvx tox` | -| Run specific tox environment | `uvx tox -e lint` | +| Run all quality checks (no tests/docs) | `uvx tox -e all-checks` | ## Contributing diff --git a/packages/testing/README.md b/packages/testing/README.md new file mode 100644 index 00000000..21c87631 --- /dev/null +++ b/packages/testing/README.md @@ -0,0 +1,29 @@ +# Lean Ethereum Specification Testing Framework + +Testing framework for generating and running Lean Ethereum specification tests. + +This package provides tools for generating consensus test fixtures, including: +- Pytest plugins for fixture generation +- Base fixture types and serialization +- CLI tools for test management + +## Installation + +This package is part of the lean-spec workspace and is automatically installed when you +sync the parent project with `--all-packages`. 
+ +```bash +# from `leanSpec/` (root of workspace) +uv sync --all-packages +``` + +## Usage + +Generate test fixtures using the `fill` command: + +```bash +# from `leanSpec/` (root of workspace) +uv run fill --clean --fork=Devnet +``` + +See the main project documentation for more details. diff --git a/packages/testing/pyproject.toml b/packages/testing/pyproject.toml new file mode 100644 index 00000000..b9aadf1b --- /dev/null +++ b/packages/testing/pyproject.toml @@ -0,0 +1,47 @@ +[build-system] +requires = ["setuptools>=77.0.3", "wheel"] +build-backend = "setuptools.build_meta" + +[project] +name = "lean-ethereum-testing" +version = "0.0.1" +description = "Lean Ethereum client test generation and runner framework" +readme = "README.md" +authors = [ + { name = "Ethereum Foundation", email = "thomas.coratger@ethereum.org" }, +] +keywords = ["ethereum", "testing", "consensus", "lean"] +classifiers = [ + "License :: OSI Approved :: MIT License", + "Programming Language :: Python :: 3", + "Programming Language :: Python :: 3.12", + "Programming Language :: Python :: 3.13", + "Programming Language :: Python :: 3.14", +] +requires-python = ">=3.12" +dependencies = [ + "lean-spec", + "pydantic>=2.12.0,<3", + "pytest>=8.3.3,<9", + "click>=8.1.0,<9", +] + +license = {text = "MIT"} + +[project.optional-dependencies] +test = ["pytest-cov>=6.0.0,<7"] +lint = ["ruff>=0.11.8,<1", "mypy>=1.15.0,<1.16"] + +[project.urls] +Homepage = "https://github.com/leanEthereum/lean-spec" +Source = "https://github.com/leanEthereum/lean-spec" +Issues = "https://github.com/leanEthereum/lean-spec/issues" + +[project.scripts] +fill = "framework.cli.fill:fill" + +[tool.setuptools.packages.find] +where = ["src"] + +[tool.uv.sources] +lean-spec = { workspace = true } diff --git a/packages/testing/src/consensus_testing/__init__.py b/packages/testing/src/consensus_testing/__init__.py new file mode 100644 index 00000000..818aaacb --- /dev/null +++ b/packages/testing/src/consensus_testing/__init__.py @@ -0,0 +1,50 @@ +"""Test tools for generating and consuming leanSpec consensus test vectors.""" + +from typing import Type + +from framework.base_types import CamelModel + +from . 
import forks +from .block_spec import BlockSpec +from .genesis import generate_pre_state +from .test_fixtures import ( + BaseConsensusFixture, + ForkChoiceTest, + StateTransitionTest, +) +from .test_types import ( + AttestationStep, + BaseForkChoiceStep, + BlockStep, + ForkChoiceStep, + StateExpectation, + StoreChecks, + TickStep, +) + +StateTransitionTestFiller = Type[StateTransitionTest] +ForkChoiceTestFiller = Type[ForkChoiceTest] + +__all__ = [ + # Public API + "BlockSpec", + "forks", + "generate_pre_state", + # Base types + "CamelModel", + # Fixture classes + "BaseConsensusFixture", + "StateTransitionTest", + "ForkChoiceTest", + # Test types + "BaseForkChoiceStep", + "TickStep", + "BlockStep", + "AttestationStep", + "ForkChoiceStep", + "StateExpectation", + "StoreChecks", + # Type aliases for test function signatures + "StateTransitionTestFiller", + "ForkChoiceTestFiller", +] diff --git a/packages/testing/src/consensus_testing/block_spec.py b/packages/testing/src/consensus_testing/block_spec.py new file mode 100644 index 00000000..4f52717a --- /dev/null +++ b/packages/testing/src/consensus_testing/block_spec.py @@ -0,0 +1,56 @@ +"""Lightweight block specification for test definitions.""" + +from pydantic import BaseModel + +from lean_spec.subspecs.containers.block import BlockBody +from lean_spec.subspecs.containers.slot import Slot +from lean_spec.types import Bytes32, ValidatorIndex + + +class BlockSpec(BaseModel): + """ + Block specification for test definitions. + + Contains the same fields as Block, but all optional except slot. + The framework fills in any missing fields automatically. + + This matches the pattern from execution-specs where Block(...) is a spec + that the framework builds into a full block. + + Usage: + - Simple: BlockSpec(slot=Slot(1)) - framework computes everything + - Custom: BlockSpec(slot=Slot(1), proposer_index=ValidatorIndex(5)) - override specific fields + - Invalid: BlockSpec(slot=Slot(1), state_root=Bytes32.zero()) - test invalid blocks + """ + + slot: Slot + """The slot for this block (required).""" + + proposer_index: ValidatorIndex | None = None + """ + The proposer index for this block. + + If None, framework selects using round-robin based on slot and num_validators. + """ + + parent_root: Bytes32 | None = None + """ + The root of the parent block. + + If None, framework computes from state.latest_block_header. + """ + + state_root: Bytes32 | None = None + """ + The state root after applying this block. + + If None, framework computes via state_transition dry-run. + """ + + body: BlockBody | None = None + """ + The block body containing attestations. + + If None, framework creates empty body for state transition tests, + or collects attestations for fork choice tests. 
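+ + Example (sketch; `att` is assumed to be a previously built Attestation): + + body=BlockBody(attestations=Attestations(data=[att]))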
+ """ diff --git a/packages/testing/src/consensus_testing/forks/__init__.py b/packages/testing/src/consensus_testing/forks/__init__.py new file mode 100644 index 00000000..e49ba2db --- /dev/null +++ b/packages/testing/src/consensus_testing/forks/__init__.py @@ -0,0 +1,28 @@ +"""Fork definitions for consensus layer testing.""" + +from typing import Type + +from framework.forks import BaseFork, BaseForkMeta + +from .forks import Devnet +from .helpers import ( + ALL_FORKS, + get_fork_by_name, + get_forks, + get_forks_with_no_parents, + get_from_until_fork_set, +) + +Fork = Type[BaseFork] + +__all__ = [ + "ALL_FORKS", + "BaseFork", + "BaseForkMeta", + "Devnet", + "Fork", + "get_fork_by_name", + "get_forks", + "get_forks_with_no_parents", + "get_from_until_fork_set", +] diff --git a/packages/testing/src/consensus_testing/forks/forks.py b/packages/testing/src/consensus_testing/forks/forks.py new file mode 100644 index 00000000..40a2122c --- /dev/null +++ b/packages/testing/src/consensus_testing/forks/forks.py @@ -0,0 +1,16 @@ +"""Devnet fork definition.""" + +from framework.forks import BaseFork + + +class Devnet(BaseFork): + """ + Devnet fork for lean Ethereum consensus layer. + + This is the initial fork for the lean Ethereum protocol. + """ + + @classmethod + def name(cls) -> str: + """Return the fork name.""" + return "Devnet" diff --git a/packages/testing/src/consensus_testing/forks/helpers.py b/packages/testing/src/consensus_testing/forks/helpers.py new file mode 100644 index 00000000..dcf278f8 --- /dev/null +++ b/packages/testing/src/consensus_testing/forks/helpers.py @@ -0,0 +1,55 @@ +"""Consensus layer fork discovery and helpers.""" + +from typing import FrozenSet, Set, Type + +from framework.forks import BaseFork +from framework.forks.helpers import ( + get_all_forks, + get_forks_with_no_parents, + get_from_until_fork_set, +) +from framework.forks.helpers import ( + get_fork_by_name as _get_fork_by_name, +) +from framework.forks.helpers import ( + get_forks as _get_forks, +) + +from . import forks + +# Discover all consensus forks at module import time +ALL_FORKS: FrozenSet[Type[BaseFork]] = get_all_forks(forks) +"""All available consensus forks, excluding ignored forks.""" + + +def get_forks() -> Set[Type[BaseFork]]: + """ + Return the set of all available consensus forks. + + Returns: + Set of all non-ignored consensus fork classes. + """ + return _get_forks(ALL_FORKS) + + +def get_fork_by_name(fork_name: str) -> Type[BaseFork] | None: + """ + Get a consensus fork class by its name. + + Args: + fork_name: Name of the fork (case-insensitive). + + Returns: + The fork class, or None if not found. + """ + return _get_fork_by_name(ALL_FORKS, fork_name) + + +# Re-export the generic helpers for convenience +__all__ = [ + "ALL_FORKS", + "get_forks", + "get_fork_by_name", + "get_forks_with_no_parents", + "get_from_until_fork_set", +] diff --git a/packages/testing/src/consensus_testing/genesis.py b/packages/testing/src/consensus_testing/genesis.py new file mode 100644 index 00000000..45638ac7 --- /dev/null +++ b/packages/testing/src/consensus_testing/genesis.py @@ -0,0 +1,31 @@ +"""Consensus layer pre-state generation.""" + +from typing import Any + +from lean_spec.subspecs.containers.state import State, Validators +from lean_spec.subspecs.containers.validator import Validator +from lean_spec.types import Bytes52, Uint64 + + +def generate_pre_state(**kwargs: Any) -> State: + """ + Generate a default pre-state for consensus tests. 
+ + Args: + **kwargs: Optional keyword arguments: + - genesis_time: The genesis timestamp (defaults to Uint64(0)). + - validators: Validators list (defaults to 4 validators with dummy pubkeys). + + Returns: + State: A properly initialized consensus state. + """ + genesis_time = kwargs.get("genesis_time", Uint64(0)) + + # If validators not provided, create a default set of 4 validators with dummy pubkeys + # TODO: Set an appropriate default here for test fixtures + if "validators" not in kwargs: + validators = Validators(data=[Validator(pubkey=Bytes52.zero()) for _ in range(4)]) + else: + validators = kwargs["validators"] + + return State.generate_genesis(genesis_time=genesis_time, validators=validators) diff --git a/packages/testing/src/consensus_testing/py.typed b/packages/testing/src/consensus_testing/py.typed new file mode 100644 index 00000000..e69de29b diff --git a/packages/testing/src/consensus_testing/test_fixtures/__init__.py b/packages/testing/src/consensus_testing/test_fixtures/__init__.py new file mode 100644 index 00000000..1616c112 --- /dev/null +++ b/packages/testing/src/consensus_testing/test_fixtures/__init__.py @@ -0,0 +1,11 @@ +"""Consensus test fixture format definitions (Pydantic models).""" + +from .base import BaseConsensusFixture +from .fork_choice import ForkChoiceTest +from .state_transition import StateTransitionTest + +__all__ = [ + "BaseConsensusFixture", + "StateTransitionTest", + "ForkChoiceTest", +] diff --git a/packages/testing/src/consensus_testing/test_fixtures/base.py b/packages/testing/src/consensus_testing/test_fixtures/base.py new file mode 100644 index 00000000..1d6e8e4c --- /dev/null +++ b/packages/testing/src/consensus_testing/test_fixtures/base.py @@ -0,0 +1,30 @@ +"""Base fixture definitions for consensus test formats.""" + +from typing import Any, ClassVar + +from framework.test_fixtures import BaseFixture + + +class BaseConsensusFixture(BaseFixture): + """ + Base class for all consensus test fixtures. + + Inherits shared functionality from framework.test_fixtures.BaseFixture + and adds consensus-specific behavior if needed. + """ + + # Class-level registry of all consensus fixture formats + # Override parent's formats to maintain a separate registry + formats: ClassVar[dict[str, type["BaseConsensusFixture"]]] = {} # type: ignore[assignment] + + @classmethod + def __pydantic_init_subclass__(cls, **kwargs: Any) -> None: + """ + Auto-register consensus fixture formats when subclasses are defined. + + Overrides parent to register in BaseConsensusFixture.formats instead + of BaseFixture.formats. 
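+ + Example (sketch, with a hypothetical subclass): + + class MyFixture(BaseConsensusFixture): + format_name: ClassVar[str] = "my_format" + + # BaseConsensusFixture.formats["my_format"] is now MyFixture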
+ """ + super().__pydantic_init_subclass__(**kwargs) + if cls.format_name: + BaseConsensusFixture.formats[cls.format_name] = cls diff --git a/packages/testing/src/consensus_testing/test_fixtures/fork_choice.py b/packages/testing/src/consensus_testing/test_fixtures/fork_choice.py new file mode 100644 index 00000000..80a121e9 --- /dev/null +++ b/packages/testing/src/consensus_testing/test_fixtures/fork_choice.py @@ -0,0 +1,264 @@ +"""Fork choice test fixture format.""" + +from typing import ClassVar, List + +from pydantic import model_validator + +from lean_spec.subspecs.chain.config import SECONDS_PER_SLOT +from lean_spec.subspecs.containers.attestation import Attestation, AttestationData +from lean_spec.subspecs.containers.block.block import ( + Block, + BlockBody, + BlockWithAttestation, + SignedBlockWithAttestation, +) +from lean_spec.subspecs.containers.block.types import Attestations, BlockSignatures +from lean_spec.subspecs.containers.checkpoint import Checkpoint +from lean_spec.subspecs.containers.state.state import State +from lean_spec.subspecs.forkchoice import Store +from lean_spec.subspecs.ssz import hash_tree_root +from lean_spec.types import Bytes32, Bytes4000, Uint64, ValidatorIndex + +from ..block_spec import BlockSpec +from ..test_types import AttestationStep, BlockStep, ForkChoiceStep, TickStep +from .base import BaseConsensusFixture + + +class ForkChoiceTest(BaseConsensusFixture): + """ + Test fixture for event-driven fork choice scenarios. + + Tests the fork choice Store through a sequence of events: + - on_tick: Time advancement + - on_block: Block arrival + - on_attestation: Attestation arrival (from gossip) + - checks: Store state validation + + This tests LMD-GHOST algorithm, proposer boost, reorgs, and + timing-sensitive behavior. + + Structure: + anchor_state: Initial trusted state + anchor_block: Initial trusted block + steps: Sequence of events and checks + """ + + format_name: ClassVar[str] = "fork_choice_test" + description: ClassVar[str] = "Tests event-driven fork choice through Store operations" + + anchor_state: State | None = None + """ + The initial trusted consensus state. + + If not provided, the framework will use the genesis fixture from pytest. + This allows tests to omit genesis for simpler test code while still + allowing customization when needed. + """ + + anchor_block: Block | None = None + """ + The initial trusted block (unsigned). + + If not provided, will be auto-generated from anchor_state's latest_block_header. + This is typically the genesis block. + """ + + steps: List[ForkChoiceStep] + """ + Sequence of fork choice events to process. + + Events are processed in order, with store state carrying forward. + """ + + @model_validator(mode="after") + def set_anchor_block_default(self) -> "ForkChoiceTest": + """ + Auto-generate anchor_block from anchor_state if not provided. + + This creates a block from the state's latest_block_header, which is + typically the genesis block. The state_root is set to the hash of the + anchor_state itself. + + Note: anchor_state can be None at this point - it will be injected + by the pytest plugin before make_fixture() is called. 
+ """ + if self.anchor_block is None and self.anchor_state is not None: + self.anchor_block = Block( + slot=self.anchor_state.latest_block_header.slot, + proposer_index=self.anchor_state.latest_block_header.proposer_index, + parent_root=self.anchor_state.latest_block_header.parent_root, + state_root=hash_tree_root(self.anchor_state), + body=BlockBody(attestations=Attestations(data=[])), + ) + return self + + def make_fixture(self) -> "ForkChoiceTest": + """ + Generate the fixture by running the spec's Store. + + This validates the test by: + 1. Initializing Store from anchor_state and anchor_block + 2. Processing each step through Store methods (building blocks from specs as needed) + 3. Validating check assertions against Store state + + Returns: + ------- + ForkChoiceTest + The validated fixture (self, since steps contain the test). + + Raises: + ------ + AssertionError + If any step fails unexpectedly or checks don't match Store state. + """ + # Ensure anchor_state and anchor_block are set + assert self.anchor_state is not None, "anchor_state must be set before make_fixture" + assert self.anchor_block is not None, "anchor_block must be set before make_fixture" + + # Initialize Store from anchor + store = Store.get_forkchoice_store( + state=self.anchor_state, + anchor_block=self.anchor_block, + ) + + # Process each step + for i, step in enumerate(self.steps): + try: + if isinstance(step, TickStep): + # Advance time + store.advance_time(Uint64(step.time), has_proposal=False) + + elif isinstance(step, BlockStep): + # Build SignedBlockWithAttestation from BlockSpec + signed_block = self._build_block_from_spec(step.block, store) + + # Store the filled Block for serialization + block = signed_block.message.block + step._filled_block = block + + # Automatically advance time to block's slot before processing + # Compute the time corresponding to the block's slot + block_time = store.config.genesis_time + block.slot * Uint64(SECONDS_PER_SLOT) + + # Use spec's advance_time method to handle time progression + store.advance_time(block_time, has_proposal=True) + + # Process the block (which calls state_transition internally) + store.process_block(signed_block) + + elif isinstance(step, AttestationStep): + # Process attestation from gossip (not from block) + store.process_attestation(step.attestation, is_from_block=False) + + else: + raise ValueError(f"Step {i}: unknown step type {type(step).__name__}") + + # Validate checks if provided + if step.checks is not None: + step.checks.validate_against_store(store, step_index=i) + + except Exception as e: + if step.valid: + # Expected to succeed but failed + raise AssertionError( + f"Step {i} ({type(step).__name__}) failed unexpectedly: {e}" + ) from e + # Expected to fail, continue + continue + + # If we expected failure but succeeded, that's an error + if not step.valid: + raise AssertionError( + f"Step {i} ({type(step).__name__}) succeeded but expected failure" + ) + + # Return self (fixture is already complete) + return self + + def _build_block_from_spec(self, spec: BlockSpec, store: Store) -> SignedBlockWithAttestation: + """ + Build a full SignedBlockWithAttestation from a lightweight BlockSpec. + + Builds blocks via state transition dry-run, similar to state transition tests, + but also creates a proper proposer attestation for fork choice. + This mimics what a local block builder would do. + + TODO: We cannot use Store.produce_block_with_signatures() because it has + side effects (adds block to store at lines 556-559 of store.py). 
If the spec + is refactored to separate block production from store updates, we should use + that method instead. Until then, this manual approach is necessary. + + Parameters + ---------- + spec : BlockSpec + The lightweight block specification. + store : Store + The fork choice store (used to get head state and latest justified). + + Returns: + ------- + SignedBlockWithAttestation + A complete signed block ready for processing. + """ + # Determine proposer + if spec.proposer_index is None: + validator_count = store.states[store.head].validators.count + proposer_index = ValidatorIndex(int(spec.slot) % int(validator_count)) + else: + proposer_index = spec.proposer_index + + # Get the current head state from the store + head_state = store.states[store.head] + + # Dry-run to build block with correct state root + temp_state = head_state.process_slots(spec.slot) + parent_root = hash_tree_root(temp_state.latest_block_header) + + # Build body (empty for now, attestations can be added later if needed) + body = BlockBody(attestations=Attestations(data=[])) + + # Create temporary block for dry-run + temp_block = Block( + slot=spec.slot, + proposer_index=proposer_index, + parent_root=parent_root, + state_root=Bytes32.zero(), + body=body, + ) + + # Process to get correct state root + post_state = temp_state.process_block(temp_block) + correct_state_root = hash_tree_root(post_state) + + # Create final block + final_block = Block( + slot=spec.slot, + proposer_index=proposer_index, + parent_root=parent_root, + state_root=correct_state_root, + body=body, + ) + + # Create proposer attestation for this block + block_root = hash_tree_root(final_block) + proposer_attestation = Attestation( + validator_id=proposer_index, + data=AttestationData( + slot=spec.slot, + head=Checkpoint(root=block_root, slot=spec.slot), + target=Checkpoint(root=block_root, slot=spec.slot), + # Use the anchor block as source for genesis case + source=Checkpoint(root=parent_root, slot=temp_state.latest_block_header.slot), + ), + ) + + # Create signed structure with placeholder signatures + # One signature for proposer attestation + one for the block + signature_list = [Bytes4000.zero(), Bytes4000.zero()] + return SignedBlockWithAttestation( + message=BlockWithAttestation( + block=final_block, + proposer_attestation=proposer_attestation, + ), + signature=BlockSignatures(data=signature_list), + ) diff --git a/packages/testing/src/consensus_testing/test_fixtures/state_transition.py b/packages/testing/src/consensus_testing/test_fixtures/state_transition.py new file mode 100644 index 00000000..aaaf18ef --- /dev/null +++ b/packages/testing/src/consensus_testing/test_fixtures/state_transition.py @@ -0,0 +1,244 @@ +"""State transition test fixture format.""" + +from typing import Any, ClassVar, List + +from pydantic import ConfigDict, PrivateAttr, field_serializer + +from lean_spec.subspecs.containers.block.block import Block, BlockBody +from lean_spec.subspecs.containers.block.types import Attestations +from lean_spec.subspecs.containers.state.state import State +from lean_spec.subspecs.ssz.hash import hash_tree_root +from lean_spec.types import Bytes32, ValidatorIndex + +from ..block_spec import BlockSpec +from ..test_types import StateExpectation +from .base import BaseConsensusFixture + + +class StateTransitionTest(BaseConsensusFixture): + """ + Test fixture for block processing through state_transition(). 
+ + This is the primary test type that covers: + - Operations (attestations via blocks) + - Slot advancement (empty slots) + - Multi-block sequences + - Justification and finalization + - Invalid blocks + + Tests everything through the main state_transition() public API. + + Structure: + pre: Initial consensus state + blocks: Sequence of signed blocks to process + post: Expected state after processing (None if invalid, filled by spec) + expect_exception: Expected exception for invalid tests + """ + + format_name: ClassVar[str] = "state_transition_test" + description: ClassVar[str] = ( + "Tests block processing through state_transition() - covers operations, " + "epochs, and finality" + ) + + model_config = ConfigDict(arbitrary_types_allowed=True) + + pre: State + """The initial consensus state before processing.""" + + blocks: List[BlockSpec] + """ + Block specifications to process through the spec. + + Tests provide a list of BlockSpec objects with required slots and optional + field overrides. The framework fills complete Block objects during + make_fixture() and stores them in the private _filled_blocks attribute. + """ + + # TODO: We should figure out a configuration to raise if a private attr is + # attempted to be set during model initialization. + _filled_blocks: List[Block] = PrivateAttr(default_factory=list) + """ + The filled Blocks, processed through the specs. + + This is a private attribute not part of the model schema. Tests cannot set this. + The framework populates it during make_fixture(). + """ + + post: StateExpectation | None = None + """ + Expected state after processing all blocks. + + Only fields explicitly set in the StateExpectation will be validated. + If None, no post-state validation is performed (e.g., for invalid tests). + """ + + expect_exception: type[Exception] | None = None + """Expected exception type for invalid tests.""" + + @field_serializer("blocks", when_used="json") + def serialize_blocks(self, value: List[BlockSpec]) -> List[dict[str, Any]]: + """ + Serialize the filled `Block`s instead of the `BlockSpec`s. + + This ensures the fixture output contains the complete `Blocks` that were + filled from the specs, not the input `BlockSpec`s. + + Parameters: + ---------- + value : List[BlockSpec] + The BlockSpec list (ignored, we use _filled_blocks instead). + + Returns: + ------- + List[dict[str, Any]] + The serialized Blocks. + """ + del value + return [block.model_dump(mode="json") for block in self._filled_blocks] + + @field_serializer("expect_exception", when_used="json") + def serialize_exception(self, value: type[Exception] | None) -> str | None: + """Serialize exception type to string.""" + if value is None: + return None + # Format: "ExceptionClassName" (just the class name for now) + # TODO: This can be used to map exceptions to expected exceptions from clients + # as in execution-spec-tests - e.g., "StateTransitionException.INVALID_SLOT" + return value.__name__ + + def make_fixture(self) -> "StateTransitionTest": + """ + Generate the fixture by running the spec. + + Builds blocks from BlockSpec if needed, then processes them through state_transition. + + Returns: + ------- + StateTransitionTest + A validated fixture. + + Raises: + ------ + AssertionError + If processing fails unexpectedly or validation fails. 
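+ + Example (sketch; `genesis` is assumed to be a pre-built State): + + test = StateTransitionTest( + pre=genesis, + blocks=[BlockSpec(slot=Slot(1))], + post=StateExpectation(slot=Slot(1)), + ) + fixture = test.make_fixture()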
+ """ + actual_post_state: State | None = None + exception_raised: Exception | None = None + + # Initialize filled_blocks list that will be populated as we process blocks + filled_blocks: list[Block] = [] + + try: + state = self.pre + + for block_spec in self.blocks: + # Fill Block from BlockSpec + block = self._build_block_from_spec(block_spec, state) + + # Store the filled Block for serialization + filled_blocks.append(block) + + # Process block through state transition + state = state.state_transition( + block=block, + valid_signatures=True, + ) + + actual_post_state = state + except (AssertionError, ValueError) as e: + exception_raised = e + # If we expect an exception, this is fine + if self.expect_exception is None: + # Unexpected failure + raise AssertionError(f"Unexpected error processing blocks: {e}") from e + finally: + # Always store filled blocks for serialization, even if an exception occurred + # This ensures the test fixture includes all blocks that were attempted + self._filled_blocks = filled_blocks + + # Validate exception expectations + if self.expect_exception is not None: + if exception_raised is None: + raise AssertionError( + f"Expected exception {self.expect_exception.__name__} but processing succeeded" + ) + if not isinstance(exception_raised, self.expect_exception): + raise AssertionError( + f"Expected {self.expect_exception.__name__} " + f"but got {type(exception_raised).__name__}: {exception_raised}" + ) + + # Validate post-state expectations if provided + if self.post is not None and actual_post_state is not None: + self.post.validate_against_state(actual_post_state) + + # Return self (fixture is already complete) + return self + + def _build_block_from_spec(self, spec: BlockSpec, state: State) -> Block: + """ + Build a Block from a BlockSpec for state transition tests. + + Uses provided fields from spec, computes any missing fields. + This mimics what a local block builder would do. + + TODO: If the spec implements a State.produce_block() method in the future, + we should use that instead of manually computing fields here. Until then, + this manual approach is necessary. + + Parameters + ---------- + spec : BlockSpec + The block specification with optional field overrides. + state : State + The current state to build against. + + Returns: + ------- + Block + A complete block ready for state_transition. 
+ """ + # Use provided proposer_index or compute it + if spec.proposer_index is not None: + proposer_index = spec.proposer_index + else: + proposer_index = ValidatorIndex(int(spec.slot) % int(state.validators.count)) + + # Use provided parent_root or compute it + if spec.parent_root is not None: + parent_root = spec.parent_root + else: + temp_state = state.process_slots(spec.slot) + parent_root = hash_tree_root(temp_state.latest_block_header) + + # Use provided body or create empty one + if spec.body is not None: + body = spec.body + else: + body = BlockBody(attestations=Attestations(data=[])) + + # Use provided state_root or compute it via dry-run + if spec.state_root is not None: + state_root = spec.state_root + else: + # Need to dry-run to compute state_root + temp_state = state.process_slots(spec.slot) + temp_block = Block( + slot=spec.slot, + proposer_index=proposer_index, + parent_root=parent_root, + state_root=Bytes32.zero(), + body=body, + ) + post_state = temp_state.process_block(temp_block) + state_root = hash_tree_root(post_state) + + # Create final block with all fields + return Block( + slot=spec.slot, + proposer_index=proposer_index, + parent_root=parent_root, + state_root=state_root, + body=body, + ) diff --git a/packages/testing/src/consensus_testing/test_types/__init__.py b/packages/testing/src/consensus_testing/test_types/__init__.py new file mode 100644 index 00000000..cd03f99f --- /dev/null +++ b/packages/testing/src/consensus_testing/test_types/__init__.py @@ -0,0 +1,21 @@ +"""Test types for consensus test fixtures.""" + +from .state_expectation import StateExpectation +from .step_types import ( + AttestationStep, + BaseForkChoiceStep, + BlockStep, + ForkChoiceStep, + TickStep, +) +from .store_checks import StoreChecks + +__all__ = [ + "StateExpectation", + "StoreChecks", + "BaseForkChoiceStep", + "TickStep", + "BlockStep", + "AttestationStep", + "ForkChoiceStep", +] diff --git a/packages/testing/src/consensus_testing/test_types/state_expectation.py b/packages/testing/src/consensus_testing/test_types/state_expectation.py new file mode 100644 index 00000000..ae00cc3e --- /dev/null +++ b/packages/testing/src/consensus_testing/test_types/state_expectation.py @@ -0,0 +1,118 @@ +"""State expectation model for selective validation in state transition tests.""" + +from typing import TYPE_CHECKING + +from pydantic import BaseModel + +from lean_spec.subspecs.containers.slot import Slot +from lean_spec.types import Bytes32 + +if TYPE_CHECKING: + from lean_spec.subspecs.containers.state import State + + +class StateExpectation(BaseModel): + """ + Expected State fields after state transition (selective validation). + + All fields are optional - only specified fields are validated. + Uses Pydantic's model_fields_set to track which fields were explicitly set. + + This allows test writers to specify only the fields they care about, + making tests more focused and maintainable. 
+ + Example: + # Only validate slot and justified checkpoint + StateExpectation( + slot=Slot(10), + latest_justified_slot=Slot(8), + ) + """ + + slot: Slot | None = None + """Expected current slot.""" + + latest_justified_slot: Slot | None = None + """Expected latest justified checkpoint slot.""" + + latest_justified_root: Bytes32 | None = None + """Expected latest justified checkpoint root.""" + + latest_finalized_slot: Slot | None = None + """Expected latest finalized checkpoint slot.""" + + latest_finalized_root: Bytes32 | None = None + """Expected latest finalized checkpoint root.""" + + validator_count: int | None = None + """Expected number of validators.""" + + def validate_against_state(self, state: "State") -> None: + """ + Validate this expectation against actual State. + + Only validates fields that were explicitly set by the test writer. + Uses Pydantic's model_fields_set to determine which fields to check. + + Parameters: + ---------- + state : State + The actual state to validate against. + + Raises: + ------ + AssertionError + If any explicitly set field doesn't match the actual state value. + """ + # Get the set of fields that were explicitly provided + fields_to_check = self.model_fields_set + + for field_name in fields_to_check: + expected_value = getattr(self, field_name) + + if field_name == "slot": + actual = state.slot + if actual != expected_value: + raise AssertionError( + f"State validation failed: slot = {actual}, expected {expected_value}" + ) + + elif field_name == "latest_justified_slot": + actual = state.latest_justified.slot + if actual != expected_value: + raise AssertionError( + f"State validation failed: latest_justified.slot = {actual}, " + f"expected {expected_value}" + ) + + elif field_name == "latest_justified_root": + actual_root = state.latest_justified.root + if actual_root != expected_value: + raise AssertionError( + f"State validation failed: latest_justified.root = 0x{actual_root.hex()}, " + f"expected 0x{expected_value.hex()}" + ) + + elif field_name == "latest_finalized_slot": + actual = state.latest_finalized.slot + if actual != expected_value: + raise AssertionError( + f"State validation failed: latest_finalized.slot = {actual}, " + f"expected {expected_value}" + ) + + elif field_name == "latest_finalized_root": + actual_root = state.latest_finalized.root + if actual_root != expected_value: + raise AssertionError( + f"State validation failed: latest_finalized.root = 0x{actual_root.hex()}, " + f"expected 0x{expected_value.hex()}" + ) + + elif field_name == "validator_count": + actual_count = state.validators.count + if actual_count != expected_value: + raise AssertionError( + f"State validation failed: validator_count = {actual_count}, " + f"expected {expected_value}" + ) diff --git a/packages/testing/src/consensus_testing/test_types/step_types.py b/packages/testing/src/consensus_testing/test_types/step_types.py new file mode 100644 index 00000000..91784bf4 --- /dev/null +++ b/packages/testing/src/consensus_testing/test_types/step_types.py @@ -0,0 +1,135 @@ +"""Step types for fork choice tests.""" + +from typing import Annotated, Any, Literal, Union + +from pydantic import BaseModel, ConfigDict, Field, PrivateAttr, field_serializer + +from lean_spec.subspecs.containers import SignedAttestation +from lean_spec.subspecs.containers.block.block import Block + +from ..block_spec import BlockSpec +from .store_checks import StoreChecks + + +class BaseForkChoiceStep(BaseModel): + """ + Base class for fork choice event steps. 
+ + All step types inherit from this base and include: + - valid flag for expected success/failure + - optional Store state checks to validate after processing + """ + + valid: bool = True + """Whether this step is expected to succeed.""" + + checks: StoreChecks | None = None + """ + Store state checks to validate after processing this step. + + If provided, the fixture will validate the Store state matches + these checks after executing the step. + Only fields that are explicitly set will be validated. + """ + + +class TickStep(BaseForkChoiceStep): + """ + Time advancement step. + + Advances the fork choice store time to a specific unix timestamp. + This triggers interval-based actions like vote processing. + """ + + step_type: Literal["tick"] = "tick" + """Discriminator field for serialization.""" + + time: int + """Time to advance to (unix timestamp).""" + + +class BlockStep(BaseForkChoiceStep): + """ + Block processing step. + + Processes a block through the fork choice store. + This updates the store's block tree and may trigger head updates. + + Input: BlockSpec (can be partial or fully specified). + Output: Block object built and processed through the spec. + """ + + model_config = ConfigDict(arbitrary_types_allowed=True) + + step_type: Literal["block"] = "block" + """Discriminator field for serialization.""" + + block: BlockSpec + """ + Block specification for this step. + + Tests provide a BlockSpec with required slot and optional field overrides. + The framework fills a complete Block during make_fixture() and stores it + in the private _filled_block attribute for serialization. + """ + + # TODO: We should figure out a configuration to raise if a private attr is + # attempted to be set during model initialization. + _filled_block: Block | None = PrivateAttr(default=None) + """The filled Block, processed through the spec.""" + + @field_serializer("block", when_used="json") + def serialize_block(self, value: BlockSpec) -> dict[str, Any]: + """ + Serialize the filled Block instead of the BlockSpec. + + This ensures the fixture output contains the complete Block that was + filled from the spec, not the input BlockSpec. + + Parameters: + ---------- + value : BlockSpec + The BlockSpec field value (ignored, we use _filled_block instead). + + Returns: + ------- + dict[str, Any] + The serialized Block. + + Raises: + ------ + ValueError + If _filled_block is None (make_fixture not called yet). + """ + del value + if self._filled_block is None: + raise ValueError( + "Block not filled yet - make_fixture() must be called before serialization. " + "This BlockStep should only be serialized after the fixture has been processed." + ) + return self._filled_block.model_dump(mode="json") + + +class AttestationStep(BaseForkChoiceStep): + """ + Attestation processing step. + + Processes an attestation (signed vote) received from gossip. + This updates validator vote tracking in the store. + + Note: Attestations included in blocks are processed automatically + when the block is processed. This step is for gossip attestations. 
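+ + Example (sketch; `signed_att` is assumed to be a pre-built SignedAttestation): + + AttestationStep( + attestation=signed_att, + checks=StoreChecks(head_slot=Slot(2)), + )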
+ """ + + step_type: Literal["attestation"] = "attestation" + """Discriminator field for serialization.""" + + attestation: SignedAttestation + """Attestation (SignedAttestation) to process from gossip.""" + + +# Discriminated union type for all fork choice steps +ForkChoiceStep = Annotated[ + Union[TickStep, BlockStep, AttestationStep], + Field(discriminator="step_type"), +] diff --git a/packages/testing/src/consensus_testing/test_types/store_checks.py b/packages/testing/src/consensus_testing/test_types/store_checks.py new file mode 100644 index 00000000..df524ae6 --- /dev/null +++ b/packages/testing/src/consensus_testing/test_types/store_checks.py @@ -0,0 +1,141 @@ +"""Store checks model for selective validation in fork choice tests.""" + +from typing import TYPE_CHECKING + +from pydantic import BaseModel + +from lean_spec.subspecs.containers.slot import Slot +from lean_spec.types import Bytes32, Uint64 + +if TYPE_CHECKING: + from lean_spec.subspecs.forkchoice.store import Store + + +class StoreChecks(BaseModel): + """ + Store state checks for fork choice tests. + + All fields are optional - only specified fields are validated. + Uses Pydantic's model_fields_set to track which fields were explicitly set. + + This allows test writers to specify only the fields they care about, + making tests more focused and maintainable. + + Example: + # Only validate head slot and justified checkpoint + StoreChecks( + head_slot=Slot(5), + latest_justified_slot=Slot(4), + ) + """ + + time: Uint64 | None = None + """Expected store time (in intervals since genesis).""" + + head_slot: Slot | None = None + """Expected head block slot.""" + + head_root: Bytes32 | None = None + """Expected head block root.""" + + latest_justified_slot: Slot | None = None + """Expected latest justified checkpoint slot.""" + + latest_justified_root: Bytes32 | None = None + """Expected latest justified checkpoint root.""" + + latest_finalized_slot: Slot | None = None + """Expected latest finalized checkpoint slot.""" + + latest_finalized_root: Bytes32 | None = None + """Expected latest finalized checkpoint root.""" + + safe_target: Bytes32 | None = None + """Expected safe target root.""" + + def validate_against_store(self, store: "Store", step_index: int) -> None: + """ + Validate these checks against actual Store state. + + Only validates fields that were explicitly set by the test writer. + Uses Pydantic's model_fields_set to determine which fields to check. + + Parameters: + ---------- + store : Store + The fork choice store to validate against. + step_index : int + Index of the step being validated (for error messages). + + Raises: + ------ + AssertionError + If any explicitly set field doesn't match the actual store value. 
+ """ + # Get the set of fields that were explicitly provided + fields_to_check = self.model_fields_set + + for field_name in fields_to_check: + expected_value = getattr(self, field_name) + + if field_name == "time": + actual = store.time + if actual != expected_value: + raise AssertionError( + f"Step {step_index}: time = {actual}, expected {expected_value}" + ) + + elif field_name == "head_slot": + actual = store.blocks[store.head].slot + if actual != expected_value: + raise AssertionError( + f"Step {step_index}: head.slot = {actual}, expected {expected_value}" + ) + + elif field_name == "head_root": + actual_root = store.head + if actual_root != expected_value: + raise AssertionError( + f"Step {step_index}: head.root = 0x{actual_root.hex()}, " + f"expected 0x{expected_value.hex()}" + ) + + elif field_name == "latest_justified_slot": + actual = store.latest_justified.slot + if actual != expected_value: + raise AssertionError( + f"Step {step_index}: latest_justified.slot = {actual}, " + f"expected {expected_value}" + ) + + elif field_name == "latest_justified_root": + actual_root = store.latest_justified.root + if actual_root != expected_value: + raise AssertionError( + f"Step {step_index}: latest_justified.root = 0x{actual_root.hex()}, " + f"expected 0x{expected_value.hex()}" + ) + + elif field_name == "latest_finalized_slot": + actual = store.latest_finalized.slot + if actual != expected_value: + raise AssertionError( + f"Step {step_index}: latest_finalized.slot = {actual}, " + f"expected {expected_value}" + ) + + elif field_name == "latest_finalized_root": + actual_root = store.latest_finalized.root + if actual_root != expected_value: + raise AssertionError( + f"Step {step_index}: latest_finalized.root = 0x{actual_root.hex()}, " + f"expected 0x{expected_value.hex()}" + ) + + elif field_name == "safe_target": + actual_root = store.safe_target + if actual_root != expected_value: + raise AssertionError( + f"Step {step_index}: safe_target = 0x{actual_root.hex()}, " + f"expected 0x{expected_value.hex()}" + ) diff --git a/packages/testing/src/framework/__init__.py b/packages/testing/src/framework/__init__.py new file mode 100644 index 00000000..7b81edbc --- /dev/null +++ b/packages/testing/src/framework/__init__.py @@ -0,0 +1,12 @@ +""" +Shared testing infrastructure for Ethereum consensus and execution layers. + +This module provides base classes and utilities that are common across +both consensus and execution layer testing. +""" + +from framework.base_types import CamelModel + +__all__ = [ + "CamelModel", +] diff --git a/packages/testing/src/framework/base_types.py b/packages/testing/src/framework/base_types.py new file mode 100644 index 00000000..39b9eed1 --- /dev/null +++ b/packages/testing/src/framework/base_types.py @@ -0,0 +1,30 @@ +"""Base Pydantic models for Ethereum test fixtures.""" + +from typing import Any + +from pydantic import BaseModel, ConfigDict +from pydantic.alias_generators import to_camel +from typing_extensions import Self + + +class CamelModel(BaseModel): + """ + A base model that converts field names to camel case when serializing. + + For example, the field name `current_slot` in a Python model will be + represented as `currentSlot` when it is serialized to JSON. + + This is used across both consensus and execution layer test fixtures + to maintain consistency with Ethereum test format specifications. 
+ """ + + model_config = ConfigDict( + alias_generator=to_camel, + populate_by_name=True, + validate_default=True, + arbitrary_types_allowed=True, + ) + + def copy(self: Self, **kwargs: Any) -> Self: + """Create a copy of the model with the updated fields that are validated.""" + return self.__class__(**(self.model_dump(exclude_unset=True) | kwargs)) diff --git a/packages/testing/src/framework/cli/__init__.py b/packages/testing/src/framework/cli/__init__.py new file mode 100644 index 00000000..e55bf34b --- /dev/null +++ b/packages/testing/src/framework/cli/__init__.py @@ -0,0 +1 @@ +"""CLI tools for Ethereum test fixture generation.""" diff --git a/packages/testing/src/framework/cli/fill.py b/packages/testing/src/framework/cli/fill.py new file mode 100644 index 00000000..5b79c696 --- /dev/null +++ b/packages/testing/src/framework/cli/fill.py @@ -0,0 +1,92 @@ +"""Unified CLI command for generating Ethereum test fixtures across all layers.""" + +import sys +from pathlib import Path +from typing import Sequence + +import click +import pytest + + +@click.command( + context_settings={ + "ignore_unknown_options": True, + "allow_extra_args": True, + } +) +@click.argument("pytest_args", nargs=-1, type=click.UNPROCESSED) +@click.option( + "--output", + "-o", + default="fixtures", + help="Output directory for generated fixtures", +) +@click.option( + "--fork", + required=True, + help="Fork to generate fixtures for (e.g., Devnet for consensus, Shanghai for execution)", +) +@click.option( + "--layer", + type=click.Choice(["consensus", "execution"], case_sensitive=False), + default="consensus", + help="Ethereum layer to generate fixtures for (default: consensus)", +) +@click.option( + "--clean", + is_flag=True, + help="Clean output directory before generating", +) +@click.pass_context +def fill( + ctx: click.Context, + pytest_args: Sequence[str], + output: str, + fork: str, + layer: str, + clean: bool, +) -> None: + """ + Generate Ethereum test fixtures from test specifications. + + This unified command works across both consensus and execution layers. + The --layer flag determines which layer's forks and fixtures to use. 
+ + Examples: + # Generate consensus layer fixtures + fill tests/spec_tests/devnet --fork=Devnet --layer=consensus --clean -v + + # Generate execution layer fixtures (future) + fill tests/spec_tests/shanghai --fork=Shanghai --layer=execution --clean -v + + # Default layer is consensus + fill tests/spec_tests/devnet --fork=Devnet --clean -v + """ + # Look for pytest-fill.ini in current directory (project root) + config_path = Path.cwd() / "pytest-fill.ini" + + # Build pytest arguments + args = [ + "-c", + str(config_path), + f"--output={output}", + f"--fork={fork}", + f"--layer={layer}", + ] + + if clean: + args.append("--clean") + + # Add all pytest args + args.extend(pytest_args) + + # Add extra click context args + args.extend(ctx.args) + + # Run pytest + exit_code = pytest.main(args) + sys.exit(exit_code) + + +if __name__ == "__main__": + fill() diff --git a/packages/testing/src/framework/forks/__init__.py b/packages/testing/src/framework/forks/__init__.py new file mode 100644 index 00000000..4f725630 --- /dev/null +++ b/packages/testing/src/framework/forks/__init__.py @@ -0,0 +1,22 @@ +"""Base fork infrastructure for Ethereum testing.""" + +from framework.forks.base import BaseFork, BaseForkMeta +from framework.forks.helpers import ( + discover_forks, + get_all_forks, + get_fork_by_name, + get_forks, + get_forks_with_no_parents, + get_from_until_fork_set, +) + +__all__ = [ + "BaseFork", + "BaseForkMeta", + "discover_forks", + "get_all_forks", + "get_fork_by_name", + "get_forks", + "get_forks_with_no_parents", + "get_from_until_fork_set", +] diff --git a/packages/testing/src/framework/forks/base.py b/packages/testing/src/framework/forks/base.py new file mode 100644 index 00000000..113b1c6c --- /dev/null +++ b/packages/testing/src/framework/forks/base.py @@ -0,0 +1,129 @@ +"""Base fork class for Ethereum layer testing.""" + +from abc import ABC, ABCMeta, abstractmethod +from typing import ClassVar, Set, Type + + +class BaseForkMeta(ABCMeta): + """ + Metaclass for BaseFork enabling fork comparisons via inheritance. + + Fork comparisons work by checking subclass relationships. + For example, if ForkB inherits from ForkA, then ForkA < ForkB. + + This metaclass is shared across both consensus and execution layers, + allowing consistent fork comparison logic regardless of layer. + """ + + @abstractmethod + def name(cls) -> str: + """Return the name of the fork.""" + pass + + def __repr__(cls) -> str: + """Print the name of the fork, instead of the class.""" + return cls.name() + + def __gt__(cls, other: "BaseForkMeta") -> bool: + """Check if this fork is newer than another (cls > other).""" + return cls is not other and BaseForkMeta._is_subclass_of(cls, other) + + def __ge__(cls, other: "BaseForkMeta") -> bool: + """Check if this fork is newer or equal to another (cls >= other).""" + return cls is other or BaseForkMeta._is_subclass_of(cls, other) + + def __lt__(cls, other: "BaseForkMeta") -> bool: + """Check if this fork is older than another (cls < other).""" + return cls is not other and BaseForkMeta._is_subclass_of(other, cls) + + def __le__(cls, other: "BaseForkMeta") -> bool: + """Check if this fork is older or equal to another (cls <= other).""" + return cls is other or BaseForkMeta._is_subclass_of(other, cls) + + @staticmethod + def _is_subclass_of(a: "BaseForkMeta", b: "BaseForkMeta") -> bool: + """ + Check if fork `a` is a subclass of fork `b`. + + For transition forks, checks if the destination fork is a subclass. 
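+ + Example (sketch; `Devnet2` is a hypothetical fork inheriting from Devnet): + + class Devnet2(Devnet): ... + + assert Devnet < Devnet2 # comparisons follow the inheritance chain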
+ """ + # Handle transition forks by checking their destination + a = BaseForkMeta._maybe_transitioned(a) + b = BaseForkMeta._maybe_transitioned(b) + return issubclass(a, b) + + @staticmethod + def _maybe_transitioned(fork_cls: "BaseForkMeta") -> "BaseForkMeta": + """ + Return the destination fork if this is a transition fork. Otherwise, + return the fork as-is. + """ + if hasattr(fork_cls, "transitions_to"): + return fork_cls.transitions_to() # type: ignore[no-any-return] + return fork_cls + + +class BaseFork(ABC, metaclass=BaseForkMeta): + """ + Base class for Ethereum layer forks. + + Each fork represents a specific version of the protocol (consensus or execution). + Forks form an inheritance hierarchy where newer forks inherit from older ones. + + This base class is shared across both consensus and execution layers, but each + layer will define its own fork hierarchy with different fork names and properties. + """ + + # Fork metadata + _ignore: ClassVar[bool] = False + """If True, this fork will be excluded from the primary fork set.""" + + _children: ClassVar[Set[Type["BaseFork"]]] = set() + """Set of forks that directly inherit from this fork.""" + + def __init_subclass__( + cls, + *, + ignore: bool = False, + ) -> None: + """ + Initialize fork subclass with metadata. + + Args: + ignore: If True, exclude this fork from ALL_FORKS. + """ + super().__init_subclass__() + cls._ignore = ignore + cls._children = set() + + # Track parent-child relationships + base_class = cls.__bases__[0] + if base_class != BaseFork and hasattr(base_class, "_children"): + base_class._children.add(cls) + + @classmethod + @abstractmethod + def name(cls) -> str: + """ + Return the name of the fork as it appears in test fixtures. + + By default, this is the class name (e.g., "Devnet" for consensus, + "Shanghai" for execution). + This is used in the 'network' field of generated fixtures. + """ + pass + + @classmethod + def ignore(cls) -> bool: + """Return whether this fork should be ignored in test generation.""" + return cls._ignore + + @classmethod + def __str__(cls) -> str: + """Return string representation of the fork.""" + return cls.name() + + @classmethod + def __repr__(cls) -> str: + """Return repr of the fork.""" + return f"Fork({cls.name()})" diff --git a/packages/testing/src/framework/forks/helpers.py b/packages/testing/src/framework/forks/helpers.py new file mode 100644 index 00000000..e0aa0047 --- /dev/null +++ b/packages/testing/src/framework/forks/helpers.py @@ -0,0 +1,117 @@ +"""Generic fork helper functions for any Ethereum layer.""" + +from types import ModuleType +from typing import FrozenSet, List, Set, Type + +from framework.forks import BaseFork + + +def discover_forks(forks_module: ModuleType) -> List[Type[BaseFork]]: + """ + Discover all fork classes by scanning a forks module. + + Args: + forks_module: The module containing fork definitions (e.g., consensus_testing.forks.forks). + + Returns: + List of all BaseFork subclasses found in the module. + """ + discovered: List[Type[BaseFork]] = [] + for name in dir(forks_module): + obj = getattr(forks_module, name) + # Check if it's a type (class) and subclass of BaseFork (but not BaseFork itself) + if isinstance(obj, type) and issubclass(obj, BaseFork) and obj is not BaseFork: + discovered.append(obj) + return discovered + + +def get_all_forks(forks_module: ModuleType) -> FrozenSet[Type[BaseFork]]: + """ + Get all available forks from a forks module, excluding ignored forks. + + Args: + forks_module: The module containing fork definitions. 
+ + Returns: + Frozen set of all non-ignored fork classes. + """ + all_forks = discover_forks(forks_module) + return frozenset(fork for fork in all_forks if not fork.ignore()) + + +def get_forks(all_forks: FrozenSet[Type[BaseFork]]) -> Set[Type[BaseFork]]: + """ + Convert a frozen set of forks to a regular set. + + Args: + all_forks: Frozen set of fork classes. + + Returns: + Set of fork classes. + """ + return set(all_forks) + + +def get_fork_by_name(all_forks: FrozenSet[Type[BaseFork]], fork_name: str) -> Type[BaseFork] | None: + """ + Get a fork class by its name. + + Args: + all_forks: Set of available forks to search. + fork_name: Name of the fork (case-insensitive). + + Returns: + The fork class, or None if not found. + """ + for fork in all_forks: + if fork.name().lower() == fork_name.lower(): + return fork + return None + + +def get_forks_with_no_parents(forks: Set[Type[BaseFork]]) -> Set[Type[BaseFork]]: + """ + Get all forks that have no parent forks in the given set. + + Args: + forks: Set of forks to search. + + Returns: + Set of forks with no parents (root forks). + """ + result: Set[Type[BaseFork]] = set() + for fork in forks: + has_parent = False + for other_fork in forks - {fork}: + if other_fork < fork: # other_fork is older than fork + has_parent = True + break + if not has_parent: + result.add(fork) + return result + + +def get_from_until_fork_set( + forks: Set[Type[BaseFork]], + forks_from: Set[Type[BaseFork]], + forks_until: Set[Type[BaseFork]], +) -> Set[Type[BaseFork]]: + """ + Get all forks in the range from forks_from to forks_until (inclusive). + + Args: + forks: The complete set of forks to filter. + forks_from: Start of the range (inclusive). + forks_until: End of the range (inclusive). + + Returns: + Set of forks in the specified range. + """ + result: Set[Type[BaseFork]] = set() + for fork_from in forks_from: + for fork_until in forks_until: + for fork in forks: + # Fork must be >= fork_from and <= fork_until + if fork >= fork_from and fork <= fork_until: + result.add(fork) + return result diff --git a/packages/testing/src/framework/pytest_plugins/__init__.py b/packages/testing/src/framework/pytest_plugins/__init__.py new file mode 100644 index 00000000..ae93da20 --- /dev/null +++ b/packages/testing/src/framework/pytest_plugins/__init__.py @@ -0,0 +1 @@ +"""Pytest plugins for Ethereum test fixture generation.""" diff --git a/packages/testing/src/framework/pytest_plugins/filler.py b/packages/testing/src/framework/pytest_plugins/filler.py new file mode 100644 index 00000000..b4c4aff2 --- /dev/null +++ b/packages/testing/src/framework/pytest_plugins/filler.py @@ -0,0 +1,604 @@ +"""Layer-agnostic pytest plugin for generating Ethereum test fixtures.""" + +import importlib +import json +import shutil +import sys +from collections import defaultdict +from pathlib import Path +from typing import Any, List + +import pytest + + +class FixtureCollector: + """Collects generated fixtures and writes them to disk.""" + + def __init__(self, output_dir: Path, fork: str, layer: str): + """ + Initialize the fixture collector. + + Args: + output_dir: Root directory for generated fixtures. + fork: The fork name (e.g., "Devnet", "Shanghai"). + layer: The Ethereum layer (e.g., "consensus", "execution"). 
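+
+        Fixtures are buffered in memory as (test_name, fixture_format,
+        fixture, test_nodeid) tuples and only written to disk when
+        write_fixtures() is called at session end.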
+ """ + self.output_dir = output_dir + self.fork = fork + self.layer = layer + self.fixtures: List[tuple[str, str, Any, str]] = [] + + def add_fixture( + self, + test_name: str, + fixture_format: str, + fixture: Any, + test_nodeid: str, + config: pytest.Config | None = None, + ) -> None: + """ + Add a fixture to the collection. + + Args: + test_name: Name of the test that generated this fixture. + fixture_format: Format name (e.g., "state_transition_test"). + fixture: The fixture object. + test_nodeid: Complete pytest node ID. + config: Pytest config object to attach fixture path metadata. + """ + self.fixtures.append((test_name, fixture_format, fixture, test_nodeid)) + + if config is not None: + nodeid_parts = test_nodeid.split("::") + test_file_path = nodeid_parts[0] + func_name_with_params = nodeid_parts[1] if len(nodeid_parts) > 1 else "" + base_func_name = func_name_with_params.split("[")[0] + + test_file = Path(test_file_path) + # Extract test path relative to tests/{layer} + # e.g., tests/consensus/devnet/... -> devnet/... + layer = config.test_layer if hasattr(config, "test_layer") else "consensus" + + try: + relative_path = test_file.relative_to(f"tests/{layer}") + except ValueError: + # Fallback: try to extract from full path + relative_path = test_file + + test_path = relative_path.with_suffix("") + + # Build output path: fixtures/{layer}/{format}/{test_path} + format_dir = fixture_format.replace("_test", "") + fixture_dir = self.output_dir / layer / format_dir / test_path + fixture_path = fixture_dir / f"{base_func_name}.json" + + config.fixture_path_absolute = str(fixture_path.absolute()) # type: ignore[attr-defined] + config.fixture_path_relative = str(fixture_path.relative_to(self.output_dir)) # type: ignore[attr-defined] + config.fixture_format = fixture_format # type: ignore[attr-defined] + + def write_fixtures(self) -> None: + """Write all collected fixtures to disk, grouped by test function.""" + grouped: dict[tuple[str, str, str], list[tuple[str, Any, str]]] = defaultdict(list) + + for test_name, fixture_format, fixture, test_nodeid in self.fixtures: + nodeid_parts = test_nodeid.split("::") + test_file_path = nodeid_parts[0] + func_name_with_params = nodeid_parts[1] if len(nodeid_parts) > 1 else "" + base_func_name = func_name_with_params.split("[")[0] + + group_key = (test_file_path, base_func_name, fixture_format) + grouped[group_key].append((test_name, fixture, test_nodeid)) + + for (test_file_path, base_func_name, fixture_format), fixtures_list in grouped.items(): + test_file = Path(test_file_path) + + # Extract test path relative to tests/{layer} + # e.g., tests/consensus/devnet/... -> devnet/... 
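+            # Concretely, for tests/consensus/devnet/stf_examples/test_blocks.py
+            # filled as "state_transition_test", output lands under
+            # fixtures/consensus/state_transition/devnet/stf_examples/test_blocks/<func>.json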
+ try: + relative_path = test_file.relative_to(f"tests/{self.layer}") + except ValueError: + # Fallback: use full path + relative_path = test_file + + test_path = relative_path.with_suffix("") + + # Build output path: fixtures/{layer}/{format}/{test_path} + format_dir = fixture_format.replace("_test", "") + fixture_dir = self.output_dir / self.layer / format_dir / test_path + fixture_dir.mkdir(parents=True, exist_ok=True) + + output_file = fixture_dir / f"{base_func_name}.json" + + all_tests = {} + for test_name, fixture, test_nodeid in fixtures_list: + del test_name + test_id = f"{test_nodeid}[fork_{self.fork}-{fixture_format}]" + fixture_dict = fixture.json_dict_with_info() + all_tests[test_id] = fixture_dict + + with open(output_file, "w") as f: + json.dump(all_tests, f, indent=4) + + +def pytest_addoption(parser: pytest.Parser) -> None: + """Add command-line options for fixture generation.""" + group = parser.getgroup("fill", "leanSpec fixture generation") + group.addoption( + "--output", + action="store", + default="fixtures", + help="Output directory for generated fixtures", + ) + group.addoption( + "--fork", + action="store", + required=True, + help="Fork to generate fixtures for", + ) + group.addoption( + "--layer", + action="store", + default="consensus", + help="Ethereum layer (consensus or execution, default: consensus)", + ) + group.addoption( + "--clean", + action="store_true", + default=False, + help="Clean output directory before generating", + ) + + +def pytest_ignore_collect(collection_path: Path, config: pytest.Config) -> bool | None: + """ + Ignore test collection for paths not in the current layer. + + This prevents pytest from collecting tests from other layers, + reducing overhead significantly when there are many tests. + """ + if not hasattr(config, "test_layer"): + return None + + layer = config.test_layer + + # Check if path is under tests/ directory + try: + relative_path = collection_path.relative_to(Path.cwd() / "tests") + except ValueError: + # Not under tests/, let pytest handle it normally + return None + + # If it's directly under tests/{layer}, don't ignore + if str(relative_path).startswith(layer): + return None + + # Check if it's a different layer directory or unit tests + parts = relative_path.parts + if parts: + # Known layer directories + known_layers = {"consensus", "execution"} + if parts[0] in known_layers: + # It's a different layer, ignore it + return True + # It's probably unit tests (tests/lean_spec), ignore during fill + return True + + return None + + +def pytest_configure(config: pytest.Config) -> None: + """Setup fixture generation session with layer-specific modules.""" + # Get layer and validate + layer = config.getoption("--layer", default="consensus").lower() + known_layers = {"consensus", "execution"} + if layer not in known_layers: + pytest.exit( + f"Invalid layer: {layer}. 
Must be one of: {', '.join(known_layers)}", + returncode=pytest.ExitCode.USAGE_ERROR, + ) + + # Store layer for later use (needed by pytest_ignore_collect hook) + config.test_layer = layer # type: ignore[attr-defined] + + # Dynamically import layer-specific package + try: + layer_module = importlib.import_module(f"{layer}_testing") + config.layer_module = layer_module # type: ignore[attr-defined] + except ImportError as e: + pytest.exit( + f"Failed to import {layer}_testing module: {e}", + returncode=pytest.ExitCode.USAGE_ERROR, + ) + + # Register layer-specific test fixture formats + _register_layer_fixtures(config, layer) + + # Register fork validity markers + config.addinivalue_line( + "markers", + "valid_from(fork): specifies from which fork a test case is valid", + ) + config.addinivalue_line( + "markers", + "valid_until(fork): specifies until which fork a test case is valid", + ) + config.addinivalue_line( + "markers", + "valid_at(fork): specifies at which fork a test case is valid", + ) + + # Get options + output_dir = Path(config.getoption("--output")) + fork_name = config.getoption("--fork") + clean = config.getoption("--clean") + + # Get available forks from layer-specific module + get_forks = layer_module.forks.get_forks + get_fork_by_name = layer_module.forks.get_fork_by_name + + available_forks = get_forks() + available_fork_names = sorted(fork.name() for fork in available_forks) + + # Validate fork + if not fork_name: + print("Error: --fork is required", file=sys.stderr) + print( + f"Available {layer} forks: {', '.join(available_fork_names)}", + file=sys.stderr, + ) + pytest.exit("Missing required --fork option.", returncode=pytest.ExitCode.USAGE_ERROR) + + fork_class = get_fork_by_name(fork_name) + if fork_class is None: + print( + f"Error: Unsupported fork for {layer} layer: {fork_name}\n", + file=sys.stderr, + ) + print( + f"Available {layer} forks: {', '.join(available_fork_names)}", + file=sys.stderr, + ) + pytest.exit("Invalid fork specified.", returncode=pytest.ExitCode.USAGE_ERROR) + + # Check output directory + if output_dir.exists() and any(output_dir.iterdir()): + if not clean: + contents = list(output_dir.iterdir())[:5] + summary = ", ".join(item.name for item in contents) + if len(list(output_dir.iterdir())) > 5: + summary += ", ..." + pytest.exit( + f"Output directory '{output_dir}' is not empty. " + f"Contains: {summary}. 
Use --clean to remove all existing files " + "or specify a different output directory.", + returncode=pytest.ExitCode.USAGE_ERROR, + ) + shutil.rmtree(output_dir) + + output_dir.mkdir(parents=True, exist_ok=True) + + # Create collector with layer info + config.fixture_collector = FixtureCollector(output_dir, fork_name, layer) # type: ignore[attr-defined] + config.test_fork_class = fork_class # type: ignore[attr-defined] + + +def pytest_collection_modifyitems(config: pytest.Config, items: List[pytest.Item]) -> None: + """Modify collected test items to deselect tests not valid for the selected fork.""" + if not hasattr(config, "test_fork_class"): + return + + fork_class = config.test_fork_class + layer_module = config.layer_module # type: ignore[attr-defined] + get_fork_by_name = layer_module.forks.get_fork_by_name + verbose = config.getoption("verbose") + deselected = [] + selected = [] + + for item in items: + if not _is_test_item_valid_for_fork(item, fork_class, get_fork_by_name): + if verbose < 2: + deselected.append(item) + else: + selected.append(item) + else: + selected.append(item) + + if deselected: + items[:] = selected + config.hook.pytest_deselected(items=deselected) + + +def _is_test_item_valid_for_fork(item: pytest.Item, fork_class: Any, get_fork_by_name: Any) -> bool: + """Check if a test item is valid for the given fork based on validity markers.""" + markers = list(item.iter_markers()) + + has_valid_from = False + has_valid_until = False + has_valid_at = False + + valid_from_forks = [] + valid_until_forks = [] + valid_at_forks = [] + + for marker in markers: + if marker.name == "valid_from": + has_valid_from = True + for fork_name in marker.args: + target_fork = get_fork_by_name(fork_name) + if target_fork: + valid_from_forks.append(target_fork) + elif marker.name == "valid_until": + has_valid_until = True + for fork_name in marker.args: + target_fork = get_fork_by_name(fork_name) + if target_fork: + valid_until_forks.append(target_fork) + elif marker.name == "valid_at": + has_valid_at = True + for fork_name in marker.args: + target_fork = get_fork_by_name(fork_name) + if target_fork: + valid_at_forks.append(target_fork) + + if not (has_valid_from or has_valid_until or has_valid_at): + return True + + if has_valid_at: + return fork_class in valid_at_forks + + from_valid = True + if has_valid_from: + from_valid = any(fork_class >= from_fork for from_fork in valid_from_forks) + + until_valid = True + if has_valid_until: + until_valid = any(fork_class <= until_fork for until_fork in valid_until_forks) + + return from_valid and until_valid + + +def pytest_sessionfinish(session: pytest.Session, exitstatus: int) -> None: + """Write all collected fixtures at the end of the session.""" + if hasattr(session.config, "fixture_collector"): + session.config.fixture_collector.write_fixtures() + + +@pytest.hookimpl(tryfirst=True, hookwrapper=True) +def pytest_runtest_makereport(item: pytest.Item, call: pytest.CallInfo[None]) -> Any: + """Make each test's fixture json path available to the test report.""" + outcome = yield + report = outcome.get_result() + + if call.when == "call": + if hasattr(item.config, "fixture_path_absolute") and hasattr( + item.config, "fixture_path_relative" + ): + report.user_properties.append( + ("fixture_path_absolute", item.config.fixture_path_absolute) + ) + report.user_properties.append( + ("fixture_path_relative", item.config.fixture_path_relative) + ) + if hasattr(item.config, "fixture_format"): + report.user_properties.append(("fixture_format", 
item.config.fixture_format)) + + +@pytest.fixture +def fork(request: pytest.FixtureRequest) -> Any: + """Parametrize test cases by fork (dynamically loaded based on layer).""" + pass + + +@pytest.fixture +def test_case_description(request: pytest.FixtureRequest) -> str: + """Extract and combine docstrings from test class and function.""" + description_unavailable = ( + "No description available - add a docstring to the python test class or function." + ) + test_class_doc = "" + test_function_doc = "" + + if hasattr(request.node, "cls") and request.cls: + test_class_doc = f"Test class documentation:\n{request.cls.__doc__}" + if hasattr(request.node, "function") and request.function.__doc__: + test_function_doc = f"{request.function.__doc__}" + + if not test_class_doc and not test_function_doc: + return description_unavailable + + combined_docstring = f"{test_class_doc}\n\n{test_function_doc}".strip() + return combined_docstring + + +@pytest.fixture(scope="function") +def pre(request: pytest.FixtureRequest) -> Any: + """ + Default pre-state (layer-specific). + + Tests can request this fixture to customize the initial state, + or omit it to use the default (auto-injected by framework). + """ + layer = request.config.test_layer # type: ignore[attr-defined] + + if layer == "execution": + pytest.exit( + "Execution layer testing is not yet implemented. Use --layer=consensus (default).", + returncode=pytest.ExitCode.USAGE_ERROR, + ) + + layer_module = request.config.layer_module # type: ignore[attr-defined] + + if hasattr(request, "param"): + return layer_module.generate_pre_state(**request.param) + + return layer_module.generate_pre_state() + + +def base_spec_filler_parametrizer(fixture_class: Any) -> Any: + """ + Generate pytest.fixture for a given fixture class. + + Args: + fixture_class: The fixture class to create a parametrizer for. + + Returns: + A pytest fixture function that creates wrapper instances. 
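+
+    The fixture is registered under `fixture_class.format_name` (e.g.,
+    "state_transition_test"), which is the name tests use to request it.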
+ """ + + @pytest.fixture( + scope="function", + name=fixture_class.format_name, + ) + def base_spec_filler_parametrizer_func( + request: pytest.FixtureRequest, + fork: Any, + test_case_description: str, + pre: Any, # Auto-inject pre fixture + ) -> Any: + """Fixture used to instantiate an auto-fillable fixture object.""" + + class FixtureWrapper(fixture_class): # type: ignore[misc] + """Wrapper class that auto-fills and collects fixtures on instantiation.""" + + def __init__(self, **kwargs: Any) -> None: + # Auto-inject pre-state if not provided by test + if "pre" not in kwargs and "anchor_state" not in kwargs: + # Determine which field to inject based on fixture type + if hasattr(fixture_class, "__annotations__"): + if "pre" in fixture_class.__annotations__: + kwargs["pre"] = pre + elif "anchor_state" in fixture_class.__annotations__: + kwargs["anchor_state"] = pre + + super().__init__(**kwargs) + + filled_fixture = self.make_fixture() + filled_fixture.fill_info( + test_id=request.node.nodeid, + description=test_case_description, + fork=fork, + ) + + if hasattr(request.config, "fixture_collector"): + request.config.fixture_collector.add_fixture( + test_name=request.node.name, + fixture_format=filled_fixture.format_name, + fixture=filled_fixture, + test_nodeid=request.node.nodeid, + config=request.config, + ) + + return FixtureWrapper + + return base_spec_filler_parametrizer_func + + +def pytest_generate_tests(metafunc: pytest.Metafunc) -> None: + """Pytest hook to dynamically generate test cases for each fork.""" + if "fork" not in metafunc.fixturenames: + return + + fork_class = metafunc.config.test_fork_class # type: ignore[attr-defined] + layer_module = metafunc.config.layer_module # type: ignore[attr-defined] + get_fork_by_name = layer_module.forks.get_fork_by_name + + if not _is_test_valid_for_fork(metafunc, fork_class, get_fork_by_name): + verbose = metafunc.config.getoption("verbose") + if verbose >= 2: + metafunc.parametrize( + "fork", + [ + pytest.param( + None, + marks=pytest.mark.skip( + reason=f"Test not valid for fork {fork_class.name()}" + ), + ) + ], + scope="function", + ) + return + + metafunc.parametrize( + "fork", + [pytest.param(fork_class, id=f"fork_{fork_class.name()}")], + scope="function", + ) + + +def _is_test_valid_for_fork( + metafunc: pytest.Metafunc, fork_class: Any, get_fork_by_name: Any +) -> bool: + """Check if a test is valid for the given fork based on validity markers.""" + markers = list(metafunc.definition.iter_markers()) + + has_valid_from = False + has_valid_until = False + has_valid_at = False + + valid_from_forks = [] + valid_until_forks = [] + valid_at_forks = [] + + for marker in markers: + if marker.name == "valid_from": + has_valid_from = True + for fork_name in marker.args: + target_fork = get_fork_by_name(fork_name) + if target_fork: + valid_from_forks.append(target_fork) + elif marker.name == "valid_until": + has_valid_until = True + for fork_name in marker.args: + target_fork = get_fork_by_name(fork_name) + if target_fork: + valid_until_forks.append(target_fork) + elif marker.name == "valid_at": + has_valid_at = True + for fork_name in marker.args: + target_fork = get_fork_by_name(fork_name) + if target_fork: + valid_at_forks.append(target_fork) + + if not (has_valid_from or has_valid_until or has_valid_at): + return True + + if has_valid_at: + return fork_class in valid_at_forks + + from_valid = True + if has_valid_from: + from_valid = any(fork_class >= from_fork for from_fork in valid_from_forks) + + until_valid = True + if 
has_valid_until: + until_valid = any(fork_class <= until_fork for until_fork in valid_until_forks) + + return from_valid and until_valid + + +def _register_layer_fixtures(config: pytest.Config, layer: str) -> None: + """Register layer-specific test fixture formats during configuration.""" + try: + # Import the test_fixtures module + fixtures_module = importlib.import_module(f"{layer}_testing.test_fixtures") + + # Get the base fixture class based on layer + if layer == "consensus": + base_fixture_class = fixtures_module.BaseConsensusFixture + elif layer == "execution": + base_fixture_class = fixtures_module.BaseExecutionFixture + else: + return + + # Register all fixture formats globally so pytest can discover them + # This must happen during pytest_configure, before fixture discovery + for format_name, fixture_class in base_fixture_class.formats.items(): + fixture_func = base_spec_filler_parametrizer(fixture_class) + # Add to module globals so pytest can discover them + globals()[format_name] = fixture_func + except (ImportError, AttributeError) as e: + pytest.exit( + f"Failed to load {layer} layer test fixtures: {e}", + returncode=pytest.ExitCode.USAGE_ERROR, + ) diff --git a/packages/testing/src/framework/test_fixtures/__init__.py b/packages/testing/src/framework/test_fixtures/__init__.py new file mode 100644 index 00000000..45ad69b4 --- /dev/null +++ b/packages/testing/src/framework/test_fixtures/__init__.py @@ -0,0 +1,7 @@ +"""Base fixture infrastructure for Ethereum testing.""" + +from framework.test_fixtures.base import BaseFixture + +__all__ = [ + "BaseFixture", +] diff --git a/packages/testing/src/framework/test_fixtures/base.py b/packages/testing/src/framework/test_fixtures/base.py new file mode 100644 index 00000000..fb33e6ba --- /dev/null +++ b/packages/testing/src/framework/test_fixtures/base.py @@ -0,0 +1,141 @@ +"""Base fixture definitions for Ethereum test formats.""" + +import hashlib +import json +from functools import cached_property +from typing import Any, ClassVar, Dict, Type + +from pydantic import Field + +from framework.base_types import CamelModel +from framework.forks import BaseFork + + +class BaseFixture(CamelModel): + """ + Base class for all Ethereum test fixtures (consensus and execution layers). + + Provides: + - Auto-registration of fixture formats + - JSON serialization with custom encoders + - Hash generation for fixtures + - Common metadata handling + + This base class is layer-agnostic and can be used for both consensus + and execution layer fixtures. + """ + + # Class-level registry of all fixture formats + formats: ClassVar[Dict[str, Type["BaseFixture"]]] = {} + + # Fixture format metadata + format_name: ClassVar[str] = "" + """The name of this fixture format (e.g., 'state_transition_test').""" + + description: ClassVar[str] = "Unknown fixture format" + """Human-readable description of what this fixture tests.""" + + # Instance fields + network: str | None = None + """The fork/network this fixture is valid for (e.g., 'Devnet', 'Shanghai').""" + + info: Dict[str, Any] = Field(default_factory=dict, alias="_info") + """Metadata about the test (description, fork, etc.).""" + + @classmethod + def __pydantic_init_subclass__(cls, **kwargs: Any) -> None: + """ + Auto-register fixture formats when subclasses are defined. + + This hook is called automatically when a new subclass is created. + If the subclass defines a `format_name`, it will be registered in + the `formats` dictionary for later lookup. 
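+
+        For example, the subclass defining `format_name = "state_transition_test"`
+        becomes reachable as `BaseFixture.formats["state_transition_test"]`.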
+ """ + super().__pydantic_init_subclass__(**kwargs) + if cls.format_name: + BaseFixture.formats[cls.format_name] = cls + + @cached_property + def json_dict(self) -> Dict[str, Any]: + """ + Return the JSON representation of the fixture. + + Excludes the `info` field and converts snake_case to camelCase. + """ + return self.model_dump( + mode="json", + by_alias=True, + exclude_none=True, + exclude={"info"}, + ) + + @cached_property + def hash(self) -> str: + """ + Generate a deterministic hash for this fixture. + + The hash is computed from the JSON representation to ensure + consistency across runs. + """ + json_str = json.dumps( + self.json_dict, + sort_keys=True, + separators=(",", ":"), + ) + h = hashlib.sha256(json_str.encode("utf-8")).hexdigest() + return f"0x{h}" + + def json_dict_with_info(self, hash_only: bool = False) -> Dict[str, Any]: + """ + Return JSON representation with the info field included. + + Args: + hash_only: If True, only include the hash in _info. + + Returns: + Dictionary ready for JSON serialization. + """ + dict_with_info = self.json_dict.copy() + dict_with_info["_info"] = {"hash": self.hash} + if not hash_only: + dict_with_info["_info"].update(self.info) + return dict_with_info + + def fill_info( + self, + test_id: str, + description: str, + fork: BaseFork, + ) -> None: + """ + Fill metadata information for this fixture. + + Args: + test_id: Unique identifier for the test case. + description: Human-readable description of the test. + fork: The fork this test is valid for. + """ + if "comment" not in self.info: + self.info["comment"] = "`leanSpec` generated test" + self.info["test-id"] = test_id + self.info["description"] = description + self.info["fixture-format"] = self.format_name + + # Set network field on the fixture itself + self.network = fork.name() + + @classmethod + def supports_fork(cls, fork: str) -> bool: + """ + Check if this fixture format supports the given fork. + + By default, all fixtures support all forks. Override in subclasses + to restrict to specific forks. + + Args: + fork: The fork name (e.g., "devnet", "shanghai"). + + Returns: + True if the fixture supports this fork. 
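+
+        A hypothetical devnet-only format could, for instance, override
+        this with `return fork.lower() == "devnet"`.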
+ """ + return True diff --git a/pyproject.toml b/pyproject.toml index 34a19d10..7aa2a568 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -89,8 +89,15 @@ addopts = [ "--cov-report=term-missing", "--cov-report=html", "--cov-branch", + # Exclude fixture generation tests from regular test runs + # These are only run via the 'fill' command + "--ignore=tests/consensus", + "--ignore=tests/execution", +] +markers = [ + "slow: marks tests as slow (deselect with '-m \"not slow\"')", + "valid_until: marks tests as valid until a specific fork version", ] -markers = ["slow: marks tests as slow (deselect with '-m \"not slow\"')"] [tool.coverage.run] source = ["src"] @@ -100,6 +107,15 @@ branch = true number = true wrap = 80 +[tool.uv] +required-version = ">=0.7.0" + +[tool.uv.workspace] +members = ["packages/*"] + +[tool.uv.sources] +lean-ethereum-testing = { workspace = true } + [dependency-groups] dev = [ # debugging / convenience diff --git a/pytest-fill.ini b/pytest-fill.ini new file mode 100644 index 00000000..0275c220 --- /dev/null +++ b/pytest-fill.ini @@ -0,0 +1,27 @@ +[pytest] +# Configuration for fill command + +# Search for layer-specific tests +# The actual testpath will be determined dynamically by the --layer flag +# in the pytest plugin +testpaths = tests + +# Load pytest plugins +addopts = + -p framework.pytest_plugins.filler + # Show shorter tracebacks + --tb=short + # Disable coverage for fixture generation + --no-cov + +# Test discovery +python_files = test_*.py +python_classes = Test* +python_functions = test_* + +# Markers +markers = + slow: marks tests as slow (deselect with '-m "not slow"') + +# Minimal output for filling +console_output_style = classic diff --git a/src/lean_spec/subspecs/forkchoice/helpers.py b/src/lean_spec/subspecs/forkchoice/helpers.py index 06c1bd2a..ee4ef291 100644 --- a/src/lean_spec/subspecs/forkchoice/helpers.py +++ b/src/lean_spec/subspecs/forkchoice/helpers.py @@ -39,10 +39,6 @@ def get_fork_choice_head( if root == ZERO_HASH: root = min(blocks.keys(), key=lambda block_hash: blocks[block_hash].slot) - # If no votes, return the starting root immediately - if not latest_votes: - return root - # Count votes for each block (votes for descendants count for ancestors) vote_weights: Dict[Bytes32, int] = {} @@ -55,21 +51,24 @@ def get_fork_choice_head( vote_weights[block_hash] = vote_weights.get(block_hash, 0) + 1 block_hash = blocks[block_hash].parent_root - # Build children mapping for blocks above min score + # Build children mapping for ALL blocks (not just those above min_score) + # This ensures fork choice works even when there are no votes children_map: Dict[Bytes32, list[Bytes32]] = {} for block_hash, block in blocks.items(): - if block.parent_root and vote_weights.get(block_hash, 0) >= min_score: - children_map.setdefault(block.parent_root, []).append(block_hash) + if block.parent_root: + # Only include blocks that have enough votes OR when min_score is 0 + if min_score == 0 or vote_weights.get(block_hash, 0) >= min_score: + children_map.setdefault(block.parent_root, []).append(block_hash) - # Walk down tree, choosing child with most votes (tiebreak by slot, then hash) + # Walk down tree, choosing child with most votes (tiebreak by lexicographic hash) current = root while True: children = children_map.get(current, []) if not children: return current - # Choose best child: most votes, then highest slot, then highest hash - current = max(children, key=lambda x: (vote_weights.get(x, 0), blocks[x].slot, x)) + # Choose best child: most votes, then 
lexicographically highest hash + current = max(children, key=lambda x: (vote_weights.get(x, 0), x)) def get_latest_justified(states: Dict[Bytes32, "State"]) -> Optional[Checkpoint]: diff --git a/src/lean_spec/subspecs/forkchoice/store.py b/src/lean_spec/subspecs/forkchoice/store.py index 6637c00c..ec699688 100644 --- a/src/lean_spec/subspecs/forkchoice/store.py +++ b/src/lean_spec/subspecs/forkchoice/store.py @@ -4,6 +4,13 @@ The Store tracks all information required for the LMD GHOST forkchoice algorithm. """ +__all__ = [ + "Store", + "SECONDS_PER_SLOT", + "SECONDS_PER_INTERVAL", + "INTERVALS_PER_SLOT", +] + import copy from typing import Dict @@ -103,13 +110,17 @@ class method acts as a factory for creating a new Store instance. anchor_root = hash_tree_root(anchor_block) anchor_slot = anchor_block.slot + # Create checkpoint from anchor block + # The anchor block becomes the initial justified and finalized checkpoint + anchor_checkpoint = Checkpoint(root=anchor_root, slot=anchor_slot) + return cls( time=Uint64(anchor_slot * INTERVALS_PER_SLOT), config=state.config, head=anchor_root, safe_target=anchor_root, - latest_justified=state.latest_justified, - latest_finalized=state.latest_finalized, + latest_justified=anchor_checkpoint, + latest_finalized=anchor_checkpoint, blocks={anchor_root: copy.copy(anchor_block)}, states={anchor_root: copy.copy(state)}, ) diff --git a/src/lean_spec/types/byte_arrays.py b/src/lean_spec/types/byte_arrays.py index 8e82a47d..2332fb50 100644 --- a/src/lean_spec/types/byte_arrays.py +++ b/src/lean_spec/types/byte_arrays.py @@ -11,7 +11,7 @@ from typing import IO, Any, ClassVar, Iterable, SupportsIndex -from pydantic import Field, field_validator +from pydantic import Field, field_serializer, field_validator from pydantic.annotated_handlers import GetCoreSchemaHandler from pydantic_core import core_schema from typing_extensions import Self @@ -170,7 +170,9 @@ def __get_pydantic_core_schema__( # Case 2: The value needs to be parsed and validated. python_schema, ], - serialization=core_schema.plain_serializer_function_ser_schema(lambda x: x.hex()), + serialization=core_schema.plain_serializer_function_ser_schema( + lambda x: "0x" + x.hex() + ), ) def __repr__(self) -> str: @@ -264,6 +266,11 @@ def _validate_byte_list_data(cls, v: Any) -> bytes: raise ValueError(f"ByteList[{cls.LIMIT}] length {len(b)} exceeds limit {cls.LIMIT}") return b + @field_serializer("data", when_used="json") + def _serialize_data(self, value: bytes) -> str: + """Serialize bytes to 0x-prefixed hex string for JSON.""" + return "0x" + value.hex() + @classmethod def is_fixed_size(cls) -> bool: """ByteList is variable-size (length depends on the value).""" diff --git a/src/lean_spec/types/collections.py b/src/lean_spec/types/collections.py index 330de747..f297acbe 100644 --- a/src/lean_spec/types/collections.py +++ b/src/lean_spec/types/collections.py @@ -12,11 +12,12 @@ cast, ) -from pydantic import Field, field_validator +from pydantic import Field, field_serializer, field_validator from typing_extensions import Self from lean_spec.types.constants import OFFSET_BYTE_LENGTH +from .byte_arrays import BaseBytes from .ssz_base import SSZModel, SSZType from .uint import Uint32 @@ -176,6 +177,20 @@ class Uint64List32(SSZList): data: Tuple[SSZType, ...] 
= Field(default_factory=tuple) """The elements in this list, stored as an immutable tuple.""" + @field_serializer("data", when_used="json") + def _serialize_data(self, value: Tuple[SSZType, ...]) -> list[Any]: + """Serialize list elements to JSON, preserving custom type serialization.""" + result: list[Any] = [] + for item in value: + # For BaseBytes subclasses, manually add 0x prefix + if isinstance(item, BaseBytes): + result.append("0x" + item.hex()) + else: + # For other types (Uint, etc.), convert to int + # BaseUint inherits from int, so this cast is safe + result.append(item) + return result + @field_validator("data", mode="before") @classmethod def _validate_list_data(cls, v: Any) -> Tuple[SSZType, ...]: diff --git a/tests/consensus/devnet/fc_examples/test_head_selection.py b/tests/consensus/devnet/fc_examples/test_head_selection.py new file mode 100644 index 00000000..956eb10c --- /dev/null +++ b/tests/consensus/devnet/fc_examples/test_head_selection.py @@ -0,0 +1,49 @@ +"""Fork choice head selection tests for the devnet fork.""" + +import pytest +from consensus_testing import BlockSpec, BlockStep, ForkChoiceTestFiller, StoreChecks + +from lean_spec.subspecs.containers.slot import Slot + +pytestmark = pytest.mark.valid_until("Devnet") + + +def test_head_updates_after_single_block( + fork_choice_test: ForkChoiceTestFiller, +) -> None: + """ + Test that head updates correctly after processing a single block. + + With no attestations, fork choice should select the latest block + on the canonical chain. + """ + fork_choice_test( + steps=[ + BlockStep( + block=BlockSpec(slot=Slot(1)), + checks=StoreChecks(head_slot=Slot(1)), + ), + ], + ) + + +def test_head_advances_with_sequential_blocks( + fork_choice_test: ForkChoiceTestFiller, +) -> None: + """ + Test head selection advances through sequential blocks. + + Each new block should become the new head since there are no forks. + """ + fork_choice_test( + steps=[ + BlockStep( + block=BlockSpec(slot=Slot(1)), + checks=StoreChecks(head_slot=Slot(1)), + ), + BlockStep( + block=BlockSpec(slot=Slot(2)), + checks=StoreChecks(head_slot=Slot(2)), + ), + ], + ) diff --git a/tests/consensus/devnet/stf_examples/test_blocks.py b/tests/consensus/devnet/stf_examples/test_blocks.py new file mode 100644 index 00000000..b53cdcec --- /dev/null +++ b/tests/consensus/devnet/stf_examples/test_blocks.py @@ -0,0 +1,56 @@ +"""Single block processing tests for the devnet fork.""" + +import pytest +from consensus_testing import BlockSpec, StateExpectation, StateTransitionTestFiller + +from lean_spec.subspecs.containers.slot import Slot +from lean_spec.subspecs.containers.state import State, Validators +from lean_spec.subspecs.containers.validator import Validator +from lean_spec.types import Bytes52, Uint64 + +pytestmark = pytest.mark.valid_until("Devnet") + + +def test_single_empty_block(state_transition_test: StateTransitionTestFiller) -> None: + """ + Test processing a single empty block (no attestations). + + This is the simplest possible block processing test. + Uses default pre-state (auto-injected). + """ + # Pre-state is auto-injected - no need to pass it explicitly + state_transition_test( + blocks=[BlockSpec(slot=Slot(1))], + post=StateExpectation( + slot=Slot(1), + ), + ) + + +def test_single_block_with_slot_gap( + state_transition_test: StateTransitionTestFiller, +) -> None: + """Test processing a block with empty slots before it. 
Uses default pre-state.""" + state_transition_test( + blocks=[BlockSpec(slot=Slot(5))], # Skip slots 1-4 + post=StateExpectation( + slot=Slot(5), + ), + ) + + +def test_sequential_blocks( + state_transition_test: StateTransitionTestFiller, +) -> None: + """Test processing a sequence of blocks in consecutive slots. Uses default pre-state.""" + state_transition_test( + blocks=[ + BlockSpec(slot=Slot(1)), + BlockSpec(slot=Slot(2)), + BlockSpec(slot=Slot(3)), + ], + post=StateExpectation( + slot=Slot(3), + validator_count=4, + ), + ) diff --git a/tests/consensus/devnet/stf_examples/test_invalid.py b/tests/consensus/devnet/stf_examples/test_invalid.py new file mode 100644 index 00000000..76d31289 --- /dev/null +++ b/tests/consensus/devnet/stf_examples/test_invalid.py @@ -0,0 +1,54 @@ +"""Invalid block processing tests for the devnet fork.""" + +import pytest +from consensus_testing import BlockSpec, StateTransitionTestFiller + +from lean_spec.subspecs.containers.slot import Slot +from lean_spec.subspecs.containers.state import State, Validators +from lean_spec.subspecs.containers.validator import Validator +from lean_spec.types import Bytes52, Uint64, ValidatorIndex + +pytestmark = pytest.mark.valid_until("Devnet") + + +@pytest.fixture +def pre() -> State: + """ + Custom pre-state for invalid proposer test. + + This demonstrates how to override the default pre fixture + to provide custom initial state for specific tests. + """ + validators = Validators(data=[Validator(pubkey=Bytes52.zero()) for _ in range(4)]) + return State.generate_genesis( + genesis_time=Uint64(1000000), + validators=validators, + ) + + +def test_invalid_proposer( + state_transition_test: StateTransitionTestFiller, + pre: State, +) -> None: + """ + Test that blocks with incorrect proposer are rejected. + + The proposer index must match the round-robin selection for that slot. + This test demonstrates customizing the pre-state via fixture override. + """ + # For slot 1, the correct proposer is: 1 % 4 = 1 + # create a block spec with wrong proposer (index 2) + wrong_proposer = ValidatorIndex(2) + + # Use BlockSpec with wrong proposer + invalid_block_spec = BlockSpec( + slot=Slot(1), + proposer_index=wrong_proposer, + ) + + # This should fail with "Incorrect block proposer" + state_transition_test( + pre=pre, + blocks=[invalid_block_spec], + expect_exception=AssertionError, + ) diff --git a/tests/lean_spec/subspecs/forkchoice/test_fork_choice_algorithm.py b/tests/lean_spec/subspecs/forkchoice/test_fork_choice_algorithm.py index 39c37aa7..db9a65bb 100644 --- a/tests/lean_spec/subspecs/forkchoice/test_fork_choice_algorithm.py +++ b/tests/lean_spec/subspecs/forkchoice/test_fork_choice_algorithm.py @@ -61,8 +61,14 @@ class TestLMDGHOSTAlgorithm: """Test the core LMD GHOST fork choice algorithm.""" def test_fork_choice_no_votes(self, sample_blocks: Dict[Bytes32, Block]) -> None: - """Test fork choice algorithm with no votes returns the root.""" + """ + Test fork choice algorithm with no votes walks to the leaf. + + With no votes, fork choice should walk down the tree and select the + leaf block (the furthest descendant), breaking ties by lexicographic hash. 
+ """ root_hash = list(sample_blocks.keys())[0] + leaf_hash = list(sample_blocks.keys())[2] # block_b (slot 2, the leaf) head = get_fork_choice_head( blocks=sample_blocks, @@ -71,7 +77,7 @@ def test_fork_choice_no_votes(self, sample_blocks: Dict[Bytes32, Block]) -> None min_score=0, ) - assert head == root_hash + assert head == leaf_hash def test_fork_choice_single_vote(self, sample_blocks: Dict[Bytes32, Block]) -> None: """Test fork choice algorithm with a single vote.""" @@ -274,7 +280,7 @@ def test_fork_choice_tie_breaking(self) -> None: block_b_hash: block_b, } - # No votes - algorithm returns the starting root (genesis) + # No votes - algorithm breaks tie by lexicographically highest hash head = get_fork_choice_head( blocks=blocks, root=genesis_hash, @@ -282,8 +288,9 @@ def test_fork_choice_tie_breaking(self) -> None: min_score=0, ) - # Should return the genesis block when no votes exist - assert head == genesis_hash + # Should return the block with lexicographically highest hash + expected_head = max(block_a_hash, block_b_hash) + assert head == expected_head def test_fork_choice_deep_chain(self) -> None: """Test fork choice algorithm with a deeper chain.""" diff --git a/tests/lean_spec/subspecs/forkchoice/test_helpers.py b/tests/lean_spec/subspecs/forkchoice/test_helpers.py index 91777ba4..62f579a3 100644 --- a/tests/lean_spec/subspecs/forkchoice/test_helpers.py +++ b/tests/lean_spec/subspecs/forkchoice/test_helpers.py @@ -80,14 +80,15 @@ def test_get_fork_choice_head_with_votes(self, sample_blocks: Dict[Bytes32, Bloc assert head == target_hash def test_get_fork_choice_head_no_votes(self, sample_blocks: Dict[Bytes32, Block]) -> None: - """Test get_fork_choice_head with no votes returns the root.""" + """Test get_fork_choice_head with no votes walks to the leaf.""" root_hash = list(sample_blocks.keys())[0] + leaf_hash = list(sample_blocks.keys())[2] # block_b is the leaf head = get_fork_choice_head( blocks=sample_blocks, root=root_hash, latest_votes={}, min_score=0 ) - assert head == root_hash + assert head == leaf_hash def test_get_fork_choice_head_with_min_score(self, sample_blocks: Dict[Bytes32, Block]) -> None: """Test get_fork_choice_head respects minimum score.""" diff --git a/tests/lean_spec/subspecs/forkchoice/test_store_lifecycle.py b/tests/lean_spec/subspecs/forkchoice/test_store_lifecycle.py index 5556f071..e5a3c749 100644 --- a/tests/lean_spec/subspecs/forkchoice/test_store_lifecycle.py +++ b/tests/lean_spec/subspecs/forkchoice/test_store_lifecycle.py @@ -151,11 +151,13 @@ def test_store_factory_method(self) -> None: # Verify initialization anchor_root = hash_tree_root(anchor_block) + anchor_checkpoint = Checkpoint(root=anchor_root, slot=Slot(0)) assert store.config == state.config assert store.head == anchor_root assert store.safe_target == anchor_root - assert store.latest_justified == state.latest_justified - assert store.latest_finalized == state.latest_finalized + # Store uses anchor checkpoint, not state's checkpoint + assert store.latest_justified == anchor_checkpoint + assert store.latest_finalized == anchor_checkpoint assert anchor_root in store.blocks assert anchor_root in store.states diff --git a/tests/lean_spec/subspecs/forkchoice/test_validator.py b/tests/lean_spec/subspecs/forkchoice/test_validator.py index c17e2d68..6dccb400 100644 --- a/tests/lean_spec/subspecs/forkchoice/test_validator.py +++ b/tests/lean_spec/subspecs/forkchoice/test_validator.py @@ -438,7 +438,8 @@ def test_multiple_validators_coordination(self, sample_store: Store) -> None: assert 
att.data.source.root == first_att.data.source.root # Validator 2 produces next block for slot 2 - # Without votes for block1, this will build on genesis (current head) + # After processing block1, head should be block1 (fork choice walks the tree) + # So block2 will build on block1 block2, _signatures2 = sample_store.produce_block_with_signatures( Slot(2), ValidatorIndex(2), @@ -453,10 +454,18 @@ def test_multiple_validators_coordination(self, sample_store: Store) -> None: assert block1_hash in sample_store.blocks assert block2_hash in sample_store.blocks - # Both blocks should build on genesis (the current head) - genesis_hash = sample_store.head + # block1 builds on genesis, block2 builds on block1 (current head) + # Get the original genesis hash from the store's blocks + genesis_hash = min( + ( + root + for root in sample_store.blocks.keys() + if sample_store.blocks[root].slot == Slot(0) + ), + key=lambda root: root, + ) assert block1.parent_root == genesis_hash - assert block2.parent_root == genesis_hash + assert block2.parent_root == block1_hash def test_validator_edge_cases(self, sample_store: Store) -> None: """Test edge cases in validator operations.""" diff --git a/tests/lean_spec/types/test_byte_arrays.py b/tests/lean_spec/types/test_byte_arrays.py index 43826bde..d578b2dc 100644 --- a/tests/lean_spec/types/test_byte_arrays.py +++ b/tests/lean_spec/types/test_byte_arrays.py @@ -244,8 +244,8 @@ def test_pydantic_accepts_various_inputs_for_vectors() -> None: # serializer returns string representation in model_dump() dumped = m.model_dump() assert isinstance(dumped["root"], str) - assert dumped["root"] == "11" * 32 # hex string without 0x prefix - assert dumped["key"] == "00010203" # hex string representation + assert dumped["root"] == "0x" + "11" * 32 + assert dumped["key"] == "0x00010203" def test_pydantic_validates_vector_lengths() -> None: diff --git a/tox.ini b/tox.ini index 085d6f6b..770412b8 100644 --- a/tox.ini +++ b/tox.ini @@ -1,7 +1,7 @@ [tox] min_version = 4.0 requires = - tox >=4.23.0,<5 + tox >=4.32.0,<5 tox-uv >=1.29 env_list = all-checks @@ -14,6 +14,7 @@ runner = uv-venv-lock-runner basepython = python3.14 uv_python = >=3.12 dependency_groups = dev +package = editable [testenv:all-checks] description = Run all quality checks (lint, typecheck, spellcheck, mdformat) @@ -26,22 +27,22 @@ commands = [testenv:lint] description = Lint and code formatting checks (ruff) commands = - ruff check --no-fix --show-fixes src tests - ruff format --check src tests + ruff check --no-fix --show-fixes src tests packages + ruff format --check src tests packages [testenv:fix] description = Auto-fix linting and formatting issues (ruff) commands = - ruff check --fix src tests - ruff format src tests + ruff check --fix src tests packages + ruff format src tests packages [testenv:typecheck] description = Run type checking (mypy) -commands = mypy src tests +commands = mypy src tests packages [testenv:spellcheck] description = Run spell checking (codespell) -commands = codespell src tests docs README.md CLAUDE.md --skip="*.lock,*.svg,.git,__pycache__,.mypy_cache,.pytest_cache" --ignore-words=.codespell-ignore-words.txt +commands = codespell src tests packages docs README.md CLAUDE.md --skip="*.lock,*.svg,.git,__pycache__,.mypy_cache,.pytest_cache" --ignore-words=.codespell-ignore-words.txt [testenv:mdformat] description = Check markdown formatting for docs (mdformat) diff --git a/uv.lock b/uv.lock index 3e0ac560..17903f98 100644 --- a/uv.lock +++ b/uv.lock @@ -2,6 +2,12 @@ version = 1 revision 
= 3 requires-python = ">=3.12" +[manifest] +members = [ + "lean-ethereum-testing", + "lean-spec", +] + [[package]] name = "annotated-types" version = "0.7.0" @@ -414,15 +420,15 @@ wheels = [ [[package]] name = "hypothesis" -version = "6.142.3" +version = "6.142.4" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "attrs" }, { name = "sortedcontainers" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/e8/c9/03b5177dcd0224338c9ef63890bc52c0b0fbc86fba7c2c8a8523c0f02833/hypothesis-6.142.3.tar.gz", hash = "sha256:f1aaf83f6cc0c50f1b61e167974a8a67377dce13e0ea628b67a83f574ef30b85", size = 466042, upload-time = "2025-10-22T19:22:16.689Z" } +sdist = { url = "https://files.pythonhosted.org/packages/47/0b/76a062d1d6cd68342b460c2f5627e1ad1102a3dd781acd5c096c75aca0d6/hypothesis-6.142.4.tar.gz", hash = "sha256:b3e71a84708994aa910ea47f1483ad892a7c390839959d689b2a2b07ebfd160e", size = 466047, upload-time = "2025-10-25T16:19:03.838Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/28/42/7422624c9079865a094e3e13014ecf21f07f07b190df09e1feaaaa687891/hypothesis-6.142.3-py3-none-any.whl", hash = "sha256:2fc19a2824c9bdc3f8e39d87861fbdf1d766982b20d54646a642bce82bcac179", size = 533464, upload-time = "2025-10-22T19:22:13.051Z" }, + { url = "https://files.pythonhosted.org/packages/3e/9f/8010f93e175ecd996f54df9019ee8c58025fc21ed47658b0a58dd25ebe8b/hypothesis-6.142.4-py3-none-any.whl", hash = "sha256:25eecc73fadecd8b491aed822204cfe4be9c98ff5c1e8e038d181136ffc54b5b", size = 533467, upload-time = "2025-10-25T16:19:00.443Z" }, ] [[package]] @@ -571,6 +577,38 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/d3/32/da7f44bcb1105d3e88a0b74ebdca50c59121d2ddf71c9e34ba47df7f3a56/keyring-25.6.0-py3-none-any.whl", hash = "sha256:552a3f7af126ece7ed5c89753650eec89c7eaae8617d0aa4d9ad2b75111266bd", size = 39085, upload-time = "2024-12-25T15:26:44.377Z" }, ] +[[package]] +name = "lean-ethereum-testing" +version = "0.0.1" +source = { editable = "packages/testing" } +dependencies = [ + { name = "click" }, + { name = "lean-spec" }, + { name = "pydantic" }, + { name = "pytest" }, +] + +[package.optional-dependencies] +lint = [ + { name = "mypy" }, + { name = "ruff" }, +] +test = [ + { name = "pytest-cov" }, +] + +[package.metadata] +requires-dist = [ + { name = "click", specifier = ">=8.1.0,<9" }, + { name = "lean-spec", editable = "." 
}, + { name = "mypy", marker = "extra == 'lint'", specifier = ">=1.15.0,<1.16" }, + { name = "pydantic", specifier = ">=2.12.0,<3" }, + { name = "pytest", specifier = ">=8.3.3,<9" }, + { name = "pytest-cov", marker = "extra == 'test'", specifier = ">=6.0.0,<7" }, + { name = "ruff", marker = "extra == 'lint'", specifier = ">=0.11.8,<1" }, +] +provides-extras = ["test", "lint"] + [[package]] name = "lean-spec" version = "0.0.1" @@ -1343,7 +1381,7 @@ wheels = [ [[package]] name = "pyspelling" -version = "2.11" +version = "2.12" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "beautifulsoup4" }, @@ -1354,9 +1392,9 @@ dependencies = [ { name = "soupsieve" }, { name = "wcmatch" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/67/94/7975a04dd17c815b45f5f75fa07a49770b5f230320672dea155ea0a3ca14/pyspelling-2.11.tar.gz", hash = "sha256:94cc6efa979c26779601ad666f8d986adf52d247b313337ad67aac7163749d0e", size = 149444, upload-time = "2025-08-27T15:37:59.626Z" } +sdist = { url = "https://files.pythonhosted.org/packages/d2/8d/10c7685389449464172ff4383d9f1b6b96df8825ea6b513004a713aa034e/pyspelling-2.12.tar.gz", hash = "sha256:7b397911e46b7fa7c1056b2867c02e81547fc8d00bbcd84465655df23e49dbaa", size = 149587, upload-time = "2025-10-27T19:01:06.071Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/53/3d/8e0b77306de02d45ce8dec40fc999c8723e4c9735c5576db0f2026c63bab/pyspelling-2.11-py3-none-any.whl", hash = "sha256:2690a233131e7d6c3a3d47b15beb1452826b3b0702d5f241a2bcbec0102f3893", size = 45362, upload-time = "2025-08-27T15:37:58.054Z" }, + { url = "https://files.pythonhosted.org/packages/63/f9/40abc66ba7a74c54733d371904cb7807dcece67fd772f36140d9eff21dcd/pyspelling-2.12-py3-none-any.whl", hash = "sha256:4ef2a440ff582b85ab73032def3dc4592b2d29c6c884176a3b9ba5538968939b", size = 45432, upload-time = "2025-10-27T19:01:03.807Z" }, ] [[package]] @@ -1546,28 +1584,28 @@ wheels = [ [[package]] name = "ruff" -version = "0.14.1" -source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/9e/58/6ca66896635352812de66f71cdf9ff86b3a4f79071ca5730088c0cd0fc8d/ruff-0.14.1.tar.gz", hash = "sha256:1dd86253060c4772867c61791588627320abcb6ed1577a90ef432ee319729b69", size = 5513429, upload-time = "2025-10-16T18:05:41.766Z" } -wheels = [ - { url = "https://files.pythonhosted.org/packages/8d/39/9cc5ab181478d7a18adc1c1e051a84ee02bec94eb9bdfd35643d7c74ca31/ruff-0.14.1-py3-none-linux_armv6l.whl", hash = "sha256:083bfc1f30f4a391ae09c6f4f99d83074416b471775b59288956f5bc18e82f8b", size = 12445415, upload-time = "2025-10-16T18:04:48.227Z" }, - { url = "https://files.pythonhosted.org/packages/ef/2e/1226961855ccd697255988f5a2474890ac7c5863b080b15bd038df820818/ruff-0.14.1-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:f6fa757cd717f791009f7669fefb09121cc5f7d9bd0ef211371fad68c2b8b224", size = 12784267, upload-time = "2025-10-16T18:04:52.515Z" }, - { url = "https://files.pythonhosted.org/packages/c1/ea/fd9e95863124ed159cd0667ec98449ae461de94acda7101f1acb6066da00/ruff-0.14.1-py3-none-macosx_11_0_arm64.whl", hash = "sha256:d6191903d39ac156921398e9c86b7354d15e3c93772e7dbf26c9fcae59ceccd5", size = 11781872, upload-time = "2025-10-16T18:04:55.396Z" }, - { url = "https://files.pythonhosted.org/packages/1e/5a/e890f7338ff537dba4589a5e02c51baa63020acfb7c8cbbaea4831562c96/ruff-0.14.1-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ed04f0e04f7a4587244e5c9d7df50e6b5bf2705d75059f409a6421c593a35896", size = 12226558, upload-time = 
"2025-10-16T18:04:58.166Z" }, - { url = "https://files.pythonhosted.org/packages/a6/7a/8ab5c3377f5bf31e167b73651841217542bcc7aa1c19e83030835cc25204/ruff-0.14.1-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:5c9e6cf6cd4acae0febbce29497accd3632fe2025c0c583c8b87e8dbdeae5f61", size = 12187898, upload-time = "2025-10-16T18:05:01.455Z" }, - { url = "https://files.pythonhosted.org/packages/48/8d/ba7c33aa55406955fc124e62c8259791c3d42e3075a71710fdff9375134f/ruff-0.14.1-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a6fa2458527794ecdfbe45f654e42c61f2503a230545a91af839653a0a93dbc6", size = 12939168, upload-time = "2025-10-16T18:05:04.397Z" }, - { url = "https://files.pythonhosted.org/packages/b4/c2/70783f612b50f66d083380e68cbd1696739d88e9b4f6164230375532c637/ruff-0.14.1-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:39f1c392244e338b21d42ab29b8a6392a722c5090032eb49bb4d6defcdb34345", size = 14386942, upload-time = "2025-10-16T18:05:07.102Z" }, - { url = "https://files.pythonhosted.org/packages/48/44/cd7abb9c776b66d332119d67f96acf15830d120f5b884598a36d9d3f4d83/ruff-0.14.1-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:7382fa12a26cce1f95070ce450946bec357727aaa428983036362579eadcc5cf", size = 13990622, upload-time = "2025-10-16T18:05:09.882Z" }, - { url = "https://files.pythonhosted.org/packages/eb/56/4259b696db12ac152fe472764b4f78bbdd9b477afd9bc3a6d53c01300b37/ruff-0.14.1-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:dd0bf2be3ae8521e1093a487c4aa3b455882f139787770698530d28ed3fbb37c", size = 13431143, upload-time = "2025-10-16T18:05:13.46Z" }, - { url = "https://files.pythonhosted.org/packages/e0/35/266a80d0eb97bd224b3265b9437bd89dde0dcf4faf299db1212e81824e7e/ruff-0.14.1-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:cabcaa9ccf8089fb4fdb78d17cc0e28241520f50f4c2e88cb6261ed083d85151", size = 13132844, upload-time = "2025-10-16T18:05:16.1Z" }, - { url = "https://files.pythonhosted.org/packages/65/6e/d31ce218acc11a8d91ef208e002a31acf315061a85132f94f3df7a252b18/ruff-0.14.1-py3-none-manylinux_2_31_riscv64.whl", hash = "sha256:747d583400f6125ec11a4c14d1c8474bf75d8b419ad22a111a537ec1a952d192", size = 13401241, upload-time = "2025-10-16T18:05:19.395Z" }, - { url = "https://files.pythonhosted.org/packages/9f/b5/dbc4221bf0b03774b3b2f0d47f39e848d30664157c15b965a14d890637d2/ruff-0.14.1-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:5a6e74c0efd78515a1d13acbfe6c90f0f5bd822aa56b4a6d43a9ffb2ae6e56cd", size = 12132476, upload-time = "2025-10-16T18:05:22.163Z" }, - { url = "https://files.pythonhosted.org/packages/98/4b/ac99194e790ccd092d6a8b5f341f34b6e597d698e3077c032c502d75ea84/ruff-0.14.1-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:0ea6a864d2fb41a4b6d5b456ed164302a0d96f4daac630aeba829abfb059d020", size = 12139749, upload-time = "2025-10-16T18:05:25.162Z" }, - { url = "https://files.pythonhosted.org/packages/47/26/7df917462c3bb5004e6fdfcc505a49e90bcd8a34c54a051953118c00b53a/ruff-0.14.1-py3-none-musllinux_1_2_i686.whl", hash = "sha256:0826b8764f94229604fa255918d1cc45e583e38c21c203248b0bfc9a0e930be5", size = 12544758, upload-time = "2025-10-16T18:05:28.018Z" }, - { url = "https://files.pythonhosted.org/packages/64/d0/81e7f0648e9764ad9b51dd4be5e5dac3fcfff9602428ccbae288a39c2c22/ruff-0.14.1-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:cbc52160465913a1a3f424c81c62ac8096b6a491468e7d872cb9444a860bc33d", size = 13221811, upload-time = "2025-10-16T18:05:30.707Z" }, - 
{ url = "https://files.pythonhosted.org/packages/c3/07/3c45562c67933cc35f6d5df4ca77dabbcd88fddaca0d6b8371693d29fd56/ruff-0.14.1-py3-none-win32.whl", hash = "sha256:e037ea374aaaff4103240ae79168c0945ae3d5ae8db190603de3b4012bd1def6", size = 12319467, upload-time = "2025-10-16T18:05:33.261Z" }, - { url = "https://files.pythonhosted.org/packages/02/88/0ee4ca507d4aa05f67e292d2e5eb0b3e358fbcfe527554a2eda9ac422d6b/ruff-0.14.1-py3-none-win_amd64.whl", hash = "sha256:59d599cdff9c7f925a017f6f2c256c908b094e55967f93f2821b1439928746a1", size = 13401123, upload-time = "2025-10-16T18:05:35.984Z" }, - { url = "https://files.pythonhosted.org/packages/b8/81/4b6387be7014858d924b843530e1b2a8e531846807516e9bea2ee0936bf7/ruff-0.14.1-py3-none-win_arm64.whl", hash = "sha256:e3b443c4c9f16ae850906b8d0a707b2a4c16f8d2f0a7fe65c475c5886665ce44", size = 12436636, upload-time = "2025-10-16T18:05:38.995Z" }, +version = "0.14.2" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/ee/34/8218a19b2055b80601e8fd201ec723c74c7fe1ca06d525a43ed07b6d8e85/ruff-0.14.2.tar.gz", hash = "sha256:98da787668f239313d9c902ca7c523fe11b8ec3f39345553a51b25abc4629c96", size = 5539663, upload-time = "2025-10-23T19:37:00.956Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/16/dd/23eb2db5ad9acae7c845700493b72d3ae214dce0b226f27df89216110f2b/ruff-0.14.2-py3-none-linux_armv6l.whl", hash = "sha256:7cbe4e593505bdec5884c2d0a4d791a90301bc23e49a6b1eb642dd85ef9c64f1", size = 12533390, upload-time = "2025-10-23T19:36:18.044Z" }, + { url = "https://files.pythonhosted.org/packages/5a/8c/5f9acff43ddcf3f85130d0146d0477e28ccecc495f9f684f8f7119b74c0d/ruff-0.14.2-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:8d54b561729cee92f8d89c316ad7a3f9705533f5903b042399b6ae0ddfc62e11", size = 12887187, upload-time = "2025-10-23T19:36:22.664Z" }, + { url = "https://files.pythonhosted.org/packages/99/fa/047646491479074029665022e9f3dc6f0515797f40a4b6014ea8474c539d/ruff-0.14.2-py3-none-macosx_11_0_arm64.whl", hash = "sha256:5c8753dfa44ebb2cde10ce5b4d2ef55a41fb9d9b16732a2c5df64620dbda44a3", size = 11925177, upload-time = "2025-10-23T19:36:24.778Z" }, + { url = "https://files.pythonhosted.org/packages/15/8b/c44cf7fe6e59ab24a9d939493a11030b503bdc2a16622cede8b7b1df0114/ruff-0.14.2-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3d0bbeffb8d9f4fccf7b5198d566d0bad99a9cb622f1fc3467af96cb8773c9e3", size = 12358285, upload-time = "2025-10-23T19:36:26.979Z" }, + { url = "https://files.pythonhosted.org/packages/45/01/47701b26254267ef40369aea3acb62a7b23e921c27372d127e0f3af48092/ruff-0.14.2-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:7047f0c5a713a401e43a88d36843d9c83a19c584e63d664474675620aaa634a8", size = 12303832, upload-time = "2025-10-23T19:36:29.192Z" }, + { url = "https://files.pythonhosted.org/packages/2d/5c/ae7244ca4fbdf2bee9d6405dcd5bc6ae51ee1df66eb7a9884b77b8af856d/ruff-0.14.2-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3bf8d2f9aa1602599217d82e8e0af7fd33e5878c4d98f37906b7c93f46f9a839", size = 13036995, upload-time = "2025-10-23T19:36:31.861Z" }, + { url = "https://files.pythonhosted.org/packages/27/4c/0860a79ce6fd4c709ac01173f76f929d53f59748d0dcdd662519835dae43/ruff-0.14.2-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:1c505b389e19c57a317cf4b42db824e2fca96ffb3d86766c1c9f8b96d32048a7", size = 14512649, upload-time = "2025-10-23T19:36:33.915Z" }, + { url = 
"https://files.pythonhosted.org/packages/7f/7f/d365de998069720a3abfc250ddd876fc4b81a403a766c74ff9bde15b5378/ruff-0.14.2-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:a307fc45ebd887b3f26b36d9326bb70bf69b01561950cdcc6c0bdf7bb8e0f7cc", size = 14088182, upload-time = "2025-10-23T19:36:36.983Z" }, + { url = "https://files.pythonhosted.org/packages/6c/ea/d8e3e6b209162000a7be1faa41b0a0c16a133010311edc3329753cc6596a/ruff-0.14.2-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:61ae91a32c853172f832c2f40bd05fd69f491db7289fb85a9b941ebdd549781a", size = 13599516, upload-time = "2025-10-23T19:36:39.208Z" }, + { url = "https://files.pythonhosted.org/packages/fa/ea/c7810322086db68989fb20a8d5221dd3b79e49e396b01badca07b433ab45/ruff-0.14.2-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bc1967e40286f63ee23c615e8e7e98098dedc7301568bd88991f6e544d8ae096", size = 13272690, upload-time = "2025-10-23T19:36:41.453Z" }, + { url = "https://files.pythonhosted.org/packages/a9/39/10b05acf8c45786ef501d454e00937e1b97964f846bf28883d1f9619928a/ruff-0.14.2-py3-none-manylinux_2_31_riscv64.whl", hash = "sha256:2877f02119cdebf52a632d743a2e302dea422bfae152ebe2f193d3285a3a65df", size = 13496497, upload-time = "2025-10-23T19:36:43.61Z" }, + { url = "https://files.pythonhosted.org/packages/59/a1/1f25f8301e13751c30895092485fada29076e5e14264bdacc37202e85d24/ruff-0.14.2-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:e681c5bc777de5af898decdcb6ba3321d0d466f4cb43c3e7cc2c3b4e7b843a05", size = 12266116, upload-time = "2025-10-23T19:36:45.625Z" }, + { url = "https://files.pythonhosted.org/packages/5c/fa/0029bfc9ce16ae78164e6923ef392e5f173b793b26cc39aa1d8b366cf9dc/ruff-0.14.2-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:e21be42d72e224736f0c992cdb9959a2fa53c7e943b97ef5d081e13170e3ffc5", size = 12281345, upload-time = "2025-10-23T19:36:47.618Z" }, + { url = "https://files.pythonhosted.org/packages/a5/ab/ece7baa3c0f29b7683be868c024f0838770c16607bea6852e46b202f1ff6/ruff-0.14.2-py3-none-musllinux_1_2_i686.whl", hash = "sha256:b8264016f6f209fac16262882dbebf3f8be1629777cf0f37e7aff071b3e9b92e", size = 12629296, upload-time = "2025-10-23T19:36:49.789Z" }, + { url = "https://files.pythonhosted.org/packages/a4/7f/638f54b43f3d4e48c6a68062794e5b367ddac778051806b9e235dfb7aa81/ruff-0.14.2-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:5ca36b4cb4db3067a3b24444463ceea5565ea78b95fe9a07ca7cb7fd16948770", size = 13371610, upload-time = "2025-10-23T19:36:51.882Z" }, + { url = "https://files.pythonhosted.org/packages/8d/35/3654a973ebe5b32e1fd4a08ed2d46755af7267da7ac710d97420d7b8657d/ruff-0.14.2-py3-none-win32.whl", hash = "sha256:41775927d287685e08f48d8eb3f765625ab0b7042cc9377e20e64f4eb0056ee9", size = 12415318, upload-time = "2025-10-23T19:36:53.961Z" }, + { url = "https://files.pythonhosted.org/packages/71/30/3758bcf9e0b6a4193a6f51abf84254aba00887dfa8c20aba18aa366c5f57/ruff-0.14.2-py3-none-win_amd64.whl", hash = "sha256:0df3424aa5c3c08b34ed8ce099df1021e3adaca6e90229273496b839e5a7e1af", size = 13565279, upload-time = "2025-10-23T19:36:56.578Z" }, + { url = "https://files.pythonhosted.org/packages/2e/5d/aa883766f8ef9ffbe6aa24f7192fb71632f31a30e77eb39aa2b0dc4290ac/ruff-0.14.2-py3-none-win_arm64.whl", hash = "sha256:ea9d635e83ba21569fbacda7e78afbfeb94911c9434aff06192d9bc23fd5495a", size = 12554956, upload-time = "2025-10-23T19:36:58.714Z" }, ] [[package]]