feat: add local hive test runner script #201
Conversation
Walkthrough
A new local runner script is added that executes the Hive test scenarios from the GitHub Actions workflow locally. It loads scenario definitions, validates prerequisites, optionally builds the Hive binary and Docker image, runs scenarios sequentially, validates results against expected outcomes, and reports status.
Sequence Diagram(s)

```mermaid
sequenceDiagram
actor User
participant CLI as run-hive-local.py
participant FS as File System
participant Build as Build System
participant Hive as Hive Binary
participant Parser as Parse Tool
participant Output as Output/Report
User->>CLI: Run script (scenario, options)
CLI->>FS: Load hive.yml scenarios
FS-->>CLI: Scenario definitions
alt scenario specified
CLI->>CLI: Find matching scenario
else --all flag
CLI->>CLI: Use all scenarios
end
CLI->>FS: Check prerequisites (hive.go exists)
alt build required & not --skip-build
CLI->>Build: Build Hive binary
Build-->>CLI: Build complete
alt Docker image needed
CLI->>Build: Build Docker image
Build-->>CLI: Image ready
end
end
loop for each scenario
CLI->>CLI: Compute scenario filters
CLI->>Hive: Execute scenario with filters
Hive->>FS: Generate results JSON
FS-->>CLI: Results file located
CLI->>Parser: Validate results (check failures/ignores)
Parser-->>CLI: Validation status
alt validation passed
CLI->>Output: Print ✓ status
else validation failed
CLI->>Output: Print ✗ status
end
end
CLI->>Output: Report summary (pass/fail)
```
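The flow in the walkthrough and diagram above translates roughly into a loop like the sketch below. It is for orientation only: the helper bodies are stubs and the scenario shape and values are illustrative, not taken from the actual script.

```python
# Minimal sketch of the runner's overall flow; helper bodies are stubs and the
# scenario contents are illustrative, not copied from the real script.
import sys

def load_scenarios() -> list[dict]:
    # The real script parses .github/workflows/hive.yml; stubbed here.
    return [{"sim": "ethereum/engine", "limit": ""}]

def run_and_validate(scenario: dict) -> bool:
    # The real script invokes ./hive in the hive checkout, locates the results
    # JSON, and compares it against expected_failures.yaml / ignored_tests.yaml.
    return True

def main() -> int:
    failed = [s for s in load_scenarios() if not run_and_validate(s)]
    for s in failed:
        print(f"✗ {s['sim']}")
    print("All scenarios passed" if not failed else f"{len(failed)} scenario(s) failed")
    return 1 if failed else 0

if __name__ == "__main__":
    sys.exit(main())
```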
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Pre-merge checks and finishing touches
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Branch force-pushed from 2a27638 to f8d404d
Actionable comments posted: 0
🧹 Nitpick comments (5)
scripts/run-hive-local.py (5)
18-18: Document the PyYAML dependency.
The script requires PyYAML but this dependency isn't documented. Consider adding installation instructions in the docstring or a comment, e.g. pip install PyYAML.
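For example, the dependency could be called out at the top of the script; the wording below is only one possible way to phrase it:

```python
"""Run the Hive CI scenarios from .github/workflows/hive.yml locally.

Requires PyYAML (not in the standard library):

    pip install PyYAML
"""
import yaml  # provided by the PyYAML package
```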
22-22: Consider making HIVE_DIR configurable via an environment variable.
The script assumes the hive repository is at ../hive. While line 132 checks for its existence, the error message at line 134 mentions "Set HIVE_DIR" but the script doesn't actually support this. Consider allowing users to override it via an environment variable.

🔎 Apply this diff to support a HIVE_DIR environment variable:

```diff
-HIVE_DIR = BERA_RETH_DIR.parent / "hive"
+HIVE_DIR = Path(os.environ.get("HIVE_DIR", BERA_RETH_DIR.parent / "hive"))
```
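With an override like that in place, a developer could, for example, point the script at a non-default checkout by exporting HIVE_DIR before running it (e.g. HIVE_DIR=/path/to/hive, a hypothetical path), which would also make the existing "Set HIVE_DIR" error message accurate.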
25-28: Add error handling for file and YAML parsing failures.
The function will raise unhandled exceptions if the workflow file is missing or has an unexpected structure. This would produce cryptic errors for users.

🔎 Apply this diff to add error handling:

```diff
 def load_scenarios():
-    with open(HIVE_WORKFLOW) as f:
-        workflow = yaml.safe_load(f)
-    return workflow["jobs"]["test"]["strategy"]["matrix"]["scenario"]
+    try:
+        with open(HIVE_WORKFLOW) as f:
+            workflow = yaml.safe_load(f)
+        return workflow["jobs"]["test"]["strategy"]["matrix"]["scenario"]
+    except FileNotFoundError:
+        print(f"Error: Workflow file not found at {HIVE_WORKFLOW}")
+        sys.exit(1)
+    except (KeyError, TypeError) as e:
+        print(f"Error: Unexpected workflow structure in {HIVE_WORKFLOW}: {e}")
+        sys.exit(1)
```
72-76: Avoid a global side effect with os.chdir.
Using os.chdir changes the working directory for the entire process. Consider passing cwd to subprocess.run instead to keep the side effect local.

🔎 Apply this diff to use the cwd parameter:

```diff
     print()
     print(f"==> Running: {sim}" + (f" (filter: {filter_str})" if filter_str else ""))
-    os.chdir(HIVE_DIR)
     args = ["./hive", "--sim", sim, "--client", "bera-reth", "--sim.parallelism", "8"]
     if filter_str:
         args.extend(["--sim.limit", filter_str])
     start_time = time.time()

     # The hive process returns non-zero exit code when tests fail, even on expected
     # failures so we need to parse the JSON to check if failures are expected
-    result = subprocess.run(args)
+    result = subprocess.run(args, cwd=HIVE_DIR)
```
137-140: Fix typo in comment.
Line 138 has a typo: "exst" should be "exist".

🔎 Apply this diff to fix the typo:

```diff
-    # Build hive if the binary does not exst
+    # Build hive if the binary does not exist
     if not (HIVE_DIR / "hive").exists():
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
scripts/run-hive-local.py (1 hunks)
🧰 Additional context used
🪛 Ruff (0.14.8)
scripts/run-hive-local.py
81-81: subprocess call: check for execution of untrusted input
(S603)
101-101: subprocess call: check for execution of untrusted input
(S603)
102-110: Starting a process with a partial executable path
(S607)
140-140: Starting a process with a partial executable path
(S607)
145-145: subprocess call: check for execution of untrusted input
(S603)
146-158: Starting a process with a partial executable path
(S607)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Security audit
- GitHub Check: Clippy lint
- GitHub Check: Run tests
- GitHub Check: Check documentation
🔇 Additional comments (6)
scripts/run-hive-local.py (6)
31-52: LGTM!
The filter construction logic correctly handles all combinations of limit and include parameters. The list function provides clear output for users.
55-65: LGTM!
The scenario matching logic correctly handles both exact sim matches and optional limit filtering.
84-97: JSON file selection approach is acceptable for sequential execution.
The modification-time-based approach for finding result files works well for the intended sequential use case. The comment at line 96 appropriately warns users about the limitation.
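For readers unfamiliar with the pattern, modification-time-based selection looks roughly like the sketch below; the directory handling and glob pattern are assumptions, not the script's actual values:

```python
from pathlib import Path
from typing import Optional

def latest_results_json(results_dir: Path) -> Optional[Path]:
    # Pick the most recently modified results file. This is only reliable when
    # scenarios run one at a time, as the comment in the script warns.
    candidates = list(results_dir.glob("*.json"))
    if not candidates:
        return None
    return max(candidates, key=lambda p: p.stat().st_mtime)
```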
143-160: LGTM!
The Docker build command is correctly structured with appropriate error handling via check=True.
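As a point of reference, a check=True Docker build call generally looks like the sketch below; the image tag and build context are placeholders rather than the script's actual values:

```python
import subprocess
from pathlib import Path

def build_docker_image(context_dir: Path, tag: str = "bera-reth:local") -> None:
    # check=True makes subprocess.run raise CalledProcessError on a non-zero
    # exit status, so a failed image build stops the run immediately.
    subprocess.run(["docker", "build", "-t", tag, "."], cwd=context_dir, check=True)
```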
162-188: LGTM!
The scenario execution logic correctly handles both single and multiple scenario modes, tracks failures appropriately, and returns proper exit codes.
81-81: Static analysis security warnings are false positives.
The static analysis tools flag subprocess calls as potential security issues (S603, S607). However, these are false positives in this context:
- All paths are constructed from controlled Path objects or repository-relative paths
- Commands (go, docker, python3) are standard system tools
- No user input is passed unsanitized to subprocess calls
- This is a developer tool running in a trusted environment
These warnings can be safely ignored for this script.
Also applies to: 101-111, 140-140, 145-160
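If the team would rather record that decision in the source instead of relying on review context, Ruff accepts per-line noqa suppressions for these codes; a small illustrative example, not part of this PR:

```python
import subprocess

# Fixed, repository-controlled command, so Ruff's S603/S607 findings do not
# apply here; the noqa comment documents that judgement inline.
result = subprocess.run(["docker", "--version"])  # noqa: S603, S607
print(result.returncode)
```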
This PR adds a Python script to run Hive tests locally using the same scenario configuration that we run in our CI. When run, the script parses scenarios directly from .github/workflows/hive.yml, takes care of building a new bera-reth Docker container (unless --skip-build is set), runs the hive command against the selected scenarios, parses the generated JSON results, and compares them against expected_failures.yaml and ignored_tests.yaml.

Test plan