feat: use Pydantic models for MCP submit_evaluation tool parameters #102

Merged
tarilabs merged 3 commits into eval-hub:main from tarilabs:tarilabs-20260330-mpctypes on Mar 30, 2026

Conversation

@tarilabs (Member) commented Mar 30, 2026

What and why

Replace the raw dict parameter types with typed Pydantic models so that FastMCP generates a congruent JSON Schema for AI agents.
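
For concreteness, a rough sketch of what the tool signature looks like after this change. The model definitions below are illustrative only (field names are inferred from the review discussion further down: model url/name with optional auth.secret_ref, benchmark id/provider_id, and so on), not the exact classes used in src/evalhub/mcp/server.py:

```python
from pydantic import BaseModel
from mcp.server.fastmcp import FastMCP  # assumption: the official MCP Python SDK

# Illustrative configuration models; the real definitions live in the codebase.
class ModelAuth(BaseModel):
    secret_ref: str

class ModelConfig(BaseModel):
    url: str
    name: str
    auth: ModelAuth | None = None

class BenchmarkConfig(BaseModel):
    id: str
    provider_id: str

class CollectionRef(BaseModel):
    id: str

class ExperimentConfig(BaseModel):
    name: str

mcp = FastMCP("eval-hub")

@mcp.tool()
async def submit_evaluation(
    model: ModelConfig,
    benchmarks: list[BenchmarkConfig] | None = None,
    collection: CollectionRef | None = None,
    experiment: ExperimentConfig | None = None,
) -> dict:
    """Submit an evaluation job (sketch only)."""
    ...
```

With dict[str, Any] parameters the generated inputSchema only described a generic object, so agents had to guess the expected shape; with Pydantic models the schema spells out every field and its type.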

Type

  • feat
  • fix
  • docs
  • refactor / chore
  • test / ci

Testing

  • Tests added or updated
  • Tested manually

Breaking changes

None (this capability is not yet released and is still being iterated on).

Summary by CodeRabbit

Release Notes

  • Refactor
    • Evaluation submission API now requires strongly-typed configuration objects instead of raw dictionaries for model, benchmarks, collection, and experiment parameters. The API maintains validation ensuring either benchmarks or collection (but not both) are provided. This change improves type safety and prevents configuration-related errors.

Replace parameters types with typed Pydantic models
this way FastMCP generates congruent JSON Schema
for AI agents

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: tarilabs <matteo.mortari@gmail.com>
@coderabbitai bot (Contributor) commented Mar 30, 2026

Warning

Rate limit exceeded

@tarilabs has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 6 minutes and 45 seconds before requesting another review.

Your organization is not enrolled in usage-based pricing. Contact your admin to enable usage-based pricing to continue reviews beyond the rate limit, or try again in 6 minutes and 45 seconds.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 268a8ff4-c084-46cb-8954-130c33179bb2

📥 Commits

Reviewing files that changed from the base of the PR and between 0d9b2dc and 3d7223b.

📒 Files selected for processing (2)
  • src/evalhub/mcp/server.py
  • tests/unit/test_mcp_server.py
📝 Walkthrough

The submit_evaluation function in src/evalhub/mcp/server.py now accepts strongly-typed configuration objects (ModelConfig, BenchmarkConfig, CollectionRef, ExperimentConfig) instead of raw dictionaries. Unit tests were updated to construct and pass typed objects matching the new signature. Control flow logic and validation remain unchanged.

Changes

  • Function Signature Refinement — src/evalhub/mcp/server.py: Updated submit_evaluation parameter types from dict[str, Any] to strongly-typed configuration objects: ModelConfig, list[BenchmarkConfig] | None, CollectionRef | None, and ExperimentConfig | None. Docstrings and tool descriptions updated accordingly.
  • Test Updates — tests/unit/test_mcp_server.py: Updated test calls to construct typed objects (ModelConfig, BenchmarkConfig, CollectionRef, ExperimentConfig, ModelAuth) instead of raw dictionaries. Added imports for the new types and minor formatting adjustments.
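
To illustrate the test updates, a hypothetical before/after of one call (the benchmark values "mmlu" and "lm-eval" are made up; see tests/unit/test_mcp_server.py for the real cases):

```python
# Before: raw dictionaries
await submit_evaluation(
    model={"url": "http://localhost:8000/v1", "name": "my-model"},
    benchmarks=[{"id": "mmlu", "provider_id": "lm-eval"}],
)

# After: typed Pydantic objects with the same shape
await submit_evaluation(
    model=ModelConfig(url="http://localhost:8000/v1", name="my-model"),
    benchmarks=[BenchmarkConfig(id="mmlu", provider_id="lm-eval")],
)
```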

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Poem

🐰✨ Type-safe configs hop into place,
No more dicts to muddy the space!
Strong types guide each parameter's way,
Cleaner calls make the coder's day.

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage — ⚠️ Warning: Docstring coverage is 14.29%, which is below the required threshold of 80.00%. Resolution: write docstrings for the functions that are missing them to satisfy the coverage threshold.

✅ Passed checks (2 passed)

  • Title check — ✅ Passed: The title clearly and concisely describes the main change (replacing dict parameters with Pydantic models for the MCP submit_evaluation tool), which aligns with the core changes in the pull request.
  • Description check — ✅ Passed: The description covers the essential sections, with 'What and why', change type selection, and testing status all provided, though the 'Closes #' issue link is missing.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
tests/unit/test_mcp_server.py (1)

280-365: ⚠️ Potential issue | 🟠 Major

Add schema/wire-path assertions for MCP tool invocation.

Current tests invoke submit_evaluation directly as a Python function, bypassing FastMCP's auto-generated inputSchema and JSON argument deserialization. Add at least one test that verifies the tool's schema and/or invokes it through MCP's tool-call mechanism (e.g., via mcp.call_tool() or similar) with JSON-like arguments to validate that typed fields (model, benchmarks, collection, experiment) deserialize correctly.
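
A rough sketch of such a test, assuming the server object mcp and the existing mock_client fixture are importable; the tool-invocation API and the shape of the captured request are assumptions to adapt to the real harness:

```python
import pytest

@pytest.mark.asyncio
async def test_submit_evaluation_via_tool_call(mock_client):
    # Invoke the tool through MCP with JSON-like arguments so that the
    # auto-generated inputSchema and argument deserialization are exercised.
    await mcp.call_tool(
        "submit_evaluation",
        {
            "model": {
                "url": "http://localhost:8000/v1",
                "name": "my-model",
                "auth": {"secret_ref": "my-secret"},
            },
            "benchmarks": [{"id": "mmlu", "provider_id": "lm-eval"}],
        },
    )
    # The request forwarded to the backend should carry typed fields.
    request = mock_client.jobs.submit.call_args.args[0]
    assert request.benchmarks[0].id == "mmlu"
```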

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/unit/test_mcp_server.py` around lines 280 - 365, Add a test that
exercises the MCP tool path (not only the Python function) by calling the
auto-generated tool schema for submit_evaluation so JSON-like args are
deserialized; use the tool invocation API (e.g., mcp.call_tool or the test
harness's equivalent) to invoke the submit_evaluation tool with a JSON dict
containing fields like model (as object with url/name and optional
auth.secret_ref), benchmarks (list of {id,provider_id}), collection (object with
id), and experiment (object with name) and assert the resulting request sent to
mock_client.jobs.submit has the expected typed fields (reference symbols:
submit_evaluation, inputSchema, ModelConfig, BenchmarkConfig, CollectionRef,
ExperimentConfig, mock_client.jobs.submit); include at least one test case
verifying that benchmarks deserialize into request.benchmarks and one verifying
collection deserializes into request.collection, and keep existing ValueError
behavior tests unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/evalhub/mcp/server.py`:
- Around line 253-255: The exclusivity check uses truthiness so benchmarks=[] is
treated as absent; change the check to detect presence vs None (use "benchmarks
is not None" instead of "bool(benchmarks)") wherever the request
validation/exclusivity is enforced (referencing the benchmarks and collection
variables in the request handling function in src/evalhub/mcp/server.py) and
ensure the validation logic rejects requests that provide both benchmarks
(including an empty list) and collection; also update any accompanying error
message to reflect that an explicitly provided empty benchmarks list still
counts as supplying benchmarks.

---

Outside diff comments:
In `@tests/unit/test_mcp_server.py`:
- Around line 280-365: Add a test that exercises the MCP tool path (not only the
Python function) by calling the auto-generated tool schema for submit_evaluation
so JSON-like args are deserialized; use the tool invocation API (e.g.,
mcp.call_tool or the test harness's equivalent) to invoke the submit_evaluation
tool with a JSON dict containing fields like model (as object with url/name and
optional auth.secret_ref), benchmarks (list of {id,provider_id}), collection
(object with id), and experiment (object with name) and assert the resulting
request sent to mock_client.jobs.submit has the expected typed fields (reference
symbols: submit_evaluation, inputSchema, ModelConfig, BenchmarkConfig,
CollectionRef, ExperimentConfig, mock_client.jobs.submit); include at least one
test case verifying that benchmarks deserialize into request.benchmarks and one
verifying collection deserializes into request.collection, and keep existing
ValueError behavior tests unchanged.
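
Regarding the inline comment on src/evalhub/mcp/server.py, a minimal sketch of the truthiness-vs-presence fix, assuming the exclusivity validation is a simple either/or guard (the real check may be structured differently):

```python
# Before: truthiness — an explicitly provided empty list counts as "absent",
# so benchmarks=[] together with collection=... slips past the guard.
if bool(benchmarks) == bool(collection):
    raise ValueError("Provide exactly one of 'benchmarks' or 'collection'.")

# After: presence is decided by None-ness, so benchmarks=[] still counts
# as supplying benchmarks and is rejected when collection is also given.
if (benchmarks is not None) == (collection is not None):
    raise ValueError("Provide exactly one of 'benchmarks' or 'collection'.")
```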

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 6be1d37e-6161-43f6-936d-0e906026375b

📥 Commits

Reviewing files that changed from the base of the PR and between 6539586 and 0d9b2dc.

📒 Files selected for processing (2)
  • src/evalhub/mcp/server.py
  • tests/unit/test_mcp_server.py

tarilabs and others added 2 commits March 30, 2026 07:44
Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: tarilabs <matteo.mortari@gmail.com>
Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: tarilabs <matteo.mortari@gmail.com>
@tarilabs merged commit 0c0fb3d into eval-hub:main on Mar 30, 2026
9 checks passed
