feat(ai): Improve ranker prompt robustness #1567
Conversation
💡 Codex Review
Here are some automated review suggestions for this pull request.
```python
        criteria_prompt=criteria_prompt,
        model_name=model_name,
        model_provider=model_provider,
        min_items=1,
        max_items=1,
    )
```
Rank documents now returns only one result
The new `min_items=1` and `max_items=1` arguments passed to `rank_items_pairwise` and `rank_items` force the LLM to return exactly one ID. `rank_documents` previously produced a full ranking, and its docstring still advertises a list "from most to least relevant". With these bounds the function always returns a single document regardless of input size, dropping the rest of the ordering and breaking existing callers that expect a complete ranking.
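A minimal sketch of the behavior the review flags. `clamp_ranking` is a hypothetical stand-in for the bounds logic inside `rank_items` / `rank_items_pairwise` (whose real implementation is not shown in this diff); it only illustrates how `max_items=1` discards everything but the top result, while `max_items=len(items)` preserves the full ordering.

```python
def clamp_ranking(ranked_ids: list[str], min_items: int, max_items: int) -> list[str]:
    """Hypothetical stand-in: enforce [min_items, max_items] bounds on a ranking."""
    if len(ranked_ids) < min_items:
        raise ValueError(f"expected at least {min_items} ranked items")
    # Truncation past max_items is the bug described in the review.
    return ranked_ids[:max_items]


full_ranking = ["doc-3", "doc-1", "doc-2"]

# With max_items=1 (the reported bug): only the top document survives.
assert clamp_ranking(full_ranking, min_items=1, max_items=1) == ["doc-3"]

# With max_items=len(items) (the suggested fix): the whole ordering is kept.
assert clamp_ranking(full_ranking, min_items=1, max_items=len(full_ranking)) == [
    "doc-3",
    "doc-1",
    "doc-2",
]
```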
1 issue found across 3 files
Prompt for AI agents (1 issue)
Understand the root cause of the following issue and fix it.
<file name="packages/tracecat-registry/tracecat_registry/core/ai.py">
<violation number="1" location="packages/tracecat-registry/tracecat_registry/core/ai.py:107">
Passing max_items=1 here truncates rank_documents to a single document, so the function no longer returns the full ranking it promises.</violation>
</file>
```python
        num_passes=num_passes,
        refinement_ratio=refinement_ratio,
        min_items=1,
        max_items=1,
```
Passing max_items=1 here truncates rank_documents to a single document, so the function no longer returns the full ranking it promises.
Prompt for AI agents
Address the following comment on packages/tracecat-registry/tracecat_registry/core/ai.py at line 107:
<comment>Passing max_items=1 here truncates rank_documents to a single document, so the function no longer returns the full ranking it promises.</comment>
<file context>
@@ -101,13 +103,17 @@ async def rank_documents(
num_passes=num_passes,
refinement_ratio=refinement_ratio,
+ min_items=1,
+ max_items=1,
)
elif algorithm == "single-pass":
</file context>
Suggested change:

```diff
-    max_items=1,
+    max_items=len(dict_items),
```
Summary by cubic
Strengthened the ranker by rewriting prompts and validating outputs, and added min/max item limits to support exact or top‑N results. This makes rankings more consistent and lets callers control how many items come back.
New Features
Migration