
Commit cd453f9

SkylarKelty and claude committed
fix: prevent LLM tool calls in result selection and handle content block lists
Add tool_choice: "none" when tool definitions are included in chat completion requests — tools are only declared so the API accepts the tool-call message history, not for the model to invoke. Also handle APIs that return content as a list of blocks rather than a plain string.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
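The first change can be sketched in isolation. Note this is a hypothetical illustration of the OpenAI-style request-body pattern, not the actual `artemis/llm.py` code: the helper name `build_body`, the placeholder model name, and the empty tool parameter schema are all assumptions.

```python
# Hypothetical sketch (not the real artemis code): tools are declared so the
# API will accept prior tool-call messages in the history, while
# tool_choice "none" forbids the model from emitting new tool calls.
def build_body(messages, tool_names):
    body = {
        "model": "example-model",  # placeholder model name
        "messages": messages,
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": name,
                    # Empty schema; real tool definitions would describe arguments.
                    "parameters": {"type": "object", "properties": {}},
                },
            }
            for name in sorted(tool_names)
        ],
    }
    # Declared only for history validation; never let the model invoke tools.
    body["tool_choice"] = "none"
    return body
```

Without `tool_choice: "none"`, some backends may respond to a tool-bearing request with a fresh tool call instead of text, which is exactly what the commit is preventing.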
1 parent 55f41cb commit cd453f9

File tree

1 file changed: +9 −0 lines


artemis/llm.py

Lines changed: 9 additions & 0 deletions
@@ -322,6 +322,9 @@ async def chat_completion(
         }
         for name in sorted(tool_names)
     ]
+    # Tool defs are only declared so the API accepts the tool-call
+    # history — we never want the model to call tools in its response.
+    body["tool_choice"] = "none"
     if response_format is not None:
         body["response_format"] = response_format
 
@@ -362,6 +365,12 @@ async def chat_completion(
     )
 
     content = message.get("content")
+    # Some APIs return content as a list of blocks, e.g. [{"type": "text", "text": "..."}]
+    if isinstance(content, list):
+        content = "".join(
+            block.get("text", "") for block in content
+            if isinstance(block, dict) and block.get("type") == "text"
+        )
     if not isinstance(content, str) or not content.strip():
         raise UpstreamServiceError("The LLM backend returned empty content.")
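The second hunk's normalization can be exercised on its own. This sketch extracts just that logic into a standalone helper; the name `flatten_content` is an assumption for illustration and does not appear in the diff.

```python
def flatten_content(content):
    # Some APIs return message content as a list of typed blocks rather
    # than a plain string, e.g. [{"type": "text", "text": "Hello"}].
    # Concatenate the text blocks and ignore non-text blocks (images, etc.).
    if isinstance(content, list):
        content = "".join(
            block.get("text", "")
            for block in content
            if isinstance(block, dict) and block.get("type") == "text"
        )
    return content
```

String content passes through unchanged, so the existing `isinstance(content, str)` emptiness check downstream still works for both response shapes.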

0 commit comments
