Description
Describe the bug:
SIEM Auto Migration (rule and dashboard migration) fails with a 400 Bad Request error when using Claude 4.6 Opus or Claude 4.6 Sonnet models. The migration prompts use assistant message prefill (['ai', '...'] as the last message in LangChain ChatPromptTemplate.fromMessages()), which Claude 4.6 models no longer support. Other Security AI features (AI Assistant, Attack Discovery, etc.) work correctly with these same connectors.
There are 6 prefill instances across 5 prompt files, all under x-pack/solutions/security/plugins/security_solution/server/lib/siem_migrations/:
- `rules/task/agent/nodes/match_prebuilt_rule/prompts.ts` (lines 68, 108)
- `rules/task/agent/nodes/create_semantic_query/prompts.ts` (line 47)
- `rules/task/agent/sub_graphs/translate_rule/nodes/retrieve_integrations/prompts.ts` (line 66)
- `common/task/agent/helpers/inline_spl_query/prompts.ts` (line 161)
- `dashboards/task/agent/nodes/create_descriptions/prompts.ts` (line 78)
These prefill lines have been present since Dec 2024–Sep 2025 and worked on older Claude models. The fix is to remove the ['ai', '...'] prefill lines — the same formatting guidance is already present in <example_response> sections within the human messages.
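For illustration only (this is a simplified sketch, not the actual Kibana prompt code, and the message contents are invented): the failing pattern ends the message list with an `['ai', …]` entry, and the proposed fix is to drop that trailing entry so the conversation ends with a human message.

```typescript
// Simplified model of a LangChain-style message tuple list.
type PromptMessage = [role: "system" | "human" | "ai", content: string];

// Hypothetical example of the failing shape: the last entry is an
// assistant-message prefill, which Claude 4.6 models reject with a 400.
const messages: PromptMessage[] = [
  ["system", "You are a SIEM rule migration agent."],
  ["human", "Match this rule.\n<example_response>{...}</example_response>"],
  ["ai", "Here is the JSON response:"], // <-- prefill line to remove
];

// The fix amounts to removing the trailing 'ai' entry; sketched here as a
// helper that strips it if present (the real fix just deletes the lines).
function withoutAssistantPrefill(msgs: PromptMessage[]): PromptMessage[] {
  const last = msgs[msgs.length - 1];
  return last && last[0] === "ai" ? msgs.slice(0, -1) : msgs;
}

const fixed = withoutAssistantPrefill(messages);
console.log(fixed[fixed.length - 1][0]); // "human"
```

Because the `<example_response>` guidance already lives in the human message, removing the prefill loses no formatting instruction.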
Code owners: @elastic/security-threat-hunting
Kibana/Elasticsearch Stack version:
All versions on main (9.4.0) and likely all active release branches (prefill pattern present since Dec 2024).
Server OS version:
N/A — not OS-specific.
Browser and Browser OS versions:
N/A — server-side issue.
Elastic Endpoint version:
N/A
Original install method (e.g. download page, yum, from source, etc.):
N/A
Functional Area (e.g. Endpoint management, timelines, resolver, etc.):
SIEM Automatic Migrations (rule migration, dashboard migration)
Steps to reproduce:
- Configure a connector using Claude 4.6 Opus (`.anthropic-claude-4.6-opus-chat_completion`) or Claude 4.6 Sonnet.
- Navigate to SIEM Rule Migration and start an auto migration.
- Observe 400 errors for every rule/dashboard being migrated.
Current behavior:
Migration fails with: "Received a bad request status code for request from inference entity id [.anthropic-claude-4.6-opus-chat_completion] status [400]. Error message: [Provider returned error]" and "API Error: Bad Request - The model returned the following errors: This model does not support assistant message prefill. The conversation must end with a user message."
Expected behavior:
Migration should complete successfully regardless of Claude model version.
Screenshots (if relevant):
(attach the three error screenshots from the migration UI)
Errors in browser console (if relevant):
N/A — errors originate server-side from the Anthropic API response.
Provide logs and/or server output (if relevant):
Error calling connector: Status code: 400. Message: API Error: Bad Request - The model returned the following errors: This model does not support assistant message prefill. The conversation must end with a user message.
Any additional context (logs, chat logs, magical formulas, etc.):
The root cause is the use of ['ai', '<prefill text>'] as the final message in ChatPromptTemplate.fromMessages() across the 5 prompt files listed above. This sends a partial assistant response to "prime" the model output format. Claude 4.6 models reject this pattern. The recommended fix is to simply remove these 6 ['ai', '...'] lines — the prefill text is redundant with the <example_response> sections already in the human messages, and removal is backward-compatible with older models and other LLM providers.