
Commit b61e3a2

Authored by kibanamachine and spong
[9.2] [Security Assistant] Prioritize connector defaultModel over stored conversation model (#237947) (#239239)
# Backport

This will backport the following commits from `main` to `9.2`:
- [[Security Assistant] Prioritize connector defaultModel over stored conversation model (#237947)](#237947)

### Questions?
Please refer to the [Backport tool documentation](https://github.com/sorenlouv/backport)

### Summary
When a connector's `defaultModel` is updated, existing conversations now use the updated model instead of the stale model stored in the conversation's `apiConfig`.

Changes:
- Modified `assistant/api/index.tsx` to no longer send `model` from the conversation
- Modified `default_assistant_graph/index.ts` to send the connector's default model if `undefined`. This applies to ActionsClient only, as InferenceChat falls back to the connector default appropriately.
- Modified `getConversationApiConfig` to prioritize connectorModel over the stored model
- Added tests to verify the connector model takes precedence
- Added a fallback test for when the connector has no model

This ensures conversations always use the latest connector configuration, fixing the issue where OpenAI connectors with the 'Other' provider would not pick up modelId changes for existing conversations.

### Testing Instructions for PR
1. Create an OpenAI connector with modelId "model-v1"
2. Create a conversation and send a message
3. Update the connector modelId to "model-v2"
4. Return to the conversation and send another message
5. Verify the request is fulfilled with the new model

Created with Cursor and Claude 4.5 Sonnet

### Release Notes
Prioritize connector defaultModel over stored conversation model

### Checklist
- [X] [Unit or functional tests](https://www.elastic.co/guide/en/kibana/master/development-tests.html) were updated or added to match the most common scenarios
- [X] The PR description includes the appropriate Release Notes section, and the correct `release_note:*` label is applied per the [guidelines](https://www.elastic.co/guide/en/kibana/master/contributing.html#kibana-release-notes-process)
- [X] Review the [backport guidelines](https://docs.google.com/document/d/1VyN5k91e5OVumlc0Gb9RPa3h1ewuPE705nRtioPiTvY/edit?usp=sharing) and apply applicable `backport:*` labels.

Co-authored-by: Garrett Spong <[email protected]>
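The core of the fix is a one-line precedence flip in `getConversationApiConfig`. A minimal TypeScript sketch of the new resolution order follows; the `resolveModel` helper and its types are illustrative stand-ins, not the actual Kibana code:

```typescript
// Illustrative sketch only; names here are hypothetical, not the Kibana source.
interface StoredApiConfig {
  model?: string;
}

// Before the fix: conversation?.apiConfig?.model ?? connectorModel
// After the fix:  connectorModel ?? conversation?.apiConfig?.model
// The connector's defaultModel now wins whenever it is set.
function resolveModel(
  connectorModel: string | undefined,
  storedConfig: StoredApiConfig | undefined
): string | undefined {
  return connectorModel ?? storedConfig?.model;
}

// Updated connector model takes precedence over the stale stored model:
console.log(resolveModel('model-v2', { model: 'model-v1' })); // 'model-v2'
// Stored model is still used when the connector has no defaultModel:
console.log(resolveModel(undefined, { model: 'model-v1' })); // 'model-v1'
```

Because `??` only falls through on `null`/`undefined`, a connector with `defaultModel` unset leaves the stored conversation model intact, which is exactly the fallback case the new tests cover.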
1 parent 45cb0af commit b61e3a2

File tree

5 files changed: +60 -8 lines

x-pack/platform/packages/shared/kbn-elastic-assistant/impl/assistant/api/index.test.tsx

Lines changed: 3 additions & 3 deletions

```diff
@@ -79,7 +79,7 @@ describe('API tests', () => {
       '/internal/elastic_assistant/actions/connector/foo/_execute',
       {
         ...staticDefaults,
-        body: '{"model":"gpt-4","message":"This is a test","subAction":"invokeAI","conversationId":"test","actionTypeId":".gen-ai","replacements":{},"screenContext":{"timeZone":"America/New_York"}}',
+        body: '{"message":"This is a test","subAction":"invokeAI","conversationId":"test","actionTypeId":".gen-ai","replacements":{},"screenContext":{"timeZone":"America/New_York"}}',
       }
     );
   });
@@ -91,7 +91,7 @@ describe('API tests', () => {
       '/internal/elastic_assistant/actions/connector/foo/_execute',
       {
         ...streamingDefaults,
-        body: '{"model":"gpt-4","message":"This is a test","subAction":"invokeStream","conversationId":"test","actionTypeId":".gen-ai","replacements":{},"screenContext":{"timeZone":"America/New_York"}}',
+        body: '{"message":"This is a test","subAction":"invokeStream","conversationId":"test","actionTypeId":".gen-ai","replacements":{},"screenContext":{"timeZone":"America/New_York"}}',
       }
     );
   });
@@ -164,7 +164,7 @@ describe('API tests', () => {
       '/internal/elastic_assistant/actions/connector/foo/_execute',
       {
         ...staticDefaults,
-        body: '{"model":"gpt-4","message":"This is a test","subAction":"invokeAI","conversationId":"test","actionTypeId":".gen-ai","replacements":{"auuid":"real.hostname"},"screenContext":{"timeZone":"America/New_York"},"alertsIndexPattern":".alerts-security.alerts-default","size":30}',
+        body: '{"message":"This is a test","subAction":"invokeAI","conversationId":"test","actionTypeId":".gen-ai","replacements":{"auuid":"real.hostname"},"screenContext":{"timeZone":"America/New_York"},"alertsIndexPattern":".alerts-security.alerts-default","size":30}',
       }
     );
   });
```

x-pack/platform/packages/shared/kbn-elastic-assistant/impl/assistant/api/index.tsx

Lines changed: 0 additions & 1 deletion

```diff
@@ -68,7 +68,6 @@ export const fetchConnectorExecuteAction = async ({
   });

   const requestBody = {
-    model: apiConfig?.model,
     message,
     subAction: isStream ? 'invokeStream' : 'invokeAI',
     conversationId,
```

x-pack/platform/packages/shared/kbn-elastic-assistant/impl/assistant/use_conversation/helpers.test.ts

Lines changed: 50 additions & 0 deletions

```diff
@@ -347,5 +347,55 @@ describe('useConversation helpers', () => {
       },
     });
   });
+
+  test('should prioritize connector defaultModel over stored conversation model', () => {
+    const connectorsWithModel: AIConnector[] = [
+      {
+        id: '123',
+        actionTypeId: '.gen-ai',
+        apiProvider: OpenAiProviderType.OpenAi,
+        config: {
+          provider: OpenAiProviderType.OpenAi,
+          defaultModel: 'gpt-4-turbo',
+        },
+      },
+    ] as unknown as AIConnector[];
+
+    const result = getConversationApiConfig({
+      allSystemPrompts,
+      conversation, // has stored model: 'gpt-3'
+      connectors: connectorsWithModel,
+      defaultConnector,
+    });
+
+    expect(result).toEqual({
+      apiConfig: {
+        connectorId: '123',
+        actionTypeId: '.gen-ai',
+        provider: OpenAiProviderType.OpenAi,
+        defaultSystemPromptId: '2',
+        model: 'gpt-4-turbo', // Should use connector's defaultModel, not conversation's stored 'gpt-3'
+      },
+    });
+  });
+
+  test('should fall back to stored conversation model when connector has no defaultModel', () => {
+    const result = getConversationApiConfig({
+      allSystemPrompts,
+      conversation, // has stored model: 'gpt-3'
+      connectors, // connectors have no defaultModel set
+      defaultConnector,
+    });
+
+    expect(result).toEqual({
+      apiConfig: {
+        connectorId: '123',
+        actionTypeId: '.gen-ai',
+        provider: OpenAiProviderType.OpenAi,
+        defaultSystemPromptId: '2',
+        model: 'gpt-3', // Should fall back to conversation's stored model
+      },
+    });
+  });
   });
 });
```

x-pack/platform/packages/shared/kbn-elastic-assistant/impl/assistant/use_conversation/helpers.ts

Lines changed: 1 addition & 1 deletion

```diff
@@ -133,7 +133,7 @@ export const getConversationApiConfig = ({
           actionTypeId: connector.actionTypeId,
           provider: connector.apiProvider ?? connectorApiProvider,
           defaultSystemPromptId: defaultSystemPrompt?.id,
-          model: conversation?.apiConfig?.model ?? connectorModel,
+          model: connectorModel ?? conversation?.apiConfig?.model,
         },
       }
     : ({
```

x-pack/solutions/security/plugins/elastic_assistant/server/lib/langchain/graphs/default_assistant_graph/index.ts

Lines changed: 6 additions & 3 deletions

```diff
@@ -76,8 +76,10 @@ export const callAssistantGraph: AgentExecutor<true | false> = async ({
    * creating a new instance, we prevent other uses of llm from binding and changing
    * the state unintentionally. For this reason, only call createLlmInstance at runtime
    */
-  const createLlmInstance = async () =>
-    !inferenceChatModelDisabled
+  const createLlmInstance = async () => {
+    const connector = await actionsClient.get({ id: connectorId });
+    const defaultModel = connector?.config?.defaultModel;
+    return !inferenceChatModelDisabled
       ? inference.getChatModel({
           request,
           connectorId,
@@ -102,7 +104,7 @@ export const callAssistantGraph: AgentExecutor<true | false> = async ({
           logger,
           // possible client model override,
           // let this be undefined otherwise so the connector handles the model
-          model: request.body.model,
+          model: request.body.model ?? defaultModel,
           // ensure this is defined because we default to it in the language_models
           // This is where the LangSmith logs (Metadata > Invocation Params) are set
           temperature: getDefaultArguments(llmType).temperature,
@@ -117,6 +119,7 @@ export const callAssistantGraph: AgentExecutor<true | false> = async ({
           pluginId: 'security_ai_assistant',
         },
       });
+  };

   const anonymizationFieldsRes =
     await dataClients?.anonymizationFieldsDataClient?.findDocuments<EsAnonymizationFieldsSchema>({
```
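On the server side, the patched `createLlmInstance` now looks up the connector at runtime and falls back to its `defaultModel` when the request carries no client model override. A simplified, self-contained sketch of that fallback; the `getConnector` stand-in and its config shape are illustrative assumptions, not the real `actionsClient.get` call:

```typescript
// Simplified sketch; the real code calls actionsClient.get({ id: connectorId }).
interface ConnectorConfig {
  defaultModel?: string;
}
interface Connector {
  id: string;
  config?: ConnectorConfig;
}

// Hypothetical stand-in for the async actions-client lookup.
async function getConnector(id: string): Promise<Connector> {
  return { id, config: { defaultModel: 'model-v2' } };
}

// Mirrors the patched precedence: client override first, then the
// connector's defaultModel. If both are undefined, the connector's own
// server-side default handles the model.
async function pickModel(
  connectorId: string,
  requestModel: string | undefined
): Promise<string | undefined> {
  const connector = await getConnector(connectorId);
  return requestModel ?? connector?.config?.defaultModel;
}

pickModel('foo', undefined).then((m) => console.log(m)); // 'model-v2'
```

This is needed only on the ActionsClient path; per the PR summary, the InferenceChat path already falls back to the connector default on its own.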

Comments (0)