diff --git a/solutions/security/ai/set-up-connectors-for-large-language-models-llm.md b/solutions/security/ai/set-up-connectors-for-large-language-models-llm.md
index badefcdc8d..0abf3f4c2b 100644
--- a/solutions/security/ai/set-up-connectors-for-large-language-models-llm.md
+++ b/solutions/security/ai/set-up-connectors-for-large-language-models-llm.md
@@ -32,7 +32,16 @@ Follow these guides to connect to one or more third-party LLM providers:
 
 You can [connect to LM Studio](/solutions/security/ai/connect-to-own-local-llm.md) to use a custom LLM deployed and managed by you.
 
+## Preconfigured connectors
+```{applies_to}
+stack: ga 9.0
+serverless: unavailable
+```
+
+You can use [preconfigured connectors](kibana://reference/connectors-kibana/pre-configured-connectors.md) to set up a third-party LLM connector.
+
+If you use a preconfigured connector for your LLM connector, we recommend adding the `exposeConfig: true` parameter within the `xpack.actions.preconfigured` section of the `kibana.yml` config file. This parameter makes debugging easier by adding configuration information to the debug logs, including which large language model the connector uses.
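
For reference (not part of the diff above), a preconfigured LLM connector entry in `kibana.yml` with `exposeConfig: true` might look like the following sketch. The connector id `my-openai-connector`, the name, and the API key are illustrative placeholders; the `.gen-ai` connector type and `apiUrl`/`apiProvider` fields assume the OpenAI connector:

```yaml
# kibana.yml — hypothetical preconfigured OpenAI connector
xpack.actions.preconfigured:
  my-openai-connector:          # placeholder connector id
    name: Preconfigured OpenAI connector
    actionTypeId: .gen-ai      # OpenAI connector type
    exposeConfig: true         # include connector config in debug logs
    config:
      apiProvider: OpenAI
      apiUrl: https://api.openai.com/v1/chat/completions
    secrets:
      apiKey: <your-api-key>   # placeholder; keep real keys out of source control
```

With `exposeConfig: true` set, the debug logs include the connector's `config` values (such as `apiUrl` and the model in use), which helps confirm that the connector is pointed at the intended LLM.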