---
navigation_title: "Use different models"
applies_to:
  stack: preview 9.2
  serverless:
    elasticsearch: preview
---

:::{warning}
These pages are currently hidden from the docs TOC and have `noindexed` meta headers.

**Go to the docs [landing page](/solutions/search/elastic-agent-builder.md).**
:::

# Using different models in {{agent-builder}}

{{agent-builder}} uses large language models (LLMs) to power agent reasoning and decision-making. By default, agents use the Elastic Managed LLM, but you can configure other models through Kibana connectors.

## Default model configuration

By default, {{agent-builder}} uses the Elastic Managed LLM connector running on the [Elastic Inference Service](/explore-analyze/elastic-inference/eis.md) {applies_to}`serverless: preview` {applies_to}`ess: preview 9.2`.

This managed service requires zero setup and no additional API key management.

Learn more about the [Elastic Managed LLM connector](kibana://reference/connectors-kibana/elastic-managed-llm) and [pricing](https://www.elastic.co/pricing).

## Change the default model

To use a model other than the default Elastic Managed LLM, configure a connector for your provider and then set it as the default.

### Use a pre-configured connector

1. Search for **GenAI Settings** in the global search field
2. Select your preferred connector from the **Default AI Connector** dropdown
3. Save your changes

### Create a new connector in the UI

1. Find connectors under **Alerts and Insights / Connectors** in the [global search bar](/explore-analyze/find-and-organize/find-apps-and-objects.md)
2. Select **Create Connector** and choose your model provider
3. Configure the connector with your API credentials and preferred model
4. Search for **GenAI Settings** in the global search field
5. Select your new connector from the **Default AI Connector** dropdown
6. Save your changes

For detailed instructions on creating connectors, refer to [Connectors](https://www.elastic.co/docs/deploy-manage/manage-connectors).

Learn more about [preconfigured connectors](https://www.elastic.co/docs/reference/kibana/connectors-kibana/pre-configured-connectors).

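If you're not sure which connectors already exist in your deployment (including preconfigured ones), you can list them through the Kibana Connectors API. The following is a minimal sketch in Python, assuming a Kibana URL and an API key with connector privileges; the endpoint and the `is_preconfigured` response flag come from the Connectors API, but verify them against the reference for your version.

```python
import requests

# Assumptions: your Kibana URL and an API key with read access to connectors.
KIBANA_URL = "https://my-deployment.kb.example.com"
API_KEY = "<api key>"

# List every connector in the space; preconfigured ones are flagged.
resp = requests.get(
    f"{KIBANA_URL}/api/actions/connectors",
    headers={"Authorization": f"ApiKey {API_KEY}", "kbn-xsrf": "true"},
)
resp.raise_for_status()

for connector in resp.json():
    kind = "preconfigured" if connector.get("is_preconfigured") else "custom"
    print(f"{connector['id']}: {connector['name']} ({kind})")
```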
## Connectors API

For programmatic access to connector management, refer to the [Connectors API documentation]({{kib-serverless-apis}}group/endpoint-connectors).

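For example, you could create an OpenAI connector programmatically. Treat this sketch as illustrative rather than authoritative: the `.gen-ai` connector type and its `config`/`secrets` fields follow the Kibana OpenAI connector schema, but double-check the request shape against the API reference before relying on it.

```python
import requests

KIBANA_URL = "https://my-deployment.kb.example.com"  # assumption
API_KEY = "<api key>"  # assumption: needs connector management privileges

# Create an OpenAI connector (connector type `.gen-ai`).
resp = requests.post(
    f"{KIBANA_URL}/api/actions/connector",
    headers={"Authorization": f"ApiKey {API_KEY}", "kbn-xsrf": "true"},
    json={
        "name": "openai-gpt-4-1",
        "connector_type_id": ".gen-ai",
        "config": {
            "apiProvider": "OpenAI",
            "apiUrl": "https://api.openai.com/v1/chat/completions",
            "defaultModel": "gpt-4.1",
        },
        "secrets": {"apiKey": "<your model provider API key>"},
    },
)
resp.raise_for_status()
print("Created connector:", resp.json()["id"])
```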
## Recommended models

{{agent-builder}} requires models with strong reasoning and tool-calling capabilities. State-of-the-art models perform significantly better than smaller or older models.

### Recommended model families

The following models are known to work well with {{agent-builder}}:

- **OpenAI**: GPT-4.1, GPT-4o
- **Anthropic**: Claude Sonnet 4.5, Claude Sonnet 4, Claude Sonnet 3.7
- **Google**: Gemini 2.5 Pro

### Why model quality matters

{{agent-builder}} relies on advanced LLM capabilities, including:

- **Function calling**: Models must accurately select appropriate tools and construct valid parameters from natural language requests (see the sketch after this list)
- **Multi-step reasoning**: Agents need to plan, execute, and adapt based on tool results across multiple iterations
- **Structured output**: Models must produce properly formatted responses that the agent framework can parse

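To make the function-calling requirement concrete, here is a sketch of the kind of request an agent framework sends to an OpenAI-compatible chat completions endpoint. Everything here is illustrative (the `search_index` tool is hypothetical, not an {{agent-builder}} tool); the point is that the model must answer with a well-formed `tool_calls` entry containing valid JSON arguments.

```python
import requests

# Illustrative request: the model is offered one (hypothetical) tool.
resp = requests.post(
    "https://api.openai.com/v1/chat/completions",  # any OpenAI-compatible endpoint
    headers={"Authorization": "Bearer <api key>"},
    json={
        "model": "gpt-4.1",
        "messages": [{"role": "user", "content": "How many error logs were ingested yesterday?"}],
        "tools": [{
            "type": "function",
            "function": {
                "name": "search_index",  # hypothetical tool, for illustration only
                "description": "Run a query against an index",
                "parameters": {
                    "type": "object",
                    "properties": {"query": {"type": "string"}},
                    "required": ["query"],
                },
            },
        }],
    },
)

# A capable model responds with a structured tool call whose arguments parse
# as valid JSON; weaker models often return malformed calls, or none at all.
message = resp.json()["choices"][0]["message"]
print(message.get("tool_calls"))
```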
Smaller or less capable models may produce errors like:

```console-response
Error: Invalid function call syntax
```

```console-response
Error executing agent: No tool calls found in the response.
```

While any chat-completion-compatible connector can technically be configured, we strongly recommend using state-of-the-art models for reliable agent performance.

:::{note}
GPT-4o-mini and similar smaller models are not recommended for {{agent-builder}} because they lack the necessary capabilities for reliable agent workflows.
:::

## Connect a local LLM

You can connect a locally hosted LLM to Elastic using the OpenAI connector. This requires your local LLM to be compatible with the OpenAI API format.

### Requirements

**Model selection:**
- The model name must include "instruct" to work with Elastic
- Download models from trusted sources only
- Consider parameter size, context window, and quantization format for your needs

**Integration setup:**
- For Elastic Cloud: requires a reverse proxy (such as Nginx) to authenticate requests using a bearer token and forward them to your local LLM endpoint
- For self-managed deployments on the same host as your LLM: can connect directly without a reverse proxy
- Your local LLM server must expose an OpenAI-compatible API

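Before wiring up the connector, it can help to confirm that the endpoint answers an OpenAI-style chat completion request. A minimal sketch, assuming a local server on port 1234 and a bearer token; adjust the URL, token, and model name to match your setup.

```python
import requests

# Assumptions: local OpenAI-compatible server on port 1234. On Elastic Cloud,
# target your reverse proxy URL and use its bearer token instead.
URL = "http://localhost:1234/v1/chat/completions"
TOKEN = "<bearer token or API key>"

resp = requests.post(
    URL,
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "model": "local-model",
        "messages": [{"role": "user", "content": "Reply with OK if you can read this."}],
    },
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```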
### Configure the connector

:::::{stepper}
::::{step} Set up your local LLM server

Ensure your local LLM is running and accessible via an OpenAI-compatible API endpoint (the test request sketched above is one way to confirm this).

::::

::::{step} Create the OpenAI connector

1. Log in to your Elastic deployment
2. Find connectors under **Alerts and Insights / Connectors** in the [global search bar](/explore-analyze/find-and-organize/find-apps-and-objects.md)
3. Select **Create Connector** and choose **OpenAI**
4. Name your connector to help track the model version you're using
5. Under **Select an OpenAI provider**, select **Other (OpenAI Compatible Service)**

::::

::::{step} Configure connection details

1. Under **URL**, enter:
   - For Elastic Cloud: your reverse proxy domain followed by `/v1/chat/completions`
   - For same-host self-managed deployments: `http://localhost:1234/v1/chat/completions` (adjust the port as needed)
2. Under **Default model**, enter `local-model`
3. Under **API key**, enter:
   - For Elastic Cloud: your reverse proxy authentication token
   - For same-host self-managed deployments: your LLM server's API key
4. Select **Save**

::::

::::{step} Set as default (optional)

To use your local model as the default for {{agent-builder}}:

1. Search for **GenAI Settings** in the global search field
2. Select your local LLM connector from the **Default AI Connector** dropdown
3. Save your changes

::::

:::::

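To verify the saved connector outside the UI, you can execute it through the Connectors API. Treat this sketch as an assumption-laden starting point: the `_execute` endpoint is part of the Connectors API, but the `run` subaction and the shape of its `body` parameter for Gen AI connectors should be checked against the reference for your version.

```python
import json
import requests

KIBANA_URL = "https://my-deployment.kb.example.com"  # assumption
API_KEY = "<api key>"            # assumption: needs connector execute privileges
CONNECTOR_ID = "<connector id>"  # shown in the connector details after saving

# Execute the connector with a minimal chat completion payload.
resp = requests.post(
    f"{KIBANA_URL}/api/actions/connector/{CONNECTOR_ID}/_execute",
    headers={"Authorization": f"ApiKey {API_KEY}", "kbn-xsrf": "true"},
    json={
        "params": {
            "subAction": "run",
            "subActionParams": {
                "body": json.dumps({
                    "model": "local-model",
                    "messages": [{"role": "user", "content": "Hello"}],
                })
            },
        }
    },
)
resp.raise_for_status()
print(resp.json().get("status"))
```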
## Related pages

- [Limitations and known issues](limitations-known-issues.md): Current limitations around model selection
- [Get started](get-started.md): Initial setup and configuration
- [Connectors](/deploy-manage/manage-connectors.md): Detailed connector configuration guide