---
-navigation_title: "Use different models"
+navigation_title: "Models"
applies_to:
  stack: preview 9.2
  serverless:
    elasticsearch: preview
+    observability: unavailable
+    security: unavailable
---

-:::{warning}
-These pages are currently hidden from the docs TOC and have `noindexed` meta headers.
+# Using different models in {{agent-builder}}

-**Go to the docs [landing page](/solutions/search/elastic-agent-builder.md).**
+{{agent-builder}} uses large language models (LLMs) to power agent reasoning and decision-making. By default, agents use the Elastic Managed LLM, but you can configure other models through Kibana connectors.
+
+## Default model configuration
+
+By default, {{agent-builder}} uses the Elastic Managed LLM connector running on the [Elastic Inference Service](/explore-analyze/elastic-inference/eis.md) {applies_to}`serverless: preview` {applies_to}`ess: preview 9.2`.
+
+This managed service requires no additional setup or API key management.
+
+Learn more about the [Elastic Managed LLM connector](kibana://reference/connectors-kibana/elastic-managed-llm.md) and [pricing](https://www.elastic.co/pricing).
+
+## Change the default model
+
+To use a model other than the Elastic Managed LLM, select a configured connector and set it as the default.
+
+### Use a pre-configured connector
+
+1. Search for **GenAI Settings** in the global search field.
+2. Select your preferred connector from the **Default AI Connector** dropdown.
+3. Save your changes.
+
+### Create a new connector in the UI
+
+1. Find **Connectors** under **Alerts and Insights** using the [global search bar](/explore-analyze/find-and-organize/find-apps-and-objects.md).
+2. Select **Create Connector**, then choose your model provider.
+3. Configure the connector with your API credentials and preferred model.
+4. Search for **GenAI Settings** in the global search field.
+5. Select your new connector from the **Default AI Connector** dropdown under **Custom connectors**.
+6. Save your changes.
+
+For detailed instructions on creating connectors, refer to [Connectors](https://www.elastic.co/docs/deploy-manage/manage-connectors).
+
+Learn more about [preconfigured connectors](https://www.elastic.co/docs/reference/kibana/connectors-kibana/pre-configured-connectors).
+
+#### Connect a local LLM
+
+You can connect a locally hosted LLM to Elastic using the OpenAI connector. This requires your local LLM to be compatible with the OpenAI API format.
+
+Refer to the [OpenAI connector documentation](kibana://reference/connectors-kibana/openai-action-type.md) for detailed setup instructions.
+
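+If you prefer to script the setup, the following minimal sketch creates such a connector through the Kibana connectors API. It is illustrative only: the Kibana URL, API key, endpoint URL, model name, and the exact `config`/`secrets` field values are assumptions you should replace with the values described in the OpenAI connector documentation for your deployment (for example, a model served locally by vLLM or Ollama).
+
+```python
+# Hedged sketch: register a local, OpenAI-compatible LLM as an OpenAI connector
+# via the Kibana connectors API. All URLs, keys, and model names are placeholders.
+import requests
+
+KIBANA_URL = "https://my-kibana.example.com"  # hypothetical Kibana base URL
+API_KEY = "REDACTED"                          # hypothetical Kibana API key
+
+response = requests.post(
+    f"{KIBANA_URL}/api/actions/connector",
+    headers={
+        "Authorization": f"ApiKey {API_KEY}",
+        "kbn-xsrf": "true",  # required by Kibana HTTP APIs
+        "Content-Type": "application/json",
+    },
+    json={
+        "name": "Local LLM",
+        "connector_type_id": ".gen-ai",  # OpenAI connector type
+        "config": {
+            "apiProvider": "Other",  # assumed value for OpenAI-compatible servers
+            "apiUrl": "http://localhost:11434/v1/chat/completions",
+            "defaultModel": "llama3.1:70b",  # whichever model your server exposes
+        },
+        # Many local servers ignore the key, but the connector still expects one.
+        "secrets": {"apiKey": "unused"},
+    },
+)
+response.raise_for_status()
+print(response.json()["id"])  # connector ID to select in GenAI Settings
+```
+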
+## Connectors API
+
+For programmatic access to connector management, refer to the [Connectors API documentation]({{kib-serverless-apis}}group/endpoint-connectors).
+
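+For orientation, the short sketch below lists the connectors configured in the current space so you can look up a connector's ID and type programmatically. As above, the Kibana URL and API key are placeholders, and the response fields shown are common ones rather than a complete schema; treat the Connectors API documentation as authoritative.
+
+```python
+# Hedged sketch: list configured connectors via the Kibana connectors API.
+import requests
+
+KIBANA_URL = "https://my-kibana.example.com"  # hypothetical Kibana base URL
+API_KEY = "REDACTED"                          # hypothetical Kibana API key
+
+response = requests.get(
+    f"{KIBANA_URL}/api/actions/connectors",
+    headers={"Authorization": f"ApiKey {API_KEY}", "kbn-xsrf": "true"},
+)
+response.raise_for_status()
+
+for connector in response.json():
+    # Each entry typically includes an ID, a display name, and a type such as ".gen-ai".
+    print(connector["id"], connector["name"], connector["connector_type_id"])
+```
+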
+## Recommended models
+
+{{agent-builder}} requires models with strong reasoning and tool-calling capabilities. State-of-the-art models perform significantly better than smaller or older models.
+
+The following models are known to work well with {{agent-builder}}:
+
+- **OpenAI**: GPT-4.1, GPT-4o
+- **Anthropic**: Claude Sonnet 4.5, Claude Sonnet 4, Claude 3.7 Sonnet
+- **Google**: Gemini 2.5 Pro
+
+### Why model quality matters
+
+{{agent-builder}} relies on advanced LLM capabilities, including:
+
+- **Function calling**: Models must accurately select appropriate tools and construct valid parameters from natural language requests
+- **Multi-step reasoning**: Agents need to plan, execute, and adapt based on tool results across multiple iterations
+- **Structured output**: Models must produce properly formatted responses that the agent framework can parse
+
+Smaller or less capable models may produce errors like the following:
+
+```console-response
+Error: Invalid function call syntax
+```
+
+```console-response
+Error executing agent: No tool calls found in the response.
+```
+
+While any chat-completion-compatible connector can technically be configured, we strongly recommend using state-of-the-art models for reliable agent performance.
+
+:::{note}
+GPT-4o-mini and similar smaller models are not recommended for {{agent-builder}} as they lack the necessary capabilities for reliable agent workflows.
:::

-# Using different models in {{agent-builder}}
+## Related resources
+
+- [Limitations and known issues](limitations-known-issues.md): Current limitations around model selection
+- [Get started](get-started.md): Initial setup and configuration
+- [Connectors](/deploy-manage/manage-connectors.md): Detailed connector configuration guide