
Commit 1c56d57

[Agent Builder] Add page about models
1 parent 2e29c14 commit 1c56d57

5 files changed: +170 -2 lines changed


solutions/search/agent-builder/get-started.md

Lines changed: 6 additions & 0 deletions
@@ -75,6 +75,12 @@ Learn more in [Agent Chat](chat.md).
 
 ::::
 
+::::{step} Configure model (optional)
+
+By default, {{agent-builder}} uses the Elastic Managed LLM. To use a different model, refer to [model selection and configuration](models.md).
+
+::::
+
 ::::{step} Begin building agents and tools
 
 Once you've tested the default **Elastic AI Agent** with the [built-in Elastic tools](tools.md), you can begin [building your own agents](agent-builder-agents.md#create-a-new-agent) with custom instructions and [creating your own tools](tools.md#create-custom-tools) to assign them.

solutions/search/agent-builder/limitations-known-issues.md

Lines changed: 2 additions & 2 deletions
@@ -22,9 +22,9 @@ While in private technical preview, {{agent-builder}} is not enabled by default.
 
 ### Model selection
 
-Initially, {{agent-builder}} only supports working with the [Elastic Managed LLM](kibana://reference/connectors-kibana/elastic-managed-llm.md) running on the [Elastic Inference Service](/explore-analyze/elastic-inference/eis.md) {applies_to}`serverless: preview` {applies_to}`ess: preview 9.2`.
+Initially, {{agent-builder}} defaults to working with the [Elastic Managed LLM](kibana://reference/connectors-kibana/elastic-managed-llm.md) running on the [Elastic Inference Service](/explore-analyze/elastic-inference/eis.md) {applies_to}`serverless: preview` {applies_to}`ess: preview 9.2`.
 
-Learn about [pricing](https://www.elastic.co/pricing/serverless-search) for the Elastic Managed LLM.
+Learn more on the [models page](models.md).
 
 ## Known issues

solutions/search/agent-builder/models.md

Lines changed: 154 additions & 0 deletions
@@ -0,0 +1,154 @@
---
navigation_title: "Use different models"
applies_to:
  stack: preview 9.2
  serverless:
    elasticsearch: preview
---

:::{warning}
These pages are currently hidden from the docs TOC and have `noindexed` meta headers.

**Go to the docs [landing page](/solutions/search/elastic-agent-builder.md).**
:::

# Using different models in {{agent-builder}}

{{agent-builder}} uses large language models (LLMs) to power agent reasoning and decision-making. By default, agents use the Elastic Managed LLM, but you can configure other models through Kibana connectors.

## Default model configuration

By default, {{agent-builder}} uses the Elastic Managed LLM connector running on the [Elastic Inference Service](/explore-analyze/elastic-inference/eis.md) {applies_to}`serverless: preview` {applies_to}`ess: preview 9.2`.

This managed service requires zero setup and no additional API key management.

Learn more about the [Elastic Managed LLM connector](kibana://reference/connectors-kibana/elastic-managed-llm) and [pricing](https://www.elastic.co/pricing).

## Change the default model

By default, {{agent-builder}} uses the Elastic Managed LLM. To use a different model, you need a configured connector, which you then set as the default.

### Use a pre-configured connector

1. Search for **GenAI Settings** in the global search field.
2. Select your preferred connector from the **Default AI Connector** dropdown.
3. Save your changes.

### Create a new connector in the UI

1. Find connectors under **Alerts and Insights / Connectors** in the [global search bar](/explore-analyze/find-and-organize/find-apps-and-objects.md).
2. Select **Create Connector**, then choose your model provider.
3. Configure the connector with your API credentials and preferred model.
4. Search for **GenAI Settings** in the global search field.
5. Select your new connector from the **Default AI Connector** dropdown.
6. Save your changes.

For detailed instructions on creating connectors, refer to [Connectors](https://www.elastic.co/docs/deploy-manage/manage-connectors).

Learn more about [preconfigured connectors](https://www.elastic.co/docs/reference/kibana/connectors-kibana/pre-configured-connectors).

## Connectors API

For programmatic access to connector management, refer to the [Connectors API documentation]({{kib-serverless-apis}}group/endpoint-connectors).
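
As a hedged sketch of what that programmatic path might look like (not taken from the Elastic docs): the snippet below prepares a request against Kibana's create-connector endpoint. The `/api/actions/connector` path, the `.gen-ai` connector type id, and the `config`/`secrets` field names are assumptions to verify against the Connectors API reference before use.

```python
import json
from urllib import request


def build_openai_connector_payload(name: str, api_url: str, default_model: str) -> dict:
    """Build a request body for Kibana's create-connector endpoint.

    The ".gen-ai" type id and the config/secrets field names are assumptions
    based on the Kibana OpenAI connector; check them against the Connectors
    API reference before relying on this.
    """
    return {
        "name": name,
        "connector_type_id": ".gen-ai",
        "config": {
            "apiProvider": "OpenAI",
            "apiUrl": api_url,
            "defaultModel": default_model,
        },
        "secrets": {"apiKey": "YOUR_API_KEY"},  # placeholder credential
    }


def create_connector_request(kibana_url: str, api_key: str, payload: dict) -> request.Request:
    """Prepare (but do not send) a POST to /api/actions/connector."""
    return request.Request(
        f"{kibana_url}/api/actions/connector",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "kbn-xsrf": "true",  # Kibana HTTP APIs require this header
            "Authorization": f"ApiKey {api_key}",
        },
        method="POST",
    )
```

Sending the prepared request with `urllib.request.urlopen` would create the connector, which could then be selected under **GenAI Settings** as described above.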

## Recommended models

{{agent-builder}} requires models with strong reasoning and tool-calling capabilities. State-of-the-art models perform significantly better than smaller or older models.

### Recommended model families

The following models are known to work well with {{agent-builder}}:

- **OpenAI**: GPT-4.1, GPT-4o
- **Anthropic**: Claude Sonnet 4.5, Claude Sonnet 4, Claude Sonnet 3.7
- **Google**: Gemini 2.5 Pro

### Why model quality matters

{{agent-builder}} relies on advanced LLM capabilities, including:

- **Function calling**: models must accurately select appropriate tools and construct valid parameters from natural language requests
- **Multi-step reasoning**: agents need to plan, execute, and adapt based on tool results across multiple iterations
- **Structured output**: models must produce properly formatted responses that the agent framework can parse

Smaller or less capable models may produce errors like:

```console-response
Error: Invalid function call syntax
```

```console-response
Error executing agent: No tool calls found in the response.
```

While any chat-completion-compatible connector can technically be configured, we strongly recommend using state-of-the-art models for reliable agent performance.

:::{note}
GPT-4o-mini and similar smaller models are not recommended for {{agent-builder}} because they lack the capabilities needed for reliable agent workflows.
:::

## Connect a local LLM

You can connect a locally hosted LLM to Elastic using the OpenAI connector. This requires your local LLM server to expose an OpenAI-compatible API.

### Requirements

**Model selection:**

- The model name must include "instruct" to work with Elastic
- Download models from trusted sources only
- Choose the parameter size, context window, and quantization format that fit your needs

**Integration setup:**

- For Elastic Cloud: requires a reverse proxy (such as Nginx) that authenticates requests with a bearer token and forwards them to your local LLM endpoint
- For self-managed deployments on the same host as your LLM: can connect directly, without a reverse proxy
- Your local LLM server must serve an OpenAI-compatible chat completions API
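
The Elastic Cloud path above can be sketched as a minimal Nginx server block. This is an illustrative fragment, not Elastic-provided configuration: the domain, port, and token are placeholders, and a production setup would also need TLS certificates and tighter matching.

```nginx
server {
    listen 443 ssl;
    server_name llm-proxy.example.com;   # hypothetical proxy domain
    # ssl_certificate / ssl_certificate_key directives omitted for brevity

    location /v1/chat/completions {
        # Reject requests that do not carry the expected bearer token
        if ($http_authorization != "Bearer YOUR_PROXY_TOKEN") {
            return 401;
        }
        # Forward authenticated requests to the local LLM endpoint
        proxy_pass http://localhost:1234/v1/chat/completions;
        proxy_set_header Host $host;
    }
}
```

The proxy's public URL and token are what you would then enter as the connector's **URL** and **API key** in the steps below.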

### Configure the connector

:::::{stepper}
::::{step} Set up your local LLM server

Ensure your local LLM is running and accessible via an OpenAI-compatible API endpoint.

::::

::::{step} Create the OpenAI connector

1. Log in to your Elastic deployment.
2. Find connectors under **Alerts and Insights / Connectors** in the [global search bar](/explore-analyze/find-and-organize/find-apps-and-objects.md).
3. Select **Create Connector** and select **OpenAI**.
4. Name your connector to help track the model version you're using.
5. Under **Select an OpenAI provider**, select **Other (OpenAI Compatible Service)**.

::::

::::{step} Configure connection details

1. Under **URL**, enter:
   - For Elastic Cloud: your reverse proxy domain + `/v1/chat/completions`
   - For same-host self-managed: `http://localhost:1234/v1/chat/completions` (adjust the port as needed)
2. Under **Default model**, enter `local-model`.
3. Under **API key**, enter:
   - For Elastic Cloud: your reverse proxy authentication token
   - For same-host self-managed: your LLM server's API key
4. Select **Save**.

::::

::::{step} Set as default (optional)

To use your local model as the default for {{agent-builder}}:

1. Search for **GenAI Settings** in the global search field.
2. Select your local LLM connector from the **Default AI Connector** dropdown.
3. Save your changes.

::::

:::::

## Related pages

- [Limitations and known issues](limitations-known-issues.md): current limitations around model selection
- [Get started](get-started.md): initial setup and configuration
- [Connectors](/deploy-manage/manage-connectors): detailed connector configuration guide

solutions/search/elastic-agent-builder.md

Lines changed: 7 additions & 0 deletions
@@ -54,6 +54,13 @@ To get started you need an Elastic deployment and you must enable the feature.
 
 [**Get started with {{agent-builder}}**](agent-builder/get-started.md)
 
+## Model selection
+
+By default, agents use the Elastic Managed LLM, but you can configure other model providers using connectors, including local LLMs deployed on your infrastructure.
+
+[**Learn more about model selection**](agent-builder/models.md)
+
+
 ## Programmatic interfaces
 
 {{agent-builder}} provides APIs and LLM integration options for programmatic access and automation.

solutions/toc.yml

Lines changed: 1 addition & 0 deletions
@@ -46,6 +46,7 @@ toc:
 - file: search/using-openai-compatible-models.md
 - hidden: search/elastic-agent-builder.md
 - hidden: search/agent-builder/get-started.md
+- hidden: search/agent-builder/models.md
 - hidden: search/agent-builder/chat.md
 - hidden: search/agent-builder/agent-builder-agents.md
 - hidden: search/agent-builder/tools.md
