solutions/search/agent-builder/models.md (2 additions & 56 deletions)
@@ -26,7 +26,7 @@ Learn more about the [Elastic Managed LLM connector](kibana://reference/connecto

## Change the default model

-By default, {{agent-builder}} uses the Elastic Managed LLM. To use a different model, you'll need a configured connector and then set it as the default.
+By default, {{agent-builder}} uses the Elastic Managed LLM. To use a different model, select a configured connector and set it as the default.

### Use a pre-configured connector

@@ -91,61 +91,7 @@ GPT-4o-mini and similar smaller models are not recommended for {{agent-builder}}

You can connect a locally hosted LLM to Elastic using the OpenAI connector. This requires your local LLM to be compatible with the OpenAI API format.

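The connector only needs the server to speak the OpenAI chat-completions wire format. A quick way to confirm that before wiring anything into Kibana is to point the official `openai` Python client at the local endpoint; a minimal sketch, assuming a server such as LM Studio or llama.cpp's `llama-server` listening on port 1234 (the URL, key, and model name below are placeholders):

```python
# Compatibility check: if the official OpenAI client (pip install openai) can
# complete a chat request against the local server, the OpenAI connector can too.
# base_url, api_key, and model are placeholders; adjust them to your local setup.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # local endpoint, not api.openai.com
    api_key="local-key",                  # many local servers accept any non-empty key
)

response = client.chat.completions.create(
    model="local-model",  # some local servers ignore this and use the loaded model
    messages=[{"role": "user", "content": "Reply with the single word: pong"}],
)
print(response.choices[0].message.content)
```
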
-### Requirements
-
-**Model selection:**
-- Download from trusted sources only
-- Consider parameter size, context window, and quantization format for your needs
-- Prefer "instruct" variants over "base" or "chat" versions when multiple variants are available, as instruct models are typically better tuned for following instructions
-
-**Integration setup:**
-- For Elastic Cloud: Requires a reverse proxy (such as Nginx) to authenticate requests using a bearer token and forward them to your local LLM endpoint
-- For self-managed deployments on the same host as your LLM: Can connect directly without a reverse proxy
-- Your local LLM server must use the OpenAI SDK for API compatibility
-
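For the Elastic Cloud case, the reverse proxy has one narrow job: reject requests that don't carry the expected bearer token, and forward everything else to the local LLM endpoint. Nginx is the typical choice; purely to illustrate the behavior such a proxy must implement, here is a hypothetical stdlib-Python sketch (no TLS, streaming, or hardening; `EXPECTED_TOKEN`, `UPSTREAM`, and the port are placeholders):

```python
# Illustration of the reverse proxy's contract: authenticate a bearer token,
# then forward the request body to the local LLM untouched. Use Nginx or
# similar in practice; this sketch omits TLS, streaming, and timeouts.
import http.server
import urllib.error
import urllib.request

EXPECTED_TOKEN = "change-me"        # token configured in the OpenAI connector
UPSTREAM = "http://localhost:1234"  # local OpenAI-compatible server

class Proxy(http.server.BaseHTTPRequestHandler):
    def do_POST(self):
        # Reject anything without the agreed bearer token.
        if self.headers.get("Authorization") != f"Bearer {EXPECTED_TOKEN}":
            self.send_error(401, "missing or invalid bearer token")
            return
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        req = urllib.request.Request(
            UPSTREAM + self.path,  # e.g. /v1/chat/completions
            data=body,
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        try:
            with urllib.request.urlopen(req) as resp:
                status, data = resp.status, resp.read()
        except urllib.error.HTTPError as err:
            status, data = err.code, err.read()  # pass upstream errors through
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

if __name__ == "__main__":
    http.server.HTTPServer(("", 8443), Proxy).serve_forever()
```
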
-### Configure the connector
-
-:::::{stepper}
-::::{step} Set up your local LLM server
-
-Ensure your local LLM is running and accessible via an OpenAI-compatible API endpoint.
-
-::::
-
-::::{step} Create the OpenAI connector
-
-1. Log in to your Elastic deployment
-2. Find connectors under **Alerts and Insights / Connectors** in the [global search bar](/explore-analyze/find-and-organize/find-apps-and-objects.md)
-3. Select **Create Connector** and select **OpenAI**
-4. Name your connector to help track the model version you're using
-5. Under **Select an OpenAI provider**, select **Other (OpenAI Compatible Service)**
-
-::::
-
-::::{step} Configure connection details
-
-1. Under **URL**, enter:
-   - For Elastic Cloud: Your reverse proxy domain + `/v1/chat/completions`
-   - For same-host self-managed: `http://localhost:1234/v1/chat/completions` (adjust port as needed)
-2. Under **Default model**, enter `local-model`
-3. Under **API key**, enter:
-   - For Elastic Cloud: Your reverse proxy authentication token
-   - For same-host self-managed: Your LLM server's API key
-4. Select **Save**
-
-::::
-
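The three values entered in this step are what each call uses: the connector POSTs an OpenAI-style JSON body to the URL, sends the API key as a bearer token (the scheme OpenAI-compatible services expect), and falls back to the default model name when a request doesn't specify one. As a sanity check, the same request can be made with the standard library alone (all three constants below are placeholders):

```python
# Pre-flight the URL / default model / API key trio before saving the connector,
# by sending a request shaped like the ones the connector will send.
import json
import urllib.request

URL = "http://localhost:1234/v1/chat/completions"  # the **URL** field
API_KEY = "local-key"                              # the **API key** field
MODEL = "local-model"                              # the **Default model** field

body = json.dumps({
    "model": MODEL,
    "messages": [{"role": "user", "content": "ping"}],
}).encode()

req = urllib.request.Request(
    URL,
    data=body,
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",  # bearer auth, as the connector sends it
    },
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["choices"][0]["message"]["content"])
```
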
-::::{step} Set as default (optional)
-
-To use your local model as the default for {{agent-builder}}:
-
-1. Search for **GenAI Settings** in the global search field
-2. Select your local LLM connector from the **Default AI Connector** dropdown
-3. Save your changes
-
-::::
-
-:::::
+Refer to the [OpenAI connector documentation](kibana://reference/connectors-kibana/openai.md) for detailed setup instructions.