solutions/observability/observability-ai-assistant.md
The AI Assistant connects to one of these supported LLM providers:

## Add data to the AI Assistant knowledge base [obs-ai-add-data]

The AI Assistant uses one of the following text embedding models to run semantic search against the internal knowledge base index. The top results are passed to the LLM as context (retrieval‑augmented generation), producing more accurate and grounded responses:

* [ELSER](/explore-analyze/machine-learning/nlp/ml-nlp-elser.md): Recommended for English-only use cases.
* [E5](/explore-analyze/machine-learning/nlp/ml-nlp-e5.md): {applies_to}`stack: ga 9.1` Recommended for non-English use cases.

Adding data such as Runbooks, GitHub issues, internal documentation, and Slack messages to the knowledge base gives the AI Assistant context to provide more specific assistance.
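
Conceptually, the retrieval step is a semantic search over an index whose text is stored in a `semantic_text` field. The AI Assistant runs this step internally, so the following is only an illustrative sketch (the index name, field name, and query are placeholders), but it shows the kind of query that surfaces the top-matching entries passed to the LLM:

```console
// Illustrative only: the AI Assistant performs this retrieval internally.
// The index name "my-knowledge-base-index" and field "content" are placeholders.
GET my-knowledge-base-index/_search
{
  "size": 3,
  "query": {
    "semantic": {
      "field": "content",
      "query": "How do we roll back a failed deploy of the checkout service?"
    }
  }
}
```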

Add data to the knowledge base with one or more of the following methods:

Field names in custom indices have no specific requirements. Any `semantic_text`
- Create a connector using the [Connector APIs](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-connector)

2. **Create embeddings** (choose one method):
   - [`semantic_text` field](#obs-ai-search-connectors-semantic-text): Recommended workflow, which handles model setup automatically and allows the use of any available ML model (ELSER, E5, or custom models).
- [ML pipeline](#obs-ai-search-connectors-ml-embeddings): Requires manual setup of the ELSER model and inference pipeline.

#### Option 1: Use a `semantic_text` field type to create embeddings (recommended) [obs-ai-search-connectors-semantic-text]

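A minimal sketch of this option, assuming a hypothetical index named `search-internal-docs` with a `body` field. Mapping the field as `semantic_text` lets Elasticsearch generate embeddings automatically; by default the preconfigured ELSER endpoint is used, and you can set `inference_id` to point to an E5 or custom inference endpoint instead:

```console
// Hypothetical index and field names. By default, semantic_text uses the
// preconfigured ELSER endpoint; set "inference_id" to use E5 or a custom model.
PUT search-internal-docs
{
  "mappings": {
    "properties": {
      "body": { "type": "semantic_text" }
    }
  }
}
```
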
After creating the pipeline, complete the following steps:

Ask the AI Assistant something related to the indexed data.
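
For reference, the inference pipeline that this option requires is an ingest pipeline with an `inference` processor. A minimal sketch, assuming hypothetical pipeline and field names and that the ELSER v2 model (`.elser_model_2`) is already deployed:

```console
// Hypothetical pipeline and field names; assumes ELSER v2 is deployed.
PUT _ingest/pipeline/internal-docs-elser
{
  "processors": [
    {
      "inference": {
        "model_id": ".elser_model_2",
        "input_output": [
          {
            "input_field": "body",
            "output_field": "body_embedding"
          }
        ]
      }
    }
  ]
}
```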

### Add user-specific system prompts [obs-ai-assistant-user-prompt]

User-specific prompts customize how the AI Assistant responds by appending personalized instructions to the built-in system prompts. For example, you could specify "Always respond in French," and all subsequent responses will be in French.

If asked about a Kubernetes pod, namespace, cluster, location, or owner, return
</kubernetes_info>
```

### Choose the Knowledge Base language model
```{applies_to}
stack: ga 9.1
```
Choose the default language model for the AI Assistant in the AI Assistant settings under **Set text embeddings model**.

* [ELSER](/explore-analyze/machine-learning/nlp/ml-nlp-elser.md): Recommended for English-only use cases.
* [E5](/explore-analyze/machine-learning/nlp/ml-nlp-e5.md): Supports multilingual use cases.

Select the language model and click **Update**.

When you switch models, all existing Knowledge Base entries must be reindexed. Entries are unavailable until reindexing is complete.

To have the AI Assistant respond in a language other than English, set a [user-specific prompt](#obs-ai-assistant-user-prompt).

## Interact with the AI Assistant [obs-ai-interact]

::::{important}