Renaming search connectors to content connectors (#1974)

This PR renames search connectors to content connectors in the elasticsearch repository, based on #1165.

Connected item in the elasticsearch repo: elastic/elasticsearch#130309
deploy-manage/manage-connectors.md (+1 -1)

@@ -15,7 +15,7 @@ products:
  Connectors serve as a central place to store connection information for both Elastic and third-party systems. They enable the linking of actions to rules, which execute as background tasks on the {{kib}} server when rule conditions are met. This allows rules to route actions to various destinations such as log files, ticketing systems, and messaging tools. Different {{kib}} apps may have their own rule types, but they typically share connectors. The **{{stack-manage-app}} > {{connectors-ui}}** provides a central location to view and manage all connectors in the current space.

  ::::{note}
- This page is about {{kib}} connectors that integrate with services like generative AI model providers. If you’re looking for Search connectors that synchronize third-party data into {{es}}, refer to [Connector clients](elasticsearch://reference/search-connectors/index.md).
+ This page is about {{kib}} connectors that integrate with services like generative AI model providers. If you’re looking for content connectors that synchronize third-party data into {{es}}, refer to [Connector clients](elasticsearch://reference/search-connectors/index.md).
solutions/observability/observability-ai-assistant.md (+6 -6)
@@ -51,7 +51,7 @@ The AI assistant requires the following:
  * The knowledge base requires a 4 GB {{ml}} node.
    In {{ecloud}} or {{ece}}, if you have Machine Learning autoscaling enabled, Machine Learning nodes will be started when using the knowledge base and AI Assistant. Therefore using these features will incur additional costs.
- * A self-deployed connector service if [search connectors](elasticsearch://reference/search-connectors/index.md) are used to populate external data into the knowledge base.
+ * A self-deployed connector service if [content connectors](elasticsearch://reference/search-connectors/index.md) are used to populate external data into the knowledge base.
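Reviewer context, not part of this diff: the "self-deployed connector service" in the bullet above is normally run from the Elastic-provided Docker image. A rough sketch of the run command, where the image name, tag, and config paths are assumptions taken from memory of the connectors docs and should be verified against the current reference before use:

```sh
# Run a self-managed connector service (illustrative; verify image/tag and paths).
# ~/connectors-config/config.yml holds the Elasticsearch endpoint, API key,
# and connector IDs that the service polls for sync jobs.
docker run \
  -v ~/connectors-config:/config \
  --tty --rm \
  docker.elastic.co/integrations/elastic-connectors:9.0.0 \
  /app/bin/elastic-ingest -c /config/config.yml
```

The service runs outside the deployment and connects to {{es}} over HTTPS, which is why it must be self-deployed even on {{ecloud}}.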

  ## Your data and the AI Assistant [data-information]
@@ -107,7 +107,7 @@ The AI Assistant uses [ELSER](/explore-analyze/machine-learning/nlp/ml-nlp-elser
  Add data to the knowledge base with one or more of the following methods:

  * [Use the knowledge base UI](#obs-ai-kb-ui) available at the [AI Assistant Settings](#obs-ai-settings) page.
  You can also add information to the knowledge base by asking the AI Assistant to remember something while chatting (for example, "remember this for next time"). The assistant will create a summary of the information and add it to the knowledge base.
@@ -131,9 +131,9 @@ To add external data to the knowledge base in {{kib}}:
  }
  ```

- ### Use search connectors [obs-ai-search-connectors]
+ ### Use content connectors [obs-ai-search-connectors]

- [Search connectors](elasticsearch://reference/search-connectors/index.md) index content from external sources like GitHub, Confluence, Google Drive, Jira, S3, Teams, and Slack to improve the AI Assistant's responses.
+ [Content connectors](elasticsearch://reference/search-connectors/index.md) index content from external sources like GitHub, Confluence, Google Drive, Jira, S3, Teams, and Slack to improve the AI Assistant's responses.

  #### Requirements and limitations
139
@@ -190,7 +190,7 @@ This is a more complex method that requires you to set up the ELSER model and in
190
190
191
191
To create the embeddings needed by the AI Assistant (weights and tokens into a sparse vector field) using an **ML Inference Pipeline**:
192
192
193
-
1. Open the previously created search connector in **Content / Connectors**, and select the **Pipelines** tab.
193
+
1. Open the previously created content connector in **Content / Connectors**, and select the **Pipelines** tab.
194
194
2. Select **Copy and customize** under `Unlock your custom pipelines`.
195
195
3. Select **Add Inference Pipeline** under `Machine Learning Inference Pipelines`.
196
196
4. Select the **ELSER (Elastic Learned Sparse EncodeR)** ML model to add the necessary embeddings to the data.
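Reviewer context, not part of this diff: the four UI steps above amount to creating an ingest pipeline with an `inference` processor that runs ELSER over a text field. A minimal sketch of such a pipeline via the {{es}} API, where the pipeline name and the field names (`body`, `ml.inference.body_expanded`) are illustrative assumptions and `.elser_model_2` is the ELSER v2 model id:

```console
PUT _ingest/pipeline/my-connector-index@ml-inference
{
  "processors": [
    {
      "inference": {
        "model_id": ".elser_model_2",
        "input_output": [
          {
            "input_field": "body",
            "output_field": "ml.inference.body_expanded"
          }
        ]
      }
    }
  ]
}
```

For the resulting embeddings to be searchable, the `output_field` must be mapped as `sparse_vector` in the target index.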
@@ -397,7 +397,7 @@ The AI Assistant Settings page contains the following tabs:

  * **Settings**: Configures the main AI Assistant settings, which are explained directly within the interface.
  * **Knowledge base**: Manages [knowledge base entries](#obs-ai-kb-ui).
- * **Search Connectors**: Provides a link to {{kib}} **Search** → **Content** → **Connectors** UI for connectors configuration.
+ * **Content connectors**: Provides a link to {{kib}} **Search** → **Content** → **Connectors** UI for connectors configuration.
solutions/search/ingest-for-search.md (+1 -1)
@@ -43,7 +43,7 @@ You can use these specialized tools to add general content to {{es}} indices.
  | Method | Description | Notes |
  |--------|-------------|-------|
  | [**Web crawler**](https://github.com/elastic/crawler) | Programmatically discover and index content from websites and knowledge bases | Crawl public-facing web content or internal sites accessible via HTTP proxy |
- | [**Search connectors**](https://github.com/elastic/connectors) | Third-party integrations to popular content sources like databases, cloud storage, and business applications | Choose from a range of Elastic-built connectors or build your own in Python using the Elastic connector framework |
+ | [**Content connectors**](https://github.com/elastic/connectors) | Third-party integrations to popular content sources like databases, cloud storage, and business applications | Choose from a range of Elastic-built connectors or build your own in Python using the Elastic connector framework |
  | [**File upload**](/manage-data/ingest/upload-data-files.md) | One-off manual uploads through the UI | Useful for testing or very small-scale use cases, but not recommended for production workflows |
solutions/search/search-pipelines.md (+1 -1)
@@ -41,7 +41,7 @@ These tools can be particularly helpful by providing a layer of customization an

  It can be a lot of work to set up and manage production-ready pipelines from scratch. Considerations such as error handling, conditional execution, sequencing, versioning, and modularization must all be taken into account.

- To this end, when you create indices for search use cases (including web crawler, search connectors and API indices), each index already has a pipeline set up with several processors that optimize your content for search.
+ To this end, when you create indices for search use cases (including web crawler, content connectors and API indices), each index already has a pipeline set up with several processors that optimize your content for search.

  This pipeline is called `search-default-ingestion`. While it is a "managed" pipeline (meaning it should not be tampered with), you can view its details via the Kibana UI or the Elasticsearch API. You can also [read more about its contents below](#ingest-pipeline-search-details-generic-reference).
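Reviewer context, not part of this diff: the managed pipeline mentioned in the paragraph above can be inspected (read-only) with the get-pipeline API:

```console
GET _ingest/pipeline/search-default-ingestion
```

The same definition is visible in Kibana under **Stack Management > Ingest Pipelines**; because the pipeline is managed, changes should go into a custom pipeline rather than edits to this one.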