
Commit a5340bb

Renaming search connectors to content connectors (#1974)
This PR renames search connectors to content connectors in the elasticsearch repository, based on #1165. Connected item in the elasticsearch repo: elastic/elasticsearch#130309
1 parent 94aaa1f commit a5340bb

File tree

5 files changed: +10 additions, −10 deletions


deploy-manage/manage-connectors.md (1 addition, 1 deletion)

@@ -15,7 +15,7 @@ products:
 Connectors serve as a central place to store connection information for both Elastic and third-party systems. They enable the linking of actions to rules, which execute as background tasks on the {{kib}} server when rule conditions are met. This allows rules to route actions to various destinations such as log files, ticketing systems, and messaging tools. Different {{kib}} apps may have their own rule types, but they typically share connectors. The **{{stack-manage-app}} > {{connectors-ui}}** provides a central location to view and manage all connectors in the current space.
 
 ::::{note}
-This page is about {{kib}} connectors that integrate with services like generative AI model providers. If you’re looking for Search connectors that synchronize third-party data into {{es}}, refer to [Connector clients](elasticsearch://reference/search-connectors/index.md).
+This page is about {{kib}} connectors that integrate with services like generative AI model providers. If you’re looking for content connectors that synchronize third-party data into {{es}}, refer to [Connector clients](elasticsearch://reference/search-connectors/index.md).
 
 ::::
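The context paragraph above describes {{kib}} connectors that route rule actions to external destinations. As a rough sketch of what that looks like programmatically (not part of this commit), the following builds the request a client would send to Kibana's `POST /api/actions/connector` endpoint; the Kibana URL, connector name, and destination index are illustrative assumptions.

```python
import json

# Sketch: assemble a create-connector request for Kibana's Connectors API.
# The endpoint path and required "kbn-xsrf" header come from Kibana's public
# API; KIBANA_URL and all payload values below are made-up examples.
KIBANA_URL = "https://localhost:5601"  # assumed local Kibana instance


def build_create_connector_request(name, connector_type_id, config, secrets=None):
    """Return the URL, headers, and JSON body for POST /api/actions/connector."""
    return {
        "url": f"{KIBANA_URL}/api/actions/connector",
        "headers": {"kbn-xsrf": "true", "Content-Type": "application/json"},
        "body": json.dumps({
            "name": name,
            "connector_type_id": connector_type_id,
            "config": config,
            "secrets": secrets or {},
        }),
    }


req = build_create_connector_request(
    name="audit-log-index",        # hypothetical connector name
    connector_type_id=".index",    # built-in index connector type
    config={"index": "audit-logs"},  # hypothetical destination index
)
print(req["url"])
```

Sending this payload (with authentication) would create the connector in the current space, after which it can be attached to rules as an action.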

reference/ingestion-tools/index.md (1 addition, 1 deletion)

@@ -10,7 +10,7 @@ This section contains reference information for ingestion tools, including:
 * Logstash Plugins
 * Logstash Versioned Plugin Reference
 * Elastic Serverless forwarder for AWS
-* Search connectors
+* Content connectors
 * ES for Apache Hadoop
 * Elastic in integrations

solutions/observability/observability-ai-assistant.md (6 additions, 6 deletions)
Original file line numberDiff line numberDiff line change
@@ -51,7 +51,7 @@ The AI assistant requires the following:
5151
* The knowledge base requires a 4 GB {{ml}} node.
5252
- In {{ecloud}} or {{ece}}, if you have Machine Learning autoscaling enabled, Machine Learning nodes will be started when using the knowledge base and AI Assistant. Therefore using these features will incur additional costs.
5353

54-
* A self-deployed connector service if [search connectors](elasticsearch://reference/search-connectors/index.md) are used to populate external data into the knowledge base.
54+
* A self-deployed connector service if [content connectors](elasticsearch://reference/search-connectors/index.md) are used to populate external data into the knowledge base.
5555

5656
## Your data and the AI Assistant [data-information]
5757

@@ -107,7 +107,7 @@ The AI Assistant uses [ELSER](/explore-analyze/machine-learning/nlp/ml-nlp-elser
 Add data to the knowledge base with one or more of the following methods:
 
 * [Use the knowledge base UI](#obs-ai-kb-ui) available at [AI Assistant Settings](#obs-ai-settings) page.
-* [Use search connectors](#obs-ai-search-connectors)
+* [Use content connectors](#obs-ai-search-connectors)
 
 You can also add information to the knowledge base by asking the AI Assistant to remember something while chatting (for example, "remember this for next time"). The assistant will create a summary of the information and add it to the knowledge base.

@@ -131,9 +131,9 @@ To add external data to the knowledge base in {{kib}}:
 }
 ```
 
-### Use search connectors [obs-ai-search-connectors]
+### Use content connectors [obs-ai-search-connectors]
 
-[Search connectors](elasticsearch://reference/search-connectors/index.md) index content from external sources like GitHub, Confluence, Google Drive, Jira, S3, Teams, and Slack to improve the AI Assistant's responses.
+[Content connectors](elasticsearch://reference/search-connectors/index.md) index content from external sources like GitHub, Confluence, Google Drive, Jira, S3, Teams, and Slack to improve the AI Assistant's responses.
 
 #### Requirements and limitations

@@ -190,7 +190,7 @@ This is a more complex method that requires you to set up the ELSER model and in
 
 To create the embeddings needed by the AI Assistant (weights and tokens into a sparse vector field) using an **ML Inference Pipeline**:
 
-1. Open the previously created search connector in **Content / Connectors**, and select the **Pipelines** tab.
+1. Open the previously created content connector in **Content / Connectors**, and select the **Pipelines** tab.
 2. Select **Copy and customize** under `Unlock your custom pipelines`.
 3. Select **Add Inference Pipeline** under `Machine Learning Inference Pipelines`.
 4. Select the **ELSER (Elastic Learned Sparse EncodeR)** ML model to add the necessary embeddings to the data.
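The UI steps in the hunk above end with an ingest pipeline that runs ELSER over connector documents. As a rough sketch of the pipeline body those steps produce (the model ID, pipeline description, and field names here are illustrative assumptions, not values taken from the docs):

```python
import json

# Sketch of an ingest pipeline with an ELSER inference processor: it reads a
# text field and writes the resulting sparse-vector tokens to a target field.
# ".elser_model_2" is an assumed deployed ELSER model ID; the field names
# "body_content" and "ml.tokens" are hypothetical.
ELSER_MODEL_ID = ".elser_model_2"


def build_elser_pipeline(source_field, target_field):
    """Return a pipeline definition suitable for PUT _ingest/pipeline/<name>."""
    return {
        "description": "Add ELSER embeddings for the AI Assistant knowledge base",
        "processors": [
            {
                "inference": {
                    "model_id": ELSER_MODEL_ID,
                    "input_output": [
                        {"input_field": source_field, "output_field": target_field}
                    ],
                }
            }
        ],
    }


pipeline = build_elser_pipeline("body_content", "ml.tokens")
print(json.dumps(pipeline, indent=2))
```

PUTting this definition to `_ingest/pipeline/<name>` and attaching it to the connector's index would have the same effect as the "Add Inference Pipeline" UI flow, under the assumptions above.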
@@ -397,7 +397,7 @@ The AI Assistant Settings page contains the following tabs:
 
 * **Settings**: Configures the main AI Assistant settings, which are explained directly within the interface.
 * **Knowledge base**: Manages [knowledge base entries](#obs-ai-kb-ui).
-* **Search Connectors**: Provides a link to the {{kib}} **Search** > **Content** > **Connectors** UI for connectors configuration.
+* **Content connectors**: Provides a link to the {{kib}} **Search** > **Content** > **Connectors** UI for connectors configuration.
 
 ### Add Elastic documentation [obs-ai-product-documentation]

solutions/search/ingest-for-search.md (1 addition, 1 deletion)

@@ -43,7 +43,7 @@ You can use these specialized tools to add general content to {{es}} indices.
 | Method | Description | Notes |
 |--------|-------------|-------|
 | [**Web crawler**](https://github.com/elastic/crawler) | Programmatically discover and index content from websites and knowledge bases | Crawl public-facing web content or internal sites accessible via HTTP proxy |
-| [**Search connectors**](https://github.com/elastic/connectors) | Third-party integrations to popular content sources like databases, cloud storage, and business applications | Choose from a range of Elastic-built connectors or build your own in Python using the Elastic connector framework |
+| [**Content connectors**](https://github.com/elastic/connectors) | Third-party integrations to popular content sources like databases, cloud storage, and business applications | Choose from a range of Elastic-built connectors or build your own in Python using the Elastic connector framework |
 | [**File upload**](/manage-data/ingest/upload-data-files.md) | One-off manual uploads through the UI | Useful for testing or very small-scale use cases, but not recommended for production workflows |
 
 ### Process data at ingest time

solutions/search/search-pipelines.md (1 addition, 1 deletion)

@@ -41,7 +41,7 @@ These tools can be particularly helpful by providing a layer of customization an
 
 It can be a lot of work to set up and manage production-ready pipelines from scratch. Considerations such as error handling, conditional execution, sequencing, versioning, and modularization must all be taken into account.
 
-To this end, when you create indices for search use cases (including web crawler, search connector, and API indices), each index already has a pipeline set up with several processors that optimize your content for search.
+To this end, when you create indices for search use cases (including web crawler, content connector, and API indices), each index already has a pipeline set up with several processors that optimize your content for search.
 
 This pipeline is called `search-default-ingestion`. While it is a "managed" pipeline (meaning it should not be tampered with), you can view its details via the Kibana UI or the Elasticsearch API. You can also [read more about its contents below](#ingest-pipeline-search-details-generic-reference).
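Since `search-default-ingestion` is managed, the safe way to work with it is read-only inspection. A minimal sketch of building that request against the Elasticsearch ingest pipeline API (the pipeline name is from the docs above; the host is an assumption):

```python
# Sketch: inspect the managed pipeline over the Elasticsearch REST API rather
# than editing it. ES_URL is an assumed local cluster address.
ES_URL = "https://localhost:9200"


def pipeline_inspect_url(pipeline_name):
    """URL for GET _ingest/pipeline/<name>, which returns the pipeline's
    description and processor list."""
    return f"{ES_URL}/_ingest/pipeline/{pipeline_name}"


url = pipeline_inspect_url("search-default-ingestion")
print(url)
```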
