diff --git a/deploy-manage/manage-connectors.md b/deploy-manage/manage-connectors.md
index 293af88aa1..0f9c6d7946 100644
--- a/deploy-manage/manage-connectors.md
+++ b/deploy-manage/manage-connectors.md
@@ -15,7 +15,7 @@ products:
 Connectors serve as a central place to store connection information for both Elastic and third-party systems. They enable the linking of actions to rules, which execute as background tasks on the {{kib}} server when rule conditions are met. This allows rules to route actions to various destinations such as log files, ticketing systems, and messaging tools. Different {{kib}} apps may have their own rule types, but they typically share connectors. The **{{stack-manage-app}} > {{connectors-ui}}** provides a central location to view and manage all connectors in the current space.

 ::::{note}
-This page is about {{kib}} connectors that integrate with services like generative AI model providers. If you’re looking for Search connectors that synchronize third-party data into {{es}}, refer to [Connector clients](elasticsearch://reference/search-connectors/index.md).
+This page is about {{kib}} connectors that integrate with services like generative AI model providers. If you’re looking for content connectors that synchronize third-party data into {{es}}, refer to [Connector clients](elasticsearch://reference/search-connectors/index.md).
 ::::

diff --git a/reference/ingestion-tools/index.md b/reference/ingestion-tools/index.md
index 46fcdf3e93..f410240a62 100644
--- a/reference/ingestion-tools/index.md
+++ b/reference/ingestion-tools/index.md
@@ -10,7 +10,7 @@ This section contains reference information for ingestion tools, including:
 * Logstash Plugins
 * Logstash Versioned Plugin Reference
 * Elastic Serverless forwarder for AWS
-* Search connectors
+* Content connectors
 * ES for Apache Hadoop
 * Elastic in integrations

diff --git a/solutions/observability/observability-ai-assistant.md b/solutions/observability/observability-ai-assistant.md
index 492cd7e385..a8662a5bdb 100644
--- a/solutions/observability/observability-ai-assistant.md
+++ b/solutions/observability/observability-ai-assistant.md
@@ -51,7 +51,7 @@ The AI assistant requires the following:
 * The knowledge base requires a 4 GB {{ml}} node.
   - In {{ecloud}} or {{ece}}, if you have Machine Learning autoscaling enabled, Machine Learning nodes will be started when using the knowledge base and AI Assistant. Therefore using these features will incur additional costs.
-* A self-deployed connector service if [search connectors](elasticsearch://reference/search-connectors/index.md) are used to populate external data into the knowledge base.
+* A self-deployed connector service if [content connectors](elasticsearch://reference/search-connectors/index.md) are used to populate external data into the knowledge base.

 ## Your data and the AI Assistant [data-information]

@@ -107,7 +107,7 @@ The AI Assistant uses [ELSER](/explore-analyze/machine-learning/nlp/ml-nlp-elser
 Add data to the knowledge base with one or more of the following methods:

 * [Use the knowledge base UI](#obs-ai-kb-ui) available at [AI Assistant Settings](#obs-ai-settings) page.
-* [Use search connectors](#obs-ai-search-connectors)
+* [Use content connectors](#obs-ai-search-connectors)

 You can also add information to the knowledge base by asking the AI Assistant to remember something while chatting (for example, "remember this for next time"). The assistant will create a summary of the information and add it to the knowledge base.
@@ -131,9 +131,9 @@ To add external data to the knowledge base in {{kib}}:
    }
    ```

-### Use search connectors [obs-ai-search-connectors]
+### Use content connectors [obs-ai-search-connectors]

-[Search connectors](elasticsearch://reference/search-connectors/index.md) index content from external sources like GitHub, Confluence, Google Drive, Jira, S3, Teams, and Slack to improve the AI Assistant's responses.
+[Content connectors](elasticsearch://reference/search-connectors/index.md) index content from external sources like GitHub, Confluence, Google Drive, Jira, S3, Teams, and Slack to improve the AI Assistant's responses.

 #### Requirements and limitations

@@ -190,7 +190,7 @@ This is a more complex method that requires you to set up the ELSER model and in
 To create the embeddings needed by the AI Assistant (weights and tokens into a sparse vector field) using an **ML Inference Pipeline**:

-1. Open the previously created search connector in **Content / Connectors**, and select the **Pipelines** tab.
+1. Open the previously created content connector in **Content / Connectors**, and select the **Pipelines** tab.
 2. Select **Copy and customize** under `Unlock your custom pipelines`.
 3. Select **Add Inference Pipeline** under `Machine Learning Inference Pipelines`.
 4. Select the **ELSER (Elastic Learned Sparse EncodeR)** ML model to add the necessary embeddings to the data.
@@ -397,7 +397,7 @@ The AI Assistant Settings page contains the following tabs:
 * **Settings**: Configures the main AI Assistant settings, which are explained directly within the interface.
 * **Knowledge base**: Manages [knowledge base entries](#obs-ai-kb-ui).
-* **Search Connectors**: Provides a link to {{kib}} **Search** → **Content** → **Connectors** UI for connectors configuration.
+* **Content connectors**: Provides a link to the {{kib}} **Search** → **Content** → **Connectors** UI for connector configuration.

 ### Add Elastic documentation [obs-ai-product-documentation]

diff --git a/solutions/search/ingest-for-search.md b/solutions/search/ingest-for-search.md
index 81bb476227..a857076802 100644
--- a/solutions/search/ingest-for-search.md
+++ b/solutions/search/ingest-for-search.md
@@ -43,7 +43,7 @@ You can use these specialized tools to add general content to {{es}} indices.
 | Method | Description | Notes |
 |--------|-------------|-------|
 | [**Web crawler**](https://github.com/elastic/crawler) | Programmatically discover and index content from websites and knowledge bases | Crawl public-facing web content or internal sites accessible via HTTP proxy |
-| [**Search connectors**](https://github.com/elastic/connectors) | Third-party integrations to popular content sources like databases, cloud storage, and business applications | Choose from a range of Elastic-built connectors or build your own in Python using the Elastic connector framework|
+| [**Content connectors**](https://github.com/elastic/connectors) | Third-party integrations to popular content sources like databases, cloud storage, and business applications | Choose from a range of Elastic-built connectors or build your own in Python using the Elastic connector framework |
 | [**File upload**](/manage-data/ingest/upload-data-files.md)| One-off manual uploads through the UI | Useful for testing or very small-scale use cases, but not recommended for production workflows |

 ### Process data at ingest time

diff --git a/solutions/search/search-pipelines.md b/solutions/search/search-pipelines.md
index 0ea177e227..df69a979de 100644
--- a/solutions/search/search-pipelines.md
+++ b/solutions/search/search-pipelines.md
@@ -41,7 +41,7 @@ These tools can be particularly helpful by providing a layer of customization an
 It can be a lot of work to set up and manage production-ready pipelines from scratch. Considerations such as error handling, conditional execution, sequencing, versioning, and modularization must all be taken into account.

-To this end, when you create indices for search use cases, (including web crawler, search connectors and API indices), each index already has a pipeline set up with several processors that optimize your content for search.
+To this end, when you create indices for search use cases (including web crawler, content connectors, and API indices), each index already has a pipeline set up with several processors that optimize your content for search.

 This pipeline is called `search-default-ingestion`. While it is a "managed" pipeline (meaning it should not be tampered with), you can view its details via the Kibana UI or the Elasticsearch API. You can also [read more about its contents below](#ingest-pipeline-search-details-generic-reference).
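As a quick illustration of the last hunk's claim that `search-default-ingestion` can be viewed via the Elasticsearch API: the managed pipeline can be retrieved read-only with the get pipeline API, for example from the Kibana Dev Tools console:

```console
GET _ingest/pipeline/search-default-ingestion
```

The response lists the pipeline's processors; managed pipelines typically also carry a `_meta` marker indicating they are managed, which is why customizations belong in a custom pipeline rather than in this one.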
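Similarly, the ML inference pipeline that the observability-ai-assistant.md steps build through the UI can be sketched directly against the ingest API. This is a minimal, hypothetical example rather than the exact pipeline the UI generates: the pipeline name and field names are placeholders, and it assumes the ELSER v2 model (`.elser_model_2`) is already deployed on an {{ml}} node:

```console
PUT _ingest/pipeline/my-elser-embeddings
{
  "description": "Hypothetical example: adds ELSER sparse-vector embeddings (weights and tokens) to each document",
  "processors": [
    {
      "inference": {
        "model_id": ".elser_model_2",
        "input_output": [
          {
            "input_field": "body",
            "output_field": "ml.inference.body_expanded"
          }
        ]
      }
    }
  ]
}
```

Documents indexed through such a pipeline gain a sparse-vector representation of the input field, which is the form of embedding the AI Assistant's knowledge base retrieval relies on.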