diff --git a/solutions/_snippets/connect-local-llm-to-playground.md b/solutions/_snippets/connect-local-llm-to-playground.md index 26fb0c694c..d7f91a2c06 100644 --- a/solutions/_snippets/connect-local-llm-to-playground.md +++ b/solutions/_snippets/connect-local-llm-to-playground.md @@ -1,6 +1,6 @@ Create a connector using the public URL from ngrok. -1. In Kibana, go to **Search > Playground**, and click **Connect to an LLM**. +1. In Kibana, go to **Playground** from the left navigation menu, and select the wrench button (🔧) in the **Large Language Model (LLM)** tile to connect an LLM. 2. Select **OpenAI** on the fly-out. 3. Provide a name for the connector. 4. Under **Connector settings**, select **Other (OpenAI Compatible Service)** as the OpenAI provider. diff --git a/solutions/images/elasticsearch-openai-compatible-connector.png b/solutions/images/elasticsearch-openai-compatible-connector.png index 6975f7aab8..c03adaa88e 100644 Binary files a/solutions/images/elasticsearch-openai-compatible-connector.png and b/solutions/images/elasticsearch-openai-compatible-connector.png differ diff --git a/solutions/images/elasticsearch-query-rules-ui-home.png b/solutions/images/elasticsearch-query-rules-ui-home.png index e4e7d74f23..7926e6cb33 100644 Binary files a/solutions/images/elasticsearch-query-rules-ui-home.png and b/solutions/images/elasticsearch-query-rules-ui-home.png differ diff --git a/solutions/images/elasticsearch-reference-ingest-pipeline-ent-search-ui.png b/solutions/images/elasticsearch-reference-ingest-pipeline-ent-search-ui.png index 1698ff03a8..118c0bf5cd 100644 Binary files a/solutions/images/elasticsearch-reference-ingest-pipeline-ent-search-ui.png and b/solutions/images/elasticsearch-reference-ingest-pipeline-ent-search-ui.png differ diff --git a/solutions/images/kibana-api-keys-search-bar.png b/solutions/images/kibana-api-keys-search-bar.png index 0200d86b44..b5be75c75c 100644 Binary files a/solutions/images/kibana-api-keys-search-bar.png and 
b/solutions/images/kibana-api-keys-search-bar.png differ diff --git a/solutions/images/kibana-chat-interface.png b/solutions/images/kibana-chat-interface.png index 8af0c23178..af96727505 100644 Binary files a/solutions/images/kibana-chat-interface.png and b/solutions/images/kibana-chat-interface.png differ diff --git a/solutions/images/kibana-get-started.png b/solutions/images/kibana-get-started.png index c51417af43..36e576e753 100644 Binary files a/solutions/images/kibana-get-started.png and b/solutions/images/kibana-get-started.png differ diff --git a/solutions/images/kibana-manage-deployment.png b/solutions/images/kibana-manage-deployment.png index 179eda6f3b..aca4ed3478 100644 Binary files a/solutions/images/kibana-manage-deployment.png and b/solutions/images/kibana-manage-deployment.png differ diff --git a/solutions/images/kibana-query-interface.png b/solutions/images/kibana-query-interface.png index 496dfd5733..26cfac20cb 100644 Binary files a/solutions/images/kibana-query-interface.png and b/solutions/images/kibana-query-interface.png differ diff --git a/solutions/images/kibana-serverless-create-an-api-key.png b/solutions/images/kibana-serverless-create-an-api-key.png index dd8bf816f1..d4f077e919 100644 Binary files a/solutions/images/kibana-serverless-create-an-api-key.png and b/solutions/images/kibana-serverless-create-an-api-key.png differ diff --git a/solutions/images/kibana-view-python-code.png b/solutions/images/kibana-view-python-code.png new file mode 100644 index 0000000000..db262edc51 Binary files /dev/null and b/solutions/images/kibana-view-python-code.png differ diff --git a/solutions/images/search-quickstart-install-python-client.png b/solutions/images/search-quickstart-install-python-client.png new file mode 100644 index 0000000000..6f7ff555f5 Binary files /dev/null and b/solutions/images/search-quickstart-install-python-client.png differ diff --git a/solutions/images/search-quickstart-view-data-python-keywordsearch.png 
b/solutions/images/search-quickstart-view-data-python-keywordsearch.png new file mode 100644 index 0000000000..979b93bfe9 Binary files /dev/null and b/solutions/images/search-quickstart-view-data-python-keywordsearch.png differ diff --git a/solutions/images/serverless-discover-semantic.png b/solutions/images/serverless-discover-semantic.png index e243f6476a..495a583417 100644 Binary files a/solutions/images/serverless-discover-semantic.png and b/solutions/images/serverless-discover-semantic.png differ diff --git a/solutions/search/full-text/search-with-synonyms.md b/solutions/search/full-text/search-with-synonyms.md index 0261a3e076..dff0b98e18 100644 --- a/solutions/search/full-text/search-with-synonyms.md +++ b/solutions/search/full-text/search-with-synonyms.md @@ -96,13 +96,13 @@ You can create and manage synonym sets and synonym rules using the {{kib}} user To create a synonym set using the UI: -1. Navigate to **Elasticsearch** > **Synonyms** or use the [global search field](/explore-analyze/query-filter/filtering.md#_finding_your_apps_and_objects). -2. Click **Get started**. +1. Use the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md) to find Synonyms, then select **Synonyms / Synonyms** from the results. +2. Select **Get started**. 3. Enter a name for your synonym set. 4. Add your synonym rules in the editor by adding terms to match against: - Add **Equivalent rules** by adding multiple equivalent terms. For example: `ipod, i-pod, i pod` - Add **Explicit rules** by adding multiple terms that map to a single term. For example: `i-pod, i pod => ipod` -5. Click **Save** to save your rules. +5. Select **Save** to save your rules. The UI supports the same synonym rule formats as the file-based approach. Changes made through the UI will automatically reload the associated analyzers. @@ -123,7 +123,7 @@ You can store your synonyms set in a file. 
Make sure you upload a synonyms set file for all your cluster nodes, to the configuration directory for your {{es}} distribution. If you're using {{ech}}, you can upload synonyms files using [custom bundles](../../../deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md). -An example synonyms file: +An example of a synonyms file: ```markdown # Blank lines and lines starting with pound are comments. diff --git a/solutions/search/get-started/keyword-search-python.md b/solutions/search/get-started/keyword-search-python.md index 2e5cb9c7cf..2e099e2fb2 100644 --- a/solutions/search/get-started/keyword-search-python.md +++ b/solutions/search/get-started/keyword-search-python.md @@ -36,10 +36,12 @@ To learn about role-based access control, go to [](/deploy-manage/users-roles/cl ::::{step} Create an index An index is a collection of documents uniquely identified by a name or an alias. -To create an index, go to **{{es}} > Home**, select keyword search, and follow the guided workflow. +To create an index: +1. Go to **Index Management** using the navigation menu or the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). +2. Select **Create index**, select **Keyword Search**, and follow the guided workflow. To enable your client to talk to your project, you must also create an API key. -Click **Create API Key** and use the default values, which are sufficient for this quickstart. +Select **Create an API Key** and use the default values, which are sufficient for this quickstart. :::{tip} For more information about indices and API keys, go to [](/manage-data/data-store/index-basics.md) and [](/deploy-manage/api-keys/serverless-project-api-keys.md). @@ -50,7 +52,7 @@ For more information about indices and API keys, go to [](/manage-data/data-stor Select your preferred language in the keyword search workflow. For this quickstart, use Python. 
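The quickstart that follows installs the Python client and bulk-indexes documents. As a rough sketch of the shape of a bulk request (the `books` index name and the documents here are illustrative, not the quickstart's data set):

```python
# Each bulk action targets an index and carries a document source.
docs = [
    {"name": "Snow Crash", "author": "Neal Stephenson"},
    {"name": "The Hobbit", "author": "J. R. R. Tolkien"},
]
actions = [{"_index": "books", "_source": doc} for doc in docs]

# With the client installed and a deployment available, the commented lines
# below would send all actions in one request (endpoint and key are placeholders):
# from elasticsearch import Elasticsearch, helpers
# client = Elasticsearch("https://<your-endpoint>", api_key="<your-api-key>")
# helpers.bulk(client, actions)

print(len(actions))  # → 2
```

The bulk helper batches the actions into a single request, which is considerably faster than indexing documents one at a time.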
-![Client installation step in the keyword search workflow](https://images.contentstack.io/v3/assets/bltefdd0b53724fa2ce/bltbf810f73fd4082fb/67c21c06304ea9790b82ee4d/screenshot-my-index.png) +![Client installation step in the keyword search workflow](/solutions/images/search-quickstart-install-python-client.png) The {{es}} client library is a Python package that is installed with `pip`: @@ -132,7 +134,7 @@ For more details about bulk helpers, refer to [Client helpers](elasticsearch-p You should now be able to see the documents in the guided workflow: -![Viewing data in the guided workflow](https://images.contentstack.io/v3/assets/bltefdd0b53724fa2ce/blt0ac36402cde2a645/67d0a443b8764e72b9e8e1f3/view_docs_in_elasticsearch.png) +![Viewing data in the guided workflow](/solutions/images/search-quickstart-view-data-python-keywordsearch.png) Optionally open [Discover](/explore-analyze/discover.md) from the navigation menu or the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md) to familiarize yourself with this data set. diff --git a/solutions/search/get-started/semantic-search.md b/solutions/search/get-started/semantic-search.md index 9bddbb5074..dcf90f7555 100644 --- a/solutions/search/get-started/semantic-search.md +++ b/solutions/search/get-started/semantic-search.md @@ -42,8 +42,9 @@ This guide uses the [semantic text field type](elasticsearch://reference/elastic An index is a collection of documents uniquely identified by a name or an alias. You can follow the guided index workflow: -- If you're using {{es-serverless}}, go to **{{es}} > Home**, select the semantic search workflow, and click **Create a semantic optimized index**. -- If you're using {{ech}} or running {{es}} locally, go to **{{es}} > Home** and click **Create API index**. Select the semantic search workflow. +- If you're using {{es-serverless}}, {{ech}}, or running {{es}} locally: + 1. 
Go to **Index Management** in the management page using the navigation menu or the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). + 2. Select **Create index**, select **Semantic Search**, and follow the guided workflow. When you complete the workflow, you will have sample data and can skip to the steps related to exploring and searching it. Alternatively, run the following API request in [Console](/explore-analyze/query-filter/tools/console.md): diff --git a/solutions/search/query-rules-ui.md b/solutions/search/query-rules-ui.md index a16683e1da..2502f98c5c 100644 --- a/solutions/search/query-rules-ui.md +++ b/solutions/search/query-rules-ui.md @@ -38,7 +38,7 @@ For full access to the Query Rules UI, you need the following privileges: ## Accessing the Query Rules UI -Go to your deployment and select **Query Rules** from the left navigation menu under **Relevance**. If the option does not appear, contact your administrator to review your privileges. +Go to **Query Rules** in the management page using the navigation menu or the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). If the option does not appear, contact your administrator to review your privileges. :::{image} /solutions/images/elasticsearch-query-rules-ui-home.png :alt: Landing page for Query Rules UI. diff --git a/solutions/search/rag/playground-context.md b/solutions/search/rag/playground-context.md index 121f5ddb10..6c94440e6a 100644 --- a/solutions/search/rag/playground-context.md +++ b/solutions/search/rag/playground-context.md @@ -28,7 +28,7 @@ Currently you can only select **one field** to be provided as context to the LLM ## Edit context in UI [playground-context-ui] -Use the **Edit context** button in the Playground UI to adjust the number of documents and fields sent to the LLM. +Use the **Playground context** section in the Playground UI to adjust the number of documents and fields sent to the LLM. 
If you’re hitting context length limits, try the following: @@ -56,9 +56,9 @@ Refer to the following Python notebooks for examples of how to chunk your docume * [Website content](https://github.com/elastic/elasticsearch-labs/tree/main/notebooks/ingestion-and-chunking/website-chunking-ingest.ipynb) -### Balancing cost/latency and result quality [playground-context-balance] +### Optimizing context for cost and performance [playground-context-balance] -Here are some general recommendations for balancing cost/latency and result quality with different context sizes: +The following recommendations can help you balance cost, latency, and result quality when working with different context sizes: Optimize context length : Determine the optimal context length through empirical testing. Start with a baseline and adjust incrementally to find a balance that optimizes both response quality and system performance. @@ -69,7 +69,6 @@ Implement token pruning for ELSER model * [Optimizing retrieval with ELSER v2](https://www.elastic.co/search-labs/blog/introducing-elser-v2-part-2) * [Improving text expansion performance using token pruning](https://www.elastic.co/search-labs/blog/text-expansion-pruning) - Monitor and adjust : Continuously monitor the effects of context size changes on performance and adjust as necessary. diff --git a/solutions/search/rag/playground-query.md b/solutions/search/rag/playground-query.md index f8fe10643f..cba2b39aeb 100644 --- a/solutions/search/rag/playground-query.md +++ b/solutions/search/rag/playground-query.md @@ -17,7 +17,7 @@ This functionality is in technical preview and may be changed or removed in a fu Once you’ve set up your chat interface, you can start chatting with the model. Playground will automatically generate {{es}} queries based on your questions, and retrieve the most relevant documents from your {{es}} indices. The Playground UI enables you to view and modify these queries. -* Click **View query** to open the visual query editor. 
+* Select the **Query** tab to open the visual query editor. * Modify the query by selecting fields to query per index. ::::{tip} diff --git a/solutions/search/rag/playground-troubleshooting.md b/solutions/search/rag/playground-troubleshooting.md index d2ddd15959..603252712a 100644 --- a/solutions/search/rag/playground-troubleshooting.md +++ b/solutions/search/rag/playground-troubleshooting.md @@ -23,7 +23,7 @@ Context length error : You’ll need to adjust the size of the context you’re sending to the model. Refer to [Optimize model context](playground-context.md). LLM credentials not working -: Under **Model settings**, use the wrench button (🔧) to edit your GenAI connector settings. +: Under **LLM settings**, use the wrench button (🔧) to edit your GenAI connector settings. Poor answer quality : Check the retrieved documents to see if they are valid. Adjust your {{es}} queries to improve the relevance of the documents retrieved. Refer to [View and modify queries](playground-query.md). diff --git a/solutions/search/rag/playground.md b/solutions/search/rag/playground.md index 4c7f1fa804..0f6ca9979e 100644 --- a/solutions/search/rag/playground.md +++ b/solutions/search/rag/playground.md @@ -61,7 +61,7 @@ Here’s a simplified overview of how Playground works: ## Availability and prerequisites [playground-availability-prerequisites] -For Elastic Cloud and self-managed deployments Playground is available in the **Search** space in {{kib}}, under **Content** > **Playground**. +For Elastic Cloud and self-managed deployments, select **Playground** from the left navigation menu. For Elastic Serverless, Playground is available in your {{es}} project UI. @@ -114,11 +114,11 @@ You can also use locally hosted LLMs that are compatible with the OpenAI SDK. On ### Connect to LLM provider [playground-getting-started-connect] -To get started with Playground, you need to create a [connector](../../../deploy-manage/manage-connectors.md) for your LLM provider. 
You can also connect to [locally hosted LLMs](playground.md#playground-local-llms) which are compatible with the OpenAI API, by using the OpenAI connector. +To get started with Playground, you need to create a [connector](../../../deploy-manage/manage-connectors.md) for your LLM provider. By default, an Elastic Managed LLM is connected. You can also connect to [locally hosted LLMs](playground.md#playground-local-llms) that are compatible with the OpenAI API, by using the OpenAI connector. -To connect to an LLM provider, follow these steps on the Playground landing page: +To connect to an LLM provider, use the following steps on the Playground landing page: -1. Under **Connect to an LLM**, click **Create connector**. +1. Select **New Playground**, then select the wrench button (🔧) in the **Large Language Model (LLM)** tile to connect an LLM. 2. Select your **LLM provider**. 3. **Name** your connector. 4. Select a **URL endpoint** (or use the default). @@ -170,7 +170,7 @@ We’ve also provided some Jupyter notebooks to easily ingest sample data into { Once you’ve connected to your LLM provider, it’s time to choose the data you want to search. -1. Click **Add data sources**. +1. Select **Add data sources**. 2. Select one or more {{es}} indices. 3. Click **Save and continue** to launch the chat interface. @@ -219,9 +219,9 @@ Learn more about the underlying {{es}} queries used to search your data in [View You can start chatting with your data immediately, but you might want to tweak some defaults first. -You can adjust the following under **Model settings**: +You can adjust the following under **LLM settings**: -* **Model**. The model used for generating responses. +* **AI Connector**. The model used for generating responses. * **Instructions**. Also known as the *system prompt*, these initial instructions and guidelines define the behavior of the model throughout the conversation. Be **clear and specific** for best results. * **Include citations**. 
A toggle to include citations from the relevant {{es}} documents in responses. @@ -240,12 +240,12 @@ Click **⟳ Clear chat** to clear chat history and start a new conversation. ### View and download Python code [playground-getting-started-view-code] -Use the **View code** button to see the Python code that powers the chat interface. You can integrate it into your own application, modifying as needed. We currently support two implementation options: +Use the **Export** button to see the Python code that powers the chat interface. You can integrate it into your own application, modifying as needed. We currently support two implementation options: * {{es}} Python Client + LLM provider * LangChain + LLM provider -:::{image} /solutions/images/kibana-view-code-button.png +:::{image} /solutions/images/kibana-view-python-code.png :alt: view code button :screenshot: :width: 150px diff --git a/solutions/search/search-connection-details.md b/solutions/search/search-connection-details.md index cbcdb773c5..ec0511860f 100644 --- a/solutions/search/search-connection-details.md +++ b/solutions/search/search-connection-details.md @@ -79,15 +79,16 @@ The **Cloud ID** is also available in the **Connection Details** section. Toggle ### Create an API key [create-an-api-key-serverless] 1. Go to the serverless project’s home page. -2. In the **Connect to Elasticsearch** section, select **Create API key**. +2. Select the settings icon in the **Elasticsearch endpoint** section. +3. On the **API keys** page, select **Create API key**. :::{image} /solutions/images/kibana-serverless-create-an-api-key.png :alt: serverless create an api key :screenshot: ::: -3. Enter the API key details, and select **Create API key**. -4. Copy and securely store the API key, since it won't appear again. +4. Enter the API key details, and select **Create API key**. +5. Copy and securely store the API key, since it won't appear again. 
### Test connection [elasticsearch-get-started-test-connection] diff --git a/solutions/search/search-pipelines.md b/solutions/search/search-pipelines.md index 1dafff41f4..772e17b226 100644 --- a/solutions/search/search-pipelines.md +++ b/solutions/search/search-pipelines.md @@ -13,20 +13,19 @@ products: You can manage ingest pipelines through Elasticsearch APIs or Kibana UIs. -The **Pipelines** tab under **Build > Connectors** lets you manage the ingest pipeline used by the connector’s destination index. Here you can view the managed pipeline and adjust its settings. For general pipeline authoring, use **Stack Management > Ingest Pipelines.** +The **Pipelines** tab under **Connectors** lets you manage the ingest pipeline used by the connector’s destination index. Here you can view the managed pipeline and adjust its settings. For general pipeline authoring, go to **Ingest Pipelines** in the management page using the navigation menu or the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). ## Find pipelines [ingest-pipeline-search-where] To work with ingest pipelines using these UI tools, open the **Pipelines** tab. To find this tab in the Kibana UI: - -1. Go to **Build > Connectors**. +1. Use the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md) to find Connectors, then select **Build / Connectors** from the results. 2. Select the connector you want to work with. For example, `azure-blob-storage`. -3. On the conector’s page, open the **Pipelines** tab. +3. On the connector’s page, open the **Pipelines** tab. 4. From here, you can follow the instructions to create custom pipelines, and set up ML inference pipelines. 
-The tab is highlighted in this screenshot: +The tab is highlighted in the following screenshot: :::{image} /solutions/images/elasticsearch-reference-ingest-pipeline-ent-search-ui.png :alt: ingest pipeline ent search ui @@ -68,9 +67,16 @@ Aside from the pipeline itself, you have a few configuration options which contr * **Reduce Whitespace** - This controls whether or not consecutive, leading, and trailing whitespace should be removed. This can help to display more content in some search experiences. * **Run ML Inference** - Only available on index-specific pipelines. This controls whether or not the optional `@ml-inference` pipeline will be run. Enabled by default. -For connectors, you can opt in or out per index. These settings are stored in Elasticsearch in the `.elastic-connectors` index, in the document that corresponds to the specific index. These settings can be changed there directly, or through the Kibana UI at **Build > Connectors > Available connectors > > Pipelines > Settings**. +* For connectors, you can opt in or out per index. These settings are stored in Elasticsearch in the `.elastic-connectors` index, in the document that corresponds to the specific index. You can change these settings there directly. Alternatively, you can: + 1. Use the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md) to find Connectors, then select **Build / Connectors** from the results. + 2. Choose your connector, then go to **Pipelines > Settings** to make changes. + +* You can also change the deployment-wide defaults. These settings are stored in the Elasticsearch mapping for `.elastic-connectors` in the `_meta` section. You can change these settings there directly. +Alternatively, you can: + 1. Use the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md) to find Connectors, then select **Build / Connectors** from the results. + 2. Choose your connector, then go to **Configuration** to make changes. 
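Because the per-index settings above are stored as documents in the `.elastic-connectors` index, you can also inspect them directly with a search request. A minimal sketch (the index name `search-my-index` is illustrative, and this is a system index, so treat it as read-only unless you are sure of a change):

```console
GET .elastic-connectors/_search
{
  "query": {
    "term": {
      "index_name": "search-my-index"
    }
  }
}
```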
-You can also change the deployment-wide defaults. These settings are stored in the Elasticsearch mapping for `.elastic-connectors` in the `_meta` section. These settings can be changed there directly, or from the Kibana UI at **Build > Connectors > Configuration** page. Changing the deployment-wide defaults will not impact any existing indices, but will only impact any newly created indices defaults. Those defaults will still be able to be overridden by the index-specific settings. +Changing the deployment-wide defaults will not impact any existing indices; it only changes the defaults applied to newly created indices. Index-specific settings can still override those defaults. ### Using the API [ingest-pipeline-search-pipeline-settings-using-the-api] @@ -106,7 +112,7 @@ If the pipeline is not specified, the underscore-prefixed fields will actually b ### `search-default-ingestion` Reference [ingest-pipeline-search-details-generic-reference] -You can access this pipeline with the [Elasticsearch Ingest Pipelines API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ingest-get-pipeline) or via Kibana’s [Stack Management > Ingest Pipelines](/manage-data/ingest/transform-enrich/ingest-pipelines.md#create-manage-ingest-pipelines) UI. +Access this pipeline with the [Elasticsearch Ingest Pipelines API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ingest-get-pipeline), or go to **Ingest Pipelines** in the management page using the navigation menu or the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). ::::{warning} This pipeline is a "managed" pipeline. That means that it is not intended to be edited. Editing/updating this pipeline manually could result in unintended behaviors, or difficulty in upgrading in the future. 
If you want to make customizations, we recommend you utilize index-specific pipelines (see below), specifically [the `@custom` pipeline](#ingest-pipeline-search-details-specific-custom-reference). @@ -137,7 +143,7 @@ Connectors will automatically add these control flow parameters based on the set ### Index-specific ingest pipelines [ingest-pipeline-search-details-specific] -In the Kibana UI for your index, by clicking on the **Pipelines** tab, then **Copy and customize**, you can quickly generate 3 pipelines which are specific to your index. These 3 pipelines replace `search-default-ingestion` for the index. There is nothing lost in this action, as the `` pipeline is a superset of functionality over the `search-default-ingestion` pipeline. +In the Kibana UI for your index, by selecting the **Pipelines** tab, then **Copy and customize**, you can quickly generate 3 pipelines which are specific to your index. These 3 pipelines replace `search-default-ingestion` for the index. There is nothing lost in this action, as the `` pipeline is a superset of functionality over the `search-default-ingestion` pipeline. ::::{important} The "copy and customize" button is not available at all Elastic subscription levels. Refer to the Elastic subscriptions pages for [Elastic Cloud](https://www.elastic.co/subscriptions/cloud) and [self-managed](https://www.elastic.co/subscriptions) deployments. @@ -182,7 +188,7 @@ Connectors will automatically add these control flow parameters based on the set #### `@ml-inference` Reference [ingest-pipeline-search-details-specific-ml-reference] -This pipeline is empty to start (no processors), but can be added to via the Kibana UI either through the Pipelines tab of your index, or from the **Stack Management > Ingest Pipelines** page. Unlike the `search-default-ingestion` pipeline and the `` pipeline, this pipeline is NOT "managed". 
+This pipeline is empty to start (no processors), but can be added to via the Kibana UI either through the **Pipelines** tab of your index, or by navigating to **Ingest Pipelines** in the management page using the navigation menu or the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). Unlike the `search-default-ingestion` pipeline and the `` pipeline, this pipeline is NOT "managed". It’s possible to add one or more ML inference pipelines to an index in the **Pipelines** tab. This pipeline will serve as a container for all of the ML inference pipelines configured for the index. Each ML inference pipeline added to the index is referenced within `@ml-inference` using a `pipeline` processor. @@ -201,7 +207,7 @@ The `monitor_ml` Elasticsearch cluster permission is required in order to manage #### `@custom` Reference [ingest-pipeline-search-details-specific-custom-reference] -This pipeline is empty to start (no processors), but can be added to via the Kibana UI either through the Pipelines tab of your index, or from the **Stack Management > Ingest Pipelines** page. Unlike the `search-default-ingestion` pipeline and the `` pipeline, this pipeline is NOT "managed". +This pipeline is empty to start (no processors), but can be added to via the Kibana UI either through the **Pipelines** tab of your index, or by going to **Ingest Pipelines** in the management page using the navigation menu or the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). Unlike the `search-default-ingestion` pipeline and the `` pipeline, this pipeline is NOT "managed". You are encouraged to make additions and edits to this pipeline, provided its name remains the same. This provides a convenient hook from which to add custom processing and transformations for your data. Be sure to read the [docs for ingest pipelines](/manage-data/ingest/transform-enrich/ingest-pipelines.md) to see what options are available. 
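As a hypothetical sketch of such a customization, a `set` processor added to an index's `@custom` pipeline could stamp a constant field onto every ingested document (the pipeline name `search-my-index@custom` and the field name are illustrative):

```console
PUT _ingest/pipeline/search-my-index@custom
{
  "processors": [
    {
      "set": {
        "field": "data_source",
        "value": "connector"
      }
    }
  ]
}
```

Because the `@custom` pipeline keeps its name, it continues to be called from the index's main pipeline, so documents ingested after this change pick up the new field without further configuration.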
diff --git a/solutions/search/vector/bring-own-vectors.md b/solutions/search/vector/bring-own-vectors.md index aecd6cc306..e979459140 100644 --- a/solutions/search/vector/bring-own-vectors.md +++ b/solutions/search/vector/bring-own-vectors.md @@ -132,10 +132,11 @@ POST /amazon-reviews/_search ## Next steps: implementing vector search -If you want to try a similar workflow from an {{es}} client, use the guided index workflow: +If you want to try a similar workflow from an {{es}} client, use the following guided index workflow in {{es}} Serverless, {{ech}}, or a self-managed cluster: -* If you're using {{es}} Serverless, go to **{{es}} > Home**, select the vector search workflow, and **Create a vector optimized index**. -* If you're using {{ech}} or a self-managed cluster, go to **{{es}} > Home** and click **Create API index**. Select the vector search workflow. +1. Go to **Index Management** in the management page using the navigation menu or the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). +2. Select **Create index**. Specify a name and then select **Create my index**. +3. Select the **Vector Search** option and follow the guided workflow. When you finish your tests and no longer need the sample data set, delete the index: diff --git a/solutions/search/vector/dense-versus-sparse-ingest-pipelines.md b/solutions/search/vector/dense-versus-sparse-ingest-pipelines.md index c7fd50b8b3..91888a5838 100644 --- a/solutions/search/vector/dense-versus-sparse-ingest-pipelines.md +++ b/solutions/search/vector/dense-versus-sparse-ingest-pipelines.md @@ -17,7 +17,7 @@ products: * This tutorial predates the [{{infer}} endpoint](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-inference) and the [`semantic_text` field type](elasticsearch://reference/elasticsearch/mapping-reference/semantic-text.md). Today there are simpler, higher-level options for semantic search than the ones outlined in this tutorial. 
The semantic text workflow is the recommended way to perform semantic search for most use cases. :::: -**This guide shows how to implement semantic search in {{es}} with deployed NLP models: from selecting a model, to configuring ingest pipelines, to running queries.** +This guide shows how to implement semantic search in {{es}} with deployed NLP models: from selecting a model, to configuring ingest pipelines, to running queries. ## Select an NLP model [deployed-select-nlp-model]