
Commit fd9a841

Authored by florent-leborgne, with nastasha-solomon, shainaraskas, and colleenmcginnis
[E&A][Troubleshoot][Reference] Update nav instructions for 9.2 data management menu (#3319)
This PR updates references to pages that are moving from Stack Management to Data Management in 9.2 for the following folders of the docs-content repo:

- explore-analyze
- troubleshoot
- reference

I also checked the following and there are no occurrences to fix there:

- cloud-account
- get-started

Separate PRs will be made by their owners for /solutions, /deploy-manage, and /manage-data.

Closes: #3274
Closes: #3271
Closes: #3275
Rel: #3276

---------

Co-authored-by: Nastasha Solomon <[email protected]>
Co-authored-by: shainaraskas <[email protected]>
Co-authored-by: Colleen McGinnis <[email protected]>
1 parent 860444d commit fd9a841

20 files changed (+35, -34 lines)

explore-analyze/machine-learning/machine-learning-in-kibana/inference-processing.md

Lines changed: 1 addition & 1 deletion
@@ -106,7 +106,7 @@ Here, you’ll be able to:

Inference processors added to your index-specific ML {{infer}} pipelines are normal Elasticsearch pipelines. Once created, each processor will have options to **View in Stack Management** and **Delete Pipeline**. Deleting an {{infer}} processor from within the **Content** UI deletes the pipeline and also removes its reference from your index-specific ML {{infer}} pipeline.

-These pipelines can also be viewed, edited, and deleted in Kibana via **Stack Management → Ingest Pipelines**, just like all other Elasticsearch ingest pipelines. You may also use the [Ingest pipeline APIs](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-ingest). If you delete any of these pipelines outside of the **Content** UI in Kibana, make sure to edit the ML {{infer}} pipelines that reference them.
+These pipelines can also be viewed, edited, and deleted in Kibana from the **Ingest Pipelines** management page, just like all other Elasticsearch ingest pipelines. You may also use the [Ingest pipeline APIs](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-ingest). If you delete any of these pipelines outside of the **Content** UI in Kibana, make sure to edit the ML {{infer}} pipelines that reference them.

## Test your ML {{infer}} pipeline [ingest-pipeline-search-inference-test-inference-pipeline]
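For reference, the Ingest pipeline APIs mentioned above let you inspect or remove these pipelines directly. A minimal sketch, assuming a hypothetical pipeline name `my-inference-pipeline`:

```console
GET _ingest/pipeline/my-inference-pipeline

DELETE _ingest/pipeline/my-inference-pipeline
```

If you delete a pipeline this way, remember to update any ML {{infer}} pipelines that still reference it.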

explore-analyze/machine-learning/nlp/ml-nlp-inference.md

Lines changed: 2 additions & 2 deletions
@@ -19,10 +19,10 @@ After you [deploy a trained model in your cluster](ml-nlp-deploy-models.md), you

## Add an {{infer}} processor to an ingest pipeline [ml-nlp-inference-processor]

-In {{kib}}, you can create and edit pipelines in **{{stack-manage-app}}** > **Ingest Pipelines**. To open **Ingest Pipelines**, find **{{stack-manage-app}}** in the main menu, or use the [global search field](../../find-and-organize/find-apps-and-objects.md).
+In {{kib}}, you can create and edit pipelines from the **Ingest Pipelines** management page. You can find this page in the main menu or using the [global search field](../../find-and-organize/find-apps-and-objects.md).

:::{image} /explore-analyze/images/machine-learning-ml-nlp-pipeline-lang.png
-:alt: Creating a pipeline in the Stack Management app
+:alt: Creating a pipeline
:screenshot:
:::

explore-analyze/machine-learning/nlp/ml-nlp-ner-example.md

Lines changed: 1 addition & 1 deletion
@@ -117,7 +117,7 @@ Using the example text "Elastic is headquartered in Mountain View, California.",

You can perform bulk {{infer}} on documents as they are ingested by using an [{{infer}} processor](elasticsearch://reference/enrich-processor/inference-processor.md) in your ingest pipeline. The novel *Les Misérables* by Victor Hugo is used as an example for {{infer}} in the following example. [Download](https://github.com/elastic/stack-docs/blob/8.5/docs/en/stack/ml/nlp/data/les-miserables-nd.json) the novel text split by paragraph as a JSON file, then upload it by using the [Data Visualizer](../../../manage-data/ingest/upload-data-files.md). Give the new index the name `les-miserables` when uploading the file.

-Now create an ingest pipeline either in the [Stack management UI](ml-nlp-inference.md#ml-nlp-inference-processor) or by using the API:
+Now create an ingest pipeline either from the [Ingest Pipelines](ml-nlp-inference.md#ml-nlp-inference-processor) management page in {{kib}} or by using the API:

```js
PUT _ingest/pipeline/ner
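The request body is truncated in this diff. As context only, a minimal sketch of what such an {{infer}} pipeline request can look like, assuming a deployed NER model with the hypothetical ID `my-ner-model` and documents that store their text in a `paragraph` field:

```console
PUT _ingest/pipeline/ner
{
  "description": "Adds NER results to ingested paragraphs",
  "processors": [
    {
      "inference": {
        "model_id": "my-ner-model",
        "target_field": "ml.ner",
        "field_map": {
          "paragraph": "text_field"
        }
      }
    }
  ]
}
```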

explore-analyze/machine-learning/nlp/ml-nlp-text-emb-vector-search-example.md

Lines changed: 1 addition & 1 deletion
@@ -116,7 +116,7 @@ Upload the file by using the [Data Visualizer](../../../manage-data/ingest/uploa

Process the initial data with an [{{infer}} processor](elasticsearch://reference/enrich-processor/inference-processor.md). It adds an embedding for each passage. For this, create a text embedding ingest pipeline and then reindex the initial data with this pipeline.

-Now create an ingest pipeline either in the [{{stack-manage-app}} UI](ml-nlp-inference.md#ml-nlp-inference-processor) or by using the API:
+Now create an ingest pipeline either from the [Ingest Pipelines](ml-nlp-inference.md#ml-nlp-inference-processor) management page in {{kib}} or by using the API:

```js
PUT _ingest/pipeline/text-embeddings
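As with the NER example, the request body is truncated here. A minimal sketch of a text embedding pipeline, assuming a deployed text embedding model with the hypothetical ID `my-text-embedding-model` and source documents that carry their passage in a `text` field:

```console
PUT _ingest/pipeline/text-embeddings
{
  "description": "Adds a dense vector embedding for each passage",
  "processors": [
    {
      "inference": {
        "model_id": "my-text-embedding-model",
        "target_field": "text_embedding",
        "field_map": {
          "text": "text_field"
        }
      }
    }
  ]
}
```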

explore-analyze/machine-learning/setting-up-machine-learning.md

Lines changed: 2 additions & 2 deletions
@@ -37,7 +37,7 @@ Assigning security privileges affects how users access {{ml-features}}. Consider

You can configure these privileges

-* under **Security**. To open Security, find **{{stack-manage-app}}** in the main menu or use the [global search field](../find-and-organize/find-apps-and-objects.md).
+* under the **Roles** and **Spaces** management pages. Find these pages in the main menu or use the [global search field](../find-and-organize/find-apps-and-objects.md).
* via the respective {{es}} security APIs.

### {{es}} API user [es-security-privileges]

@@ -68,7 +68,7 @@ Granting `All` or `Read` {{kib}} feature privilege for {{ml-app}} will also gran

#### Feature visibility in Spaces [kib-visibility-spaces]

-In {{kib}}, the {{ml-features}} must be visible in your [space](../../deploy-manage/manage-spaces.md). To manage which features are visible in your space, go to **{{stack-manage-app}}** > **{{kib}}** > **Spaces** or use the [global search field](../find-and-organize/find-apps-and-objects.md) to locate **Spaces** directly.
+In {{kib}}, the {{ml-features}} must be visible in your [space](../../deploy-manage/manage-spaces.md). To manage which features are visible in your space, go to the **Spaces** management page using the navigation menu or the [global search field](../find-and-organize/find-apps-and-objects.md).

:::{image} /explore-analyze/images/machine-learning-spaces.jpg
:alt: Manage spaces in {{kib}}
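As context for the second option above (the {{es}} security APIs), a minimal sketch of granting ML-related privileges through a role; the role name and source index are hypothetical:

```console
POST _security/role/ml_user_example
{
  "cluster": [ "monitor_ml" ],
  "indices": [
    {
      "names": [ "my-ml-source-index" ],
      "privileges": [ "read", "view_index_metadata" ]
    }
  ]
}
```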

explore-analyze/transforms/ecommerce-transforms.md

Lines changed: 1 addition & 1 deletion
@@ -23,7 +23,7 @@ products:

For example, you might want to group the data by product ID and calculate the total number of sales for each product and its average price. Alternatively, you might want to look at the behavior of individual customers and calculate how much each customer spent in total and how many different categories of products they purchased. Or you might want to take the currencies or geographies into consideration. What are the most interesting ways you can transform and interpret this data?

-Go to **Management** > **Stack Management** > **Data** > **Transforms** in {{kib}} and use the wizard to create a {{transform}}:
+Go to the **Transforms** management page in {{kib}} using the navigation menu or the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md), then use the wizard to create a {{transform}}:
:::{image} /explore-analyze/images/elasticsearch-reference-ecommerce-pivot1.png
:alt: Creating a simple {{transform}} in {{kib}}
:screenshot:
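As an illustration of the first idea in that paragraph (grouping by product and computing totals and averages), a minimal sketch of a comparable {{transform}} created through the API instead of the wizard; the index and field names follow the {{kib}} eCommerce sample data and are assumptions here:

```console
PUT _transform/ecommerce-product-sales-example
{
  "source": { "index": "kibana_sample_data_ecommerce" },
  "dest": { "index": "ecommerce-product-sales-example" },
  "pivot": {
    "group_by": {
      "product_id": { "terms": { "field": "products.product_id" } }
    },
    "aggregations": {
      "total_sales": { "value_count": { "field": "order_id" } },
      "avg_price": { "avg": { "field": "products.price" } }
    }
  }
}
```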

explore-analyze/transforms/transform-checkpoints.md

Lines changed: 1 addition & 1 deletion
@@ -45,7 +45,7 @@ If the cluster experiences unsuitable performance degradation due to the {{trans

In most cases, it is strongly recommended to use the ingest timestamp of the source indices for syncing the {{transform}}. This is the most optimal way for {{transforms}} to be able to identify new changes. If your data source follows the [ECS standard](ecs://reference/index.md), you might already have an [`event.ingested`](ecs://reference/ecs-event.md#field-event-ingested) field. In this case, use `event.ingested` as the `sync`.`time`.`field` property of your {{transform}}.

-If you don’t have a `event.ingested` field or it isn’t populated, you can set it by using an ingest pipeline. Create an ingest pipeline either using the [ingest pipeline API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ingest-put-pipeline) (like the example below) or via {{kib}} under **Stack Management > Ingest Pipelines**. Use a [`set` processor](elasticsearch://reference/enrich-processor/set-processor.md) to set the field and associate it with the value of the ingest timestamp.
+If you don’t have a `event.ingested` field or it isn’t populated, you can set it by using an ingest pipeline. Create an ingest pipeline either using the [ingest pipeline API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ingest-put-pipeline) (like the example below) or via {{kib}}'s **Ingest Pipelines** management page. Use a [`set` processor](elasticsearch://reference/enrich-processor/set-processor.md) to set the field and associate it with the value of the ingest timestamp.

```console
PUT _ingest/pipeline/set_ingest_time
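The example referenced by that paragraph is truncated in this diff. For context, a minimal sketch of such a pipeline, pairing the `set` processor with the ingest timestamp; the body shown here is illustrative rather than the exact example from the page:

```console
PUT _ingest/pipeline/set_ingest_time
{
  "description": "Copies the ingest timestamp into event.ingested",
  "processors": [
    {
      "set": {
        "field": "event.ingested",
        "value": "{{{_ingest.timestamp}}}"
      }
    }
  ]
}
```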

explore-analyze/transforms/transform-limitations.md

Lines changed: 1 addition & 1 deletion
@@ -132,7 +132,7 @@ When running a large number of SLO {{transforms}}, two types of limitations can

#### {{transforms-cap}} can return inaccurate errors that suggest deletion [transforms-inaccurate-errors]

-The {{transforms-cap}} API and the {{transforms-cap}} page in {{kib}} (**Stack Management** > **{{transforms-cap}})** may display misleading error messages for {{transforms}} created by service level objectives (SLOs).
+The {{transforms-cap}} API and the {{transforms-cap}} management page in {{kib}} may display misleading error messages for {{transforms}} created by service level objectives (SLOs).

The message typically reads:


reference/fleet/data-streams-pipeline-tutorial.md

Lines changed: 7 additions & 6 deletions
@@ -19,18 +19,19 @@ This tutorial explains how to add a custom ingest pipeline to an Elastic Integra

Create a custom ingest pipeline that will be called by the default integration pipeline. In this tutorial, we’ll create a pipeline that adds a new field to our documents.

-1. In {{kib}}, navigate to **Stack Management** → **Ingest Pipelines** → **Create pipeline** → **New pipeline**.
-2. Name your pipeline. We’ll call this one, `add_field`.
-3. Select **Add a processor**. Fill out the following information:
+1. In {{kib}}, go to the **Ingest Pipelines** management page using the navigation menu or the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md).
+1. **Create pipeline** → **New pipeline**.
+1. Name your pipeline. We’ll call this one, `add_field`.
+1. Select **Add a processor**. Fill out the following information:

    * Processor: "Set"
    * Field: `test`
    * Value: `true`

    The [Set processor](elasticsearch://reference/enrich-processor/set-processor.md) sets a document field and associates it with the specified value.

-4. Click **Add**.
-5. Click **Create pipeline**.
+1. Click **Add**.
+1. Click **Create pipeline**.


## Step 2: Apply your ingest pipeline [data-streams-pipeline-two]

@@ -55,7 +56,7 @@ Most integrations write to multiple data streams. You’ll need to add the custo

1. Find the first data stream you wish to edit and select **Change defaults**. For this tutorial, find the data stream configuration titled, **Collect metrics from System instances**.
2. Scroll to **System CPU metrics** and under **Advanced options** select **Add custom pipeline**.

    -This will take you to the **Create pipeline** workflow in **Stack management**.
    +This will take you to the **Create pipeline** workflow.

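The tutorial builds the `add_field` pipeline through the UI; for readers who prefer the API, a rough equivalent of those steps (a sketch, using the pipeline name, field, and value given above) looks like this:

```console
PUT _ingest/pipeline/add_field
{
  "description": "Adds a test field to each document",
  "processors": [
    {
      "set": {
        "field": "test",
        "value": true
      }
    }
  ]
}
```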

reference/fleet/data-streams-scenario1.md

Lines changed: 2 additions & 2 deletions
@@ -22,7 +22,7 @@ This tutorial explains how to apply a custom index lifecycle policy to all of th

## Step 1: Create an index lifecycle policy [data-streams-scenario1-step1]

-1. To open **Lifecycle Policies**, find **Stack Management** in the main menu or use the [global search field](/get-started/the-stack.md#kibana-navigation-search).
+1. Go to the **Index Lifecycle Policies** management page using the navigation menu or the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md).
2. Click **Create policy**.

Name your new policy. For this tutorial, you can use `my-ilm-policy`. Customize the policy to your liking, and when you’re done, click **Save policy**.

@@ -32,7 +32,7 @@ Name your new policy. For this tutorial, you can use `my-ilm-policy`. Customize

The **Index Templates** view in {{kib}} shows you all of the index templates available to automatically apply settings, mappings, and aliases to indices:

-1. To open **Index Management**, find **Stack Management** in the main menu or use the [global search field](/get-started/the-stack.md#kibana-navigation-search).
+1. Go to the **Index Management** page using the navigation menu or the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md).
2. Select **Index Templates**.
3. Search for `system` to see all index templates associated with the System integration.
4. Select any `logs-*` index template to view the associated component templates. For example, you can select the `logs-system.application` index template.
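Step 1 creates the `my-ilm-policy` policy in the UI; a minimal API sketch of a comparable policy follows, with illustrative phase settings that are assumptions here:

```console
PUT _ilm/policy/my-ilm-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_primary_shard_size": "50gb",
            "max_age": "30d"
          }
        }
      },
      "delete": {
        "min_age": "90d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
```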
