8 changes: 4 additions & 4 deletions deploy-manage/api-keys/serverless-project-api-keys.md
@@ -14,7 +14,7 @@ In serverless projects, the following types of API keys exist:
- **Personal** API keys, which you can create to allow external services to access your serverless project on behalf of a user.
- **Managed** API keys, created and managed by {{kib}} to correctly run background tasks.

-You can manage your keys in **{{project-settings}} → {{manage-app}} → {{api-keys-app}}**:
+You can manage your keys in **{{project-settings}} → {{manage-app}} → API keys**:

:::{image} /deploy-manage/images/serverless-api-key-management.png
:alt: API keys UI
@@ -24,7 +24,7 @@ You can manage your keys in **{{project-settings}} → {{manage-app}} → {{api-

## Create an API key [api-keys-create-an-api-key]

-In **{{api-keys-app}}**, click **Create API key**:
+In **API keys**, click **Create API key**:

:::{image} /deploy-manage/images/serverless-create-personal-api-key.png
:alt: Create API key UI
@@ -77,11 +77,11 @@ For the `role_descriptors` object schema, check out the [`/_security/api_key` en

## Update an API key [api-keys-update-an-api-key]

-In **{{api-keys-app}}**, click on the name of the key. You can update only **Restrict privileges** and **Include metadata**.
+In **API keys**, click on the name of the key. You can update only **Restrict privileges** and **Include metadata**.


## View and delete API keys [api-keys-view-and-delete-api-keys]

-The **{{api-keys-app}}** app lists your API keys, including the name, date created, and status. When API keys expire, the status changes from `Active` to `Expired`.
+The **API keys** app lists your API keys, including the name, date created, and status. When API keys expire, the status changes from `Active` to `Expired`.

You can delete API keys individually or in bulk.
6 changes: 3 additions & 3 deletions deploy-manage/autoscaling/autoscaling-deciders.md
@@ -68,7 +68,7 @@ The estimation of expected additional data is based on past indexing that occurr
: (Optional, [time value](elasticsearch://reference/elasticsearch/rest-apis/api-conventions.md#time-units)) The window of time to use for forecasting. Defaults to 30 minutes.


-### {{api-examples-title}} [autoscaling-proactive-storage-decider-examples]
+### Examples [autoscaling-proactive-storage-decider-examples]

This example puts an autoscaling policy named `my_autoscaling_policy`, overriding the proactive decider’s `forecast_window` to be 10 minutes.
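The request body for this example is collapsed in the diff view above. Based on the decider name and setting described in this section, the call likely resembles the following sketch (the `roles` value is illustrative and may differ from the original example):

```console
PUT /_autoscaling/policy/my_autoscaling_policy
{
  "roles": [ "data_hot" ],
  "deciders": {
    "proactive_storage": {
      "forecast_window": "10m"
    }
  }
}
```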

@@ -143,7 +143,7 @@ Both `num_anomaly_jobs_in_queue` and `num_analytics_jobs_in_queue` are designed
: (Optional, [time value](elasticsearch://reference/elasticsearch/rest-apis/api-conventions.md#time-units)) Specifies the time to delay before scaling down. Defaults to 1 hour. If a scale down is possible for the entire time window, then a scale down is requested. If the cluster requires a scale up during the window, the window is reset.


-### {{api-examples-title}} [autoscaling-machine-learning-decider-examples]
+### Examples [autoscaling-machine-learning-decider-examples]

This example creates an autoscaling policy named `my_autoscaling_policy` that overrides the default configuration of the {{ml}} decider.
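The policy body is collapsed in this diff view. A sketch using the {{ml}} decider settings documented above (the numeric and time values are illustrative, not the documented defaults):

```console
PUT /_autoscaling/policy/my_autoscaling_policy
{
  "roles": [ "ml" ],
  "deciders": {
    "ml": {
      "num_anomaly_jobs_in_queue": 5,
      "num_analytics_jobs_in_queue": 5,
      "down_scale_delay": "30m"
    }
  }
}
```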

@@ -198,7 +198,7 @@ The [autoscaling](../../deploy-manage/autoscaling.md) `fixed` decider responds w
: (Optional, integer) Number of nodes to use when calculating capacity. Defaults to `1`.


-### {{api-examples-title}} [autoscaling-fixed-decider-examples]
+### Examples [autoscaling-fixed-decider-examples]

This example puts an autoscaling policy named `my_autoscaling_policy`, enabling and configuring the fixed decider.
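The request body is collapsed in this diff view; the following is a sketch under the assumption that the example configures the fixed decider's documented settings (the `storage`, `memory`, and role values are illustrative):

```console
PUT /_autoscaling/policy/my_autoscaling_policy
{
  "roles": [ "master" ],
  "deciders": {
    "fixed": {
      "storage": "1tb",
      "memory": "32gb",
      "nodes": 1
    }
  }
}
```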

2 changes: 1 addition & 1 deletion deploy-manage/deploy/self-managed/air-gapped-install.md
@@ -243,7 +243,7 @@ WantedBy=default.target

## {{artifact-registry}} [air-gapped-elastic-artifact-registry]

-Air-gapped install of the {{artifact-registry}} is necessary in order to enable {{agent}} deployments to perform self-upgrades and install certain components which are needed for some of the data integrations (that is, in addition to what is also retrieved from the EPR). To learn more, refer to [Host your own artifact registry for binary downloads](/reference/fleet/air-gapped.md#host-artifact-registry) in the {{fleet}} and {{elastic-agent}} Guide.
+Air-gapped install of the {{artifact-registry}} is necessary in order to enable {{agent}} deployments to perform self-upgrades and install certain components which are needed for some of the data integrations (that is, in addition to what is also retrieved from the EPR). To learn more, refer to [Host your own artifact registry for binary downloads](/reference/fleet/air-gapped.md#host-artifact-registry) in the {{fleet}} and {{agent}} Guide.

::::{note}
When setting up your own web server, such as NGINX, to function as the {{artifact-registry}}, it is recommended not to use TLS because there are currently no direct ways to establish certificate trust between {{agents}} and this service.
@@ -41,13 +41,13 @@

* {{ccs-cap}} and {{ccr}} require the `remote_cluster_client` role.
* {{stack-monitor-app}} and ingest pipelines require the `ingest` role.
-* {{fleet}}, the {{security-app}}, and {{transforms}} require the `transform` role. The `remote_cluster_client` role is also required to use {{ccs}} with these features.
+* {{fleet}}, the {{security-app}}, and transforms require the `transform` role. The `remote_cluster_client` role is also required to use {{ccs}} with these features.
* {{ml-cap}} features, such as {{anomaly-detect}}, require the `ml` role.

::::


-As the cluster grows and in particular if you have large {{ml}} jobs or {{ctransforms}}, consider separating dedicated master-eligible nodes from dedicated data nodes, {{ml}} nodes, and {{transform}} nodes.
+As the cluster grows and in particular if you have large {{ml}} jobs or {{ctransforms}}, consider separating dedicated master-eligible nodes from dedicated data nodes, {{ml}} nodes, and transform nodes.


## Change the role of a node [change-node-role]
@@ -82,7 +82,7 @@
* [Ingest node](#node-ingest-node) (`ingest`): Ingest nodes are able to apply an [ingest pipeline](../../../manage-data/ingest/transform-enrich/ingest-pipelines.md) to a document in order to transform and enrich the document before indexing. With a heavy ingest load, it makes sense to use dedicated ingest nodes and to exclude the `ingest` role from nodes that have the `master` or `data` roles.
* [Remote-eligible node](#remote-node) (`remote_cluster_client`): A node that is eligible to act as a remote client.
* [Machine learning node](#ml-node-role) (`ml`): A node that can run {{ml-features}}. If you want to use {{ml-features}}, there must be at least one {{ml}} node in your cluster. For more information, see [Machine learning settings](../../deploy/self-managed/configure-elasticsearch.md) and [Machine learning in the {{stack}}](/explore-analyze/machine-learning.md).
-* [{{transform-cap}} node](#transform-node-role) (`transform`): A node that can perform {{transforms}}. If you want to use {{transforms}}, there must be at least one {{transform}} node in your cluster. For more information, see [{{transforms-cap}} settings](../../deploy/self-managed/configure-elasticsearch.md) and [*Transforming data*](../../../explore-analyze/transforms.md).
+* [Transform node](#transform-node-role) (`transform`): A node that can perform transforms. If you want to use transforms, there must be at least one transform node in your cluster. For more information, see [Transforms settings](../../deploy/self-managed/configure-elasticsearch.md) and [*Transforming data*](../../../explore-analyze/transforms.md).


::::{admonition} Coordinating node
:class: note
@@ -299,15 +299,15 @@
The `remote_cluster_client` role is optional but strongly recommended. Otherwise, {{ccs}} fails when used in {{ml}} jobs or {{dfeeds}}. If you use {{ccs}} in your {{anomaly-jobs}}, the `remote_cluster_client` role is also required on all master-eligible nodes. Otherwise, the {{dfeed}} cannot start. See [Remote-eligible node](#remote-node).


-### {{transform-cap}} node [transform-node-role]
+### Transform node [transform-node-role]

-{{transform-cap}} nodes run {{transforms}} and handle {{transform}} API requests. For more information, see [{{transforms-cap}} settings](../../deploy/self-managed/configure-elasticsearch.md).
+Transform nodes run transforms and handle transform API requests. For more information, see [Transforms settings](../../deploy/self-managed/configure-elasticsearch.md).


-To create a dedicated {{transform}} node, set:
+To create a dedicated transform node, set:

```yaml
node.roles: [ transform, remote_cluster_client ]
```

-The `remote_cluster_client` role is optional but strongly recommended. Otherwise, {{ccs}} fails when used in {{transforms}}. See [Remote-eligible node](#remote-node).
+The `remote_cluster_client` role is optional but strongly recommended. Otherwise, {{ccs}} fails when used in transforms. See [Remote-eligible node](#remote-node).


4 changes: 2 additions & 2 deletions deploy-manage/monitor/stack-monitoring/es-local-exporter.md
@@ -13,7 +13,7 @@ products:
:::{include} _snippets/legacy-warning.md
:::

-The `local` exporter is the default exporter in {{monitoring}}. It routes data back into the same (local) cluster. In other words, it uses the production cluster as the monitoring cluster. For example:
+The `local` exporter is the default exporter in {{monitor-features}}. It routes data back into the same (local) cluster. In other words, it uses the production cluster as the monitoring cluster. For example:

```yaml
xpack.monitoring.exporters.my_local_exporter: <1>
@@ -27,7 +27,7 @@ This exporter exists to provide a convenient option when hardware is simply not 

* All indexing impacts the local cluster and the nodes that hold the monitoring indices' shards.
* Most collectors run on the elected master node. Therefore most indexing occurs with the elected master node as the coordinating node, which is a bad practice.
-* Any usage of {{monitoring}} for {{kib}} uses the local cluster’s resources for searches and aggregations, which means that they might not be available for non-monitoring tasks.
+* Any usage of {{monitor-features}} for {{kib}} uses the local cluster’s resources for searches and aggregations, which means that they might not be available for non-monitoring tasks.
* If the local cluster goes down, the monitoring cluster has inherently gone down with it (and vice versa), which generally defeats the purpose of monitoring.

For the `local` exporter, all setup occurs only on the elected master node. This means that if you do not see any monitoring templates or ingest pipelines, the elected master node is having issues or it is not configured in the same way. Unlike the `http` exporter, the `local` exporter has the advantage of accessing the monitoring cluster’s up-to-date cluster state. It can therefore always check that the templates and ingest pipelines exist without a performance penalty. If the elected master node encounters errors while trying to create the monitoring resources, it logs errors, ignores that collection, and tries again after the next collection.
@@ -14,7 +14,7 @@ products:
:::{include} _snippets/legacy-warning.md
:::

-Collectors, as their name implies, collect things. Each collector runs once for each collection interval to obtain data from the public APIs in {{es}} and {{xpack}} that it chooses to monitor. When the data collection is finished, the data is handed in bulk to the [exporters](es-monitoring-exporters.md) to be sent to the monitoring clusters. Regardless of the number of exporters, each collector only runs once per collection interval.
+Collectors, as their name implies, collect things. Each collector runs once for each collection interval to obtain data from the public APIs in {{es}} that it chooses to monitor. When the data collection is finished, the data is handed in bulk to the [exporters](es-monitoring-exporters.md) to be sent to the monitoring clusters. Regardless of the number of exporters, each collector only runs once per collection interval.

There is only one collector per data type gathered. In other words, for any monitoring document that is created, it comes from a single collector rather than being merged from multiple collectors. The {{es}} {{monitor-features}} currently have a few collectors because the goal is to minimize overlap between them for optimal performance.

4 changes: 2 additions & 2 deletions deploy-manage/monitor/stack-monitoring/es-pause-export.md
@@ -14,7 +14,7 @@
:::{include} _snippets/legacy-warning.md
:::

-To stop generating {{monitoring}} data in {{es}}, disable data collection:
+To stop generating monitoring data in {{es}}, disable data collection:


```yaml
xpack.monitoring.collection.enabled: false
```
@@ -43,7 +43,7 @@
1. Disable the named exporter. If the name does not match an existing exporter, this creates a new exporter that is completely ignored. This value can be set dynamically by using cluster settings.


::::{note}
::::{note}
Defining a disabled exporter prevents the default exporter from being created.
::::

@@ -72,7 +72,7 @@ To learn about monitoring in general, refer to [](/deploy-manage/monitor/stack-m
For more information, see [Monitoring settings in {{es}}](elasticsearch://reference/elasticsearch/configuration-reference/monitoring-settings.md) and [the Cluster update settings API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-settings).

4. [Install {{metricbeat}}](beats://reference/metricbeat/metricbeat-installation-configuration.md) on the same server as {{kib}}.
-5. Enable the {{kib}} {{xpack}} module in {{metricbeat}}.<br>
+5. Enable the {{kib}} X-Pack module in {{metricbeat}}.<br>
> **Reviewer comment:**
>
> > 1. Enable the {{kib}} X-Pack module in {{metricbeat}}.
>
> I thought maybe this was no longer a thing either, but I see it's still mentioned in https://www.elastic.co/docs/reference/beats/metricbeat/metricbeat-module-kibana#_usage_for_stack_monitoring_4
>
> Thanks for incorporating my other suggestions; now it LGTM.


For example, to enable the default configuration in the `modules.d` directory, run the following command:

@@ -82,7 +82,7 @@

For more information, see [Specify which modules to run](beats://reference/metricbeat/configuration-metricbeat.md) and [{{kib}} module](beats://reference/metricbeat/metricbeat-module-kibana.md).

-6. Configure the {{kib}} {{xpack}} module in {{metricbeat}}.<br>
+6. Configure the {{kib}} X-Pack module in {{metricbeat}}.<br>

The `modules.d/kibana-xpack.yml` file contains the following settings:

4 changes: 2 additions & 2 deletions deploy-manage/remote-clusters/remote-clusters-migrate.md
@@ -109,7 +109,7 @@ On the remote cluster:

On the local cluster, stop any persistent tasks that refer to the remote cluster:

-* Use the [Stop {{transforms}}](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-stop-transform) API to stop any transforms.
+* Use the [Stop transforms](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-stop-transform) API to stop any transforms.
* Use the [Close jobs](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-close-job) API to close any anomaly detection jobs.
* Use the [Pause auto-follow pattern](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ccr-pause-auto-follow-pattern) API to pause any auto-follow {{ccr}}.
* Use the [Pause follower](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ccr-pause-follow) API to pause any manual {{ccr}} or existing indices that were created from the auto-follow pattern.
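As a concrete illustration of the first bullet, stopping a single transform looks like the following sketch (the transform ID `my-remote-transform` is hypothetical; `wait_for_completion` is optional):

```console
POST _transform/my-remote-transform/_stop?wait_for_completion=true
```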
@@ -218,7 +218,7 @@ On the local cluster:

Resume any persistent tasks that you stopped earlier. Tasks should be restarted by the same user or API key that created the task before the migration. Ensure the roles of this user or API key have been updated with the required `remote_indices` or `remote_cluster` privileges. For users, tasks capture the caller’s credentials when started and run in that user’s security context. For API keys, restarting a task will update the task with the updated API key.

-* Use the [Start {{transform}}](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-start-transform) API to start any transforms.
+* Use the [Start transform](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-start-transform) API to start any transforms.
* Use the [Open jobs](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-open-job) API to open any anomaly detection jobs.
* Use the [Resume auto-follow pattern](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ccr-resume-auto-follow-pattern) API to resume any auto-follow {{ccr}}.
* Use the [Resume follower](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ccr-resume-follow) API to resume any manual {{ccr}} or existing indices that were created from the auto-follow pattern.
12 changes: 0 additions & 12 deletions docset.yml
@@ -129,10 +129,7 @@ subs:
project-settings: "Project settings"
manage-app: "Management"
index-manage-app: "Index Management"
-rules-app: "Rules"
saved-objects-app: "Saved Objects"
-api-keys-app: "API keys"
-connectors-app: "Connectors"
reports-app: "Reports"
app-search-crawler: "App Search web crawler"
agent: "Elastic Agent"
@@ -145,7 +142,7 @@ subs:
artifact-registry: "Elastic Artifact Registry"
aws: "AWS"
stack: "Elastic Stack"
-xpack: "X-Pack"
es: "Elasticsearch"
kib: "Kibana"
ls: "Logstash"
@@ -157,7 +153,7 @@ subs:
packetbeat: "Packetbeat"
winlogbeat: "Winlogbeat"
esql: "ES|QL"
-elastic-agent: "Elastic Agent"
k8s: "Kubernetes"
security-features: "security features"
operator-feature: "operator privileges feature"
@@ -178,7 +173,7 @@ subs:
webhook-cm: "Webhook - Case Management"
opsgenie: "Opsgenie"
hive: "TheHive"
-monitoring: "X-Pack monitoring"
monitor-features: "monitoring features"
stack-monitor-features: "Elastic Stack monitoring features"
watcher: "Watcher"
@@ -226,10 +220,6 @@ subs:
watcher-transform: "payload transform"
watcher-transforms: "payload transforms"
watcher-transforms-cap: "Payload transforms"
-transform: "transform"
-transforms: "transforms"
-transform-cap: "Transform"
-transforms-cap: "Transforms"
dfanalytics-cap: "Data frame analytics"
dfanalytics: "data frame analytics"
dfanalytics-job: "data frame analytics analytics job"
@@ -271,8 +261,6 @@ subs:
private-locations: "Private Locations"
esh: "ES-Hadoop"
observability: "Observability"
-api-query-parms-title: "Query parameters"
-api-examples-title: "Examples"
agent-pull: "https://github.com/elastic/elastic-agent/pull/"
es-pull: "https://github.com/elastic/elasticsearch/pull/"
kib-pull: "https://github.com/elastic/kibana/pull/"
4 changes: 2 additions & 2 deletions explore-analyze/alerts-cases/alerts.md
@@ -14,7 +14,7 @@ products:

# Alerts

-## {{rules-app}} [rules]
+## {{rules-ui}} [rules]

In general, a rule consists of three parts:

@@ -56,7 +56,7 @@ When defining actions in a rule, you specify:
* An action frequency
* A mapping of rule values to properties exposed for that type of action

-Each action uses a connector, which provides connection information for a {{kib}} service or third-party integration, depending on where you want to send the notifications. The specific list of connectors that you can use in your rule varies by project type. Refer to [{{connectors-app}}](../../deploy-manage/manage-connectors.md).
+Each action uses a connector, which provides connection information for a {{kib}} service or third-party integration, depending on where you want to send the notifications. The specific list of connectors that you can use in your rule varies by project type. Refer to [{{connectors-ui}}](../../deploy-manage/manage-connectors.md).

After you select a connector, set the *action frequency*. If you want to reduce the number of notifications you receive without affecting their timeliness, some rule types support alert summaries. For example, if you create an {{es}} query rule, you can set the action frequency such that you receive summaries of the new, ongoing, and recovered alerts on a custom interval:
