Commit 8c4b3d5

Merge branch 'main' into issue-3721-delete-schedules
2 parents c008cc8 + 6e87460 commit 8c4b3d5

26 files changed: +337 −39 lines

.vscode/extensions.json

Lines changed: 5 additions & 0 deletions
@@ -0,0 +1,5 @@
+{
+  "recommendations": [
+    "Elastic.elastic-docs-v3-utilities"
+  ]
+}

deploy-manage/autoscaling/autoscaling-in-ece-and-ech.md

Lines changed: 12 additions & 0 deletions
@@ -82,14 +82,26 @@ In {{ece}} deployments, a warning is also issued in the ECE `service-constructor`
 
 ## Restrictions and limitations[ec-autoscaling-restrictions]
 
+### Generic limitations
+
 The following are known limitations and restrictions with autoscaling:
 
 * Autoscaling will not run if the cluster is unhealthy or if the last {{es}} plan failed.
 
+### {{ech}}-specific limitations
+
 In {{ech}} the following additional limitations apply:
 
 * Trial deployments cannot be configured to autoscale beyond the normal Trial deployment size limits. The maximum size per zone is increased automatically from the Trial limit when you convert to a paid subscription.
 * ELSER deployments do not scale automatically. For more information, refer to [ELSER](../../explore-analyze/machine-learning/nlp/ml-nlp-elser.md) and [Trained model autoscaling](../autoscaling/trained-model-autoscaling.md).
+* The maximum configurable node count is 32. This means that the `Maximum size per zone` in autoscaling is limited to the maximum RAM size of the instance configuration in use, multiplied by 32.
+
+  For example, for the instance configuration [`aws.es.datahot.c6gd`](cloud://reference/cloud-hosted/aws-default.md), the maximum RAM size is 60GB. Therefore, the `Maximum size per zone` in autoscaling is limited to `60GB x 32 = 1.875TB` (displayed as `1.88TB` in the {{ecloud}} console UI).
+
+  For more details, refer to [Restrictions and known problems > Node count and size](../deploy/elastic-cloud/restrictions-known-problems.md#ec-node-count-size).
+
+### {{ece}}-specific limitations
 
 In {{ece}}, the following additional limitations apply:

deploy-manage/deploy/cloud-on-k8s/air-gapped-install.md

Lines changed: 5 additions & 1 deletion
@@ -10,10 +10,14 @@ products:
   - id: cloud-kubernetes
 ---
 
-# Running in air-gapped environments [k8s-air-gapped]
+# Running ECK in air-gapped environments [k8s-air-gapped]
 
 The ECK operator can be run in an air-gapped environment without access to the open internet when configured to avoid pulling container images from `docker.elastic.co`.
 
+:::{note}
+To deploy ECK in Google Distributed Cloud (GDC) air-gapped, refer to [Deploy ECK on GDC air-gapped](./eck-gdch.md).
+:::
+
 By default ECK does not require you to specify the container image for each {{stack}} application you deploy.
 
 ```yaml subs=true
deploy-manage/deploy/cloud-on-k8s/eck-gdch.md

Lines changed: 42 additions & 0 deletions
@@ -0,0 +1,42 @@
+---
+applies_to:
+  deployment:
+    eck: all
+products:
+  - id: cloud-kubernetes
+---
+
+# Deploy ECK on Google Distributed Cloud air-gapped
+
+You can install {{eck}} (ECK) directly from the marketplace available within your [Google Distributed Cloud (GDC) air-gapped](https://cloud.google.com/distributed-cloud/hosted/docs/latest/gdch/overview) environment.
+
+:::{note}
+The Elastic Package Registry (EPR) container image is not yet available in GDC and must be deployed manually if you plan to use [integrations](integration-docs://reference/index.md). All other dependencies, such as the Elastic Artifact Registry and Elastic Endpoint Artifact Repository, must also be built and hosted locally as described in the [air-gapped deployment documentation](/deploy-manage/deploy/self-managed/air-gapped-install.md).
+
+For extra guidance on running ECK in isolated environments, refer to [Running ECK in air-gapped environments](/deploy-manage/deploy/cloud-on-k8s/air-gapped-install.md).
+:::
+
+## Install ECK
+
+To install ECK:
+
+1. Open the **Marketplace** in your GDC console.
+2. Search for **Elastic Cloud on Kubernetes (BYOL)**.
+3. Click **Install**.
+4. Select a **user cluster**, review or adjust the installation parameters, and start the installation.
+
+If you prefer to customize the configuration, refer to the [ECK configuration guide](/deploy-manage/deploy/cloud-on-k8s/configure.md) for details on setting operator parameters in the **Configure the service** page.
+
+Once completed, ECK will be running in your GDC air-gapped environment.
+
+![ECK-GDCH](/deploy-manage/images/eck-gdch.png)
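After the installation finishes, you can confirm from a terminal that the operator is healthy. A minimal check, assuming the operator landed in the conventional `elastic-system` namespace used by standard ECK installs (a GDC Marketplace install may use a different namespace or workload name):

```shell
# Assumption: the operator runs as the elastic-operator StatefulSet in the
# elastic-system namespace; adjust -n if your Marketplace install differs.
kubectl get pods -n elastic-system

# Tail the operator log to verify it started without errors.
kubectl logs -n elastic-system sts/elastic-operator --tail=20
```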
+
+Next, open a terminal with `kubectl` and choose one of the following options:
+
+* [Start a trial](/deploy-manage/license/manage-your-license-in-eck.md#k8s-start-trial) to enable ECK’s enterprise features.
+* Continue using ECK in free & basic mode.
+* [Apply an Enterprise license](/deploy-manage/license/manage-your-license-in-eck.md#k8s-add-license).
+
+## Next steps
+
+Follow the [ECK Quickstart guide](/deploy-manage/deploy/cloud-on-k8s/elasticsearch-deployment-quickstart.md) to deploy {{es}} and {{kib}} for your use case, whether it’s [Observability](/solutions/observability.md), [Security](/solutions/security.md), or [Search](/solutions/search.md).

deploy-manage/deploy/cloud-on-k8s/install.md

Lines changed: 1 addition & 0 deletions
@@ -37,6 +37,7 @@ ECK supports multiple installation methods. Choose the one that best fits your i
 * [Install ECK using a Helm chart](./install-using-helm-chart.md)
 * [](./deploy-eck-on-openshift.md)
 * [](./deploy-eck-on-gke-autopilot.md)
+* [Deploy ECK on Google Distributed Cloud air-gapped](./eck-gdch.md)
 * [](./deploy-fips-compatible-version-of-eck.md)
 
 For air-gapped environments, refer to [](./air-gapped-install.md) to understand the requirements and installation considerations.

deploy-manage/deploy/elastic-cloud/restrictions-known-problems.md

Lines changed: 12 additions & 1 deletion
@@ -24,6 +24,8 @@ When using {{ecloud}}, there are some limitations you should be aware of:
 * [Restoring a snapshot across deployments](#ec-snapshot-restore-enterprise-search-kibana-across-deployments)
 * [Migrate Fleet-managed {{agents}} across deployments by restoring a snapshot](#ec-migrate-elastic-agent)
 * [Regions and Availability Zones](#ec-regions-and-availability-zone)
+* [Node count and size](#ec-node-count-size)
+* [Repository analysis API is unavailable in {{ecloud}}](#ec-repository-analyis-unavailable)
 
 For limitations related to logging and monitoring, check the [Restrictions and limitations](../../monitor/stack-monitoring/ece-ech-stack-monitoring.md#restrictions-monitoring) section of the logging and monitoring page.
 
@@ -131,6 +133,15 @@ To make a seamless migration, after restoring from a snapshot there are some add
 * The AWS `us-west-1` region is limited to two availability zones for {{es}} data nodes and one (tiebreaker only) virtual zone, as indicated by the `-z` in the AZ name (`us-west-1z`). Deployment creation with three availability zones for {{es}} data nodes for hot, warm, and cold tiers is not possible. This includes scaling an existing deployment with one or two AZs to three availability zones. The virtual zone `us-west-1z` can only hold an {{es}} tiebreaker node (no data nodes). The workaround is to use a different AWS US region that allows three availability zones, or to scale existing nodes up within the two availability zones.
 * The AWS `eu-central-2` region is limited to two availability zones for the CPU Optimized (ARM) hardware profile {{es}} data nodes and the warm/cold tiers. Deployment creation with three availability zones for {{es}} data nodes for the hot (CPU Optimized (ARM) profile), warm, and cold tiers is not possible. This includes scaling an existing deployment with one or two AZs to three availability zones. The workaround is to use a different AWS region that allows three availability zones, or to scale existing nodes up within the two availability zones.
 
-## Repository Analysis API is unavailable in {{ecloud}} [ec-repository-analyis-unavailable]
+## Node count and size [ec-node-count-size]
+
+* In the {{ecloud}} console UI, the maximum configurable node count is 32. The total RAM for `Size per zone` is calculated by multiplying the maximum RAM size of the [instance configuration](cloud://reference/cloud-hosted/hardware.md) in use by 32. For example, for the instance configuration [`aws.es.datahot.c6gd`](cloud://reference/cloud-hosted/aws-default.md), the maximum RAM size is 60GB. Therefore, the total RAM for `Size per zone` is `60GB x 32 = 1.875TB` (displayed as `1.88TB` in the {{ecloud}} console UI).
+
+  This maximum node count limitation applies to the UI and affects both the maximum `Size per zone` during manual scaling and the `Maximum size per zone` in autoscaling. The limit is in place to prevent users from inadvertently deploying excessive capacity.
+
+  This limitation does not apply when using the API for manual scaling or autoscaling. If you require additional capacity, you can use the [Elastic Cloud API](cloud://reference/cloud-hosted/ec-api-restful.md) to scale up or to configure the maximum size for autoscaling in a self-sufficient way. Refer to the [Update a deployment](cloud://reference/cloud-hosted/ec-api-deployment-crud.md#ec_update_a_deployment) example to learn how to make a deployment update request using the API.
+* Apart from the maximum node count configurable in the {{ecloud}} console UI, there are other service limits based on each instance configuration. These service limits are typically greater than 32. For more details, [contact Elastic support](/troubleshoot/index.md).
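The API-based path described above can be sketched with `curl`. This is a minimal outline, assuming an {{ecloud}} API key with deployment-update permissions; the deployment definition returned by the GET call typically needs to be trimmed down to the update-request shape (the `resources` topology, including any autoscaling maximums) before submitting it with PUT:

```shell
# Sketch only: set EC_API_KEY and DEPLOYMENT_ID for your environment.
# 1) Fetch the current deployment definition as the basis for the update.
curl -s -H "Authorization: ApiKey $EC_API_KEY" \
  "https://api.elastic-cloud.com/api/v1/deployments/$DEPLOYMENT_ID" \
  > deployment.json

# 2) Edit the desired topology sizes (for example, the autoscaling maximum)
#    in deployment.json, then submit the update request.
curl -s -X PUT \
  -H "Authorization: ApiKey $EC_API_KEY" \
  -H "Content-Type: application/json" \
  --data @deployment.json \
  "https://api.elastic-cloud.com/api/v1/deployments/$DEPLOYMENT_ID"
```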
+
+## Repository analysis API is unavailable in {{ecloud}} [ec-repository-analyis-unavailable]
 
 * The {{es}} [Repository analysis API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-snapshot-repository-analyze) is not available in {{ecloud}} because deployments default to having [operator privileges](../../users-roles/cluster-or-deployment-auth/operator-privileges.md) enabled, which prevents non-operator privileged users from using it, along with a number of other APIs.

deploy-manage/images/eck-gdch.png

399 KB

deploy-manage/monitor/stack-monitoring/kibana-monitoring-metricbeat.md

Lines changed: 1 addition & 1 deletion
@@ -14,7 +14,7 @@ products:
 # Collect monitoring data with Metricbeat [monitoring-metricbeat]
 
-Yu can use {{metricbeat}} to collect data about {{kib}} and ship it to the monitoring cluster.
+You can use {{metricbeat}} to collect data about {{kib}} and ship it to the monitoring cluster.
 
 To learn about monitoring in general, refer to [](/deploy-manage/monitor/stack-monitoring.md).

deploy-manage/remote-clusters.md

Lines changed: 11 additions & 0 deletions
@@ -24,6 +24,17 @@ Remote clusters are especially useful in two cases:
 In the case of remote clusters, the {{es}} cluster or deployment initiating the connection and requests is often referred to as the **local cluster**, while the {{es}} cluster or deployment receiving the requests is referred to as the **remote cluster**.
 ::::
 
+## Security models and connection modes
+
+When configuring remote clusters, you can choose between two security models and two connection modes. Both security models are compatible with either connection mode.
+
+- [Security models](./remote-clusters/security-models.md): API key–based authentication (recommended) or TLS certificate–based authentication (deprecated).
+- [Connection modes](./remote-clusters/connection-modes.md): Sniff mode (direct connections to {{es}} nodes) or proxy mode (connections through a reverse proxy or load balancer endpoint).
+
+::::{note}
+In managed or orchestrated environments, such as {{ech}}, {{ece}}, and {{eck}}, you can select the security model, but the connection mode is effectively limited to *proxy*. This is because sniff mode requires the {{es}} nodes' publish addresses to be directly reachable across clusters, which is generally not practical in containerized deployments.
+::::
+
 ## Setup
 
 Depending on the environment the local and remote clusters are deployed on and the security model you wish to use, the exact details needed to add a remote cluster vary but generally follow the same path:
deploy-manage/remote-clusters/connection-modes.md

Lines changed: 44 additions & 0 deletions
@@ -0,0 +1,44 @@
+---
+navigation_title: Connection modes
+applies_to:
+  stack: ga
+  serverless: unavailable
+products:
+  - id: elasticsearch
+---
+
+# Remote cluster connection modes
+
+When you configure a remote cluster, the local cluster needs a way to connect to the nodes of the remote cluster. {{es}} supports two connection modes to handle different network architectures:
+
+- **Proxy mode**: The local cluster connects through a reverse proxy or load balancer, which forwards traffic to the appropriate nodes in the remote cluster. You can configure this mode using either the {{kib}} UI or the {{es}} API.
+- **Sniff mode**: The local cluster discovers the remote cluster’s gateway nodes and connects to them directly. This mode can only be configured using the {{es}} API.
+
+::::{note}
+Connection modes work independently of [security models](./security-models.md). Both connection modes are compatible with either security model.
+::::
+
+The choice between proxy and sniff mode depends on your network architecture and deployment type.
+
+- **Self-managed clusters:** If direct connections on the publish addresses between {{es}} nodes in both clusters are possible, you can use sniff mode. If direct connectivity is difficult to implement—for example, when clusters are separated by NAT, firewalls, or containerized environments—you can place a reverse proxy or load balancer in front of the remote cluster and use proxy mode instead.
+
+- **Managed environments ({{ece}}, {{ech}}, {{eck}}):** Direct node-to-node connectivity is generally not feasible, so these deployments always rely on the proxy connection mode.
+
+The following sections describe each mode in more detail.
+
+## Proxy mode
+
+In proxy mode, a cluster alias is registered with a name of your choosing and the address of a TCP (layer 4) reverse proxy specified with the `cluster.remote.<cluster_alias>.proxy_address` setting. You must configure this proxy to route connections to one or more nodes of the remote cluster. The service port to forward traffic to depends on the [security model](./security-models.md) in use, as each model uses a different service port.
+
+When you register a remote cluster using proxy mode, {{es}} opens several TCP connections to the proxy address and uses them to communicate with the remote cluster. In proxy mode, {{es}} disregards the publish addresses of the remote cluster nodes, so those addresses do not need to be accessible to the local cluster.
+
+Proxy mode is not the default connection mode when adding remotes using the {{es}} API, so you must set `cluster.remote.<cluster_alias>.mode: proxy` to use it. Refer to [Proxy mode remote cluster settings](elasticsearch://reference/elasticsearch/configuration-reference/remote-clusters.md#remote-cluster-proxy-settings) for more information about configuring proxy mode.
+
+::::{note}
+Remote clusters configured through **{{kib}}** support only proxy mode. You can’t select a connection mode or configure sniff mode from the UI.
+::::
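As a sketch, registering a hypothetical alias `cluster_b` in proxy mode through the cluster settings API looks like the following. The alias, hostname, and port are illustrative; the port to target depends on the security model in use:

```console
PUT _cluster/settings
{
  "persistent": {
    "cluster": {
      "remote": {
        "cluster_b": {
          "mode": "proxy",
          "proxy_address": "my-proxy.example.com:9443"
        }
      }
    }
  }
}
```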
+
+## Sniff mode
+
+In sniff mode, a cluster alias is registered with a name of your choosing and a list of addresses of *seed* nodes specified with the `cluster.remote.<cluster_alias>.seeds` setting. When you register a remote cluster using sniff mode, {{es}} retrieves from one of the seed nodes the addresses of up to three *gateway nodes*. Each `remote_cluster_client` node in the local {{es}} cluster then opens several TCP connections to the publish addresses of the gateway nodes. This mode therefore requires that the gateway nodes' publish addresses are accessible to nodes in the local cluster.
+
+Sniff mode is the default connection mode when adding a remote cluster through the {{es}} API. Refer to [Sniff mode remote cluster settings](elasticsearch://reference/elasticsearch/configuration-reference/remote-clusters.md#remote-cluster-sniff-settings) for more information about configuring sniff mode.
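For comparison, here is a sketch of registering a hypothetical alias `cluster_b` in sniff mode via the cluster settings API. The seed addresses are illustrative and must be transport publish addresses reachable from the local cluster:

```console
PUT _cluster/settings
{
  "persistent": {
    "cluster": {
      "remote": {
        "cluster_b": {
          "seeds": ["10.0.1.1:9300", "10.0.1.2:9300"]
        }
      }
    }
  }
}
```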
