
Commit 9aca972

Merge branch 'main' into internal-452-known-issue
2 parents b9d1067 + ea0402e commit 9aca972

9 files changed, +108 -21 lines changed

deploy-manage/deploy/cloud-on-k8s.md

Lines changed: 7 additions & 0 deletions

@@ -75,6 +75,13 @@ ECK is compatible with the following Kubernetes distributions and related techno
 
 ::::{tab-set}
 
+:::{tab-item} ECK 3.2
+* Kubernetes 1.30-1.34
+* OpenShift 4.15-4.19
+* Google Kubernetes Engine (GKE), Azure Kubernetes Service (AKS), and Amazon Elastic Kubernetes Service (EKS)
+* Helm: {{eck_helm_minimum_version}}+
+:::
+
 :::{tab-item} ECK 3.1
 * Kubernetes 1.29-1.33
 * OpenShift 4.15-4.19

deploy-manage/deploy/cloud-on-k8s/configure-eck.md

Lines changed: 1 addition & 0 deletions

@@ -97,6 +97,7 @@ data:
 enable-leader-election: true
 elasticsearch-observation-interval: 10s
 ubi-only: false
+password-length: 24
 ```
 
 Alternatively, you can edit the `elastic-operator` StatefulSet and add flags to the `args` section of the operator container — which will trigger an automatic restart of the operator pod by the StatefulSet controller.
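For context, the new `password-length` key sits inside the operator's ConfigMap. A sketch of the surrounding resource, assuming a typical ECK install (the `eck.yaml` key, namespace, and metadata are assumptions, not taken from this diff):

```yaml
# Sketch only: an elastic-operator ConfigMap carrying the new flag.
# Everything outside the data keys shown in the diff is an assumption.
apiVersion: v1
kind: ConfigMap
metadata:
  name: elastic-operator
  namespace: elastic-system
data:
  eck.yaml: |-
    enable-leader-election: true
    elasticsearch-observation-interval: 10s
    ubi-only: false
    password-length: 24
```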

deploy-manage/deploy/cloud-on-k8s/pod-disruption-budget.md

Lines changed: 45 additions & 9 deletions

@@ -12,9 +12,47 @@ products:
 
 A [Pod Disruption Budget](https://kubernetes.io/docs/tasks/run-application/configure-pdb/) (PDB) allows you to limit the disruption to your application when its pods need to be rescheduled for some reason such as upgrades or routine maintenance work on the Kubernetes nodes.
 
-ECK manages a default PDB per {{es}} resource. It allows one {{es}} Pod to be taken down, as long as the cluster has a `green` health. Single-node clusters are not considered highly available and can always be disrupted.
+{{eck}} manages either a single default PDB or multiple PDBs per {{es}} resource, depending on the license level of the ECK installation.
 
-In the {{es}} specification, you can change the default behavior as follows:
+:::{note}
+In {{eck}} 3.1 and earlier, all clusters follow the [default PodDisruptionBudget rules](#default-pdb-rules), regardless of license type.
+:::
+
+## Advanced rules (Enterprise license required)
+```{applies_to}
+deployment:
+  eck: ga 3.2
+```
+
+In {{es}} clusters managed by ECK and licensed with an Enterprise license, a separate PDB is created for each type of `nodeSet` defined in the manifest. This setup allows Kubernetes upgrade or maintenance operations to be executed more quickly. Each PDB permits one {{es}} Pod per `nodeSet` to be disrupted at a time, provided the {{es}} cluster maintains the health status described in the following table:
+
+| Role | Cluster health required | Notes |
+|------|------------------------|-------|
+| master | Yellow | |
+| data | Green | All data roles are grouped together into a single PDB, except for `data_frozen`. |
+| data_frozen | Yellow | Since frozen data tier nodes only host partially mounted indices backed by searchable snapshots, additional disruptions are allowed. |
+| ingest | Yellow | |
+| ml | Yellow | |
+| coordinating | Yellow | |
+| transform | Yellow | |
+| remote_cluster_client | Yellow | |
+
+Single-node clusters are not considered highly available and can always be disrupted, regardless of license type.
+
+## Default rules (Basic license) [default-pdb-rules]
+:::{note}
+In {{eck}} 3.1 and earlier, all clusters follow this behavior regardless of license type.
+:::
+
+In {{eck}} clusters that do not have an Enterprise license, one {{es}} Pod can be taken down at a time, as long as the cluster has a health status of `green`. Single-node clusters are not considered highly available and can always be disrupted.
+
+## Overriding the default behavior
+
+In the {{es}} specification, you can change the default behavior in two ways: by fully overriding the PodDisruptionBudget within the {{es}} spec, or by disabling the default PodDisruptionBudget and specifying one or more PodDisruptionBudgets of your own.
+
+### Specify your own PodDisruptionBudget [k8s-specify-own-pdb]
+
+You can fully override the default PodDisruptionBudget by specifying your own PodDisruptionBudget in the {{es}} spec.
 
 ```yaml
 apiVersion: elasticsearch.k8s.elastic.co/v1

@@ -34,14 +72,15 @@ spec:
 elasticsearch.k8s.elastic.co/cluster-name: quickstart
 ```
 
-::::{note}
+This causes the ECK operator to create only the PodDisruptionBudget defined in the spec. It will not create any additional PodDisruptionBudgets.
+
+::::{note}
 [`maxUnavailable`](https://kubernetes.io/docs/tasks/run-application/configure-pdb/#arbitrary-controllers-and-selectors) cannot be used with an arbitrary label selector, therefore `minAvailable` is used in this example.
 ::::
 
+### Create a PodDisruptionBudget per nodeSet [k8s-pdb-per-nodeset]
 
-## Pod disruption budget per nodeset [k8s-pdb-per-nodeset]
-
-You can specify a PDB per nodeset or node role.
+You can specify a PDB per `nodeSet` or node role.
 
 ```yaml subs=true
 apiVersion: elasticsearch.k8s.elastic.co/v1

@@ -81,6 +120,3 @@ spec:
 4. Pod disruption budget applies on all master nodes.
 5. Specify pod disruption budget to have 1 hot node available.
 6. Pod disruption budget applies on nodes of the same nodeset.
-
-
-
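The second override path described in this diff, disabling the default PDB so you can manage your own, follows the existing ECK convention of declaring an empty `podDisruptionBudget` in the spec. A minimal sketch, assuming the quickstart cluster name and treating the version number as a placeholder:

```yaml
# Sketch: disable ECK's default PDB with an empty podDisruptionBudget,
# then create and manage your own PDB objects separately.
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 9.1.0  # placeholder
  podDisruptionBudget: {}
```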

deploy-manage/upgrade/orchestrator/upgrade-cloud-on-k8s.md

Lines changed: 2 additions & 4 deletions

@@ -100,11 +100,9 @@ This will update the ECK installation to the latest binary and update the CRDs a
 
 Upgrading the operator results in a one-time update to existing managed resources in the cluster. This potentially triggers a rolling restart of pods by Kubernetes to apply those changes. The following list contains the ECK operator versions that would cause a rolling restart after they have been installed.
 
-```
-1.6, 1.9, 2.0, 2.1, 2.2, 2.4, 2.5, 2.6, 2.7, 2.8, 2.14, 3.1 <1>
-```
+1.6, 1.9, 2.0, 2.1, 2.2, 2.4, 2.5, 2.6, 2.7, 2.8, 2.14, 3.1^1^, 3.2^1^
 
-1. The restart when upgrading to version 3.1 happens only for applications using [stack monitoring](/deploy-manage/monitor/stack-monitoring/eck-stack-monitoring.md).
+^1^ The restart when upgrading to versions 3.1 and 3.2 happens only for applications using [stack monitoring](/deploy-manage/monitor/stack-monitoring/eck-stack-monitoring.md).
 
 ::::{note}
 Stepping over one of these versions, for example, upgrading ECK from 2.6 to 2.9, still triggers a rolling restart.
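The "stepping over" rule in the note above can be made concrete with a small sketch. The version list comes from this page; the helper function is hypothetical, not an ECK tool:

```python
# Operator versions whose installation triggers a rolling restart
# (from the list on this page).
RESTART_VERSIONS = ["1.6", "1.9", "2.0", "2.1", "2.2", "2.4", "2.5",
                    "2.6", "2.7", "2.8", "2.14", "3.1", "3.2"]

def parse(v: str) -> tuple[int, int]:
    major, minor = v.split(".")
    return int(major), int(minor)

def crossed_restart_versions(frm: str, to: str) -> list[str]:
    # A rolling restart is expected if the upgrade passes over (or lands on)
    # any listed version, even if that version is never installed.
    return [v for v in RESTART_VERSIONS if parse(frm) < parse(v) <= parse(to)]

print(crossed_restart_versions("2.6", "2.9"))  # ['2.7', '2.8']
```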

deploy-manage/users-roles/cluster-or-deployment-auth/jwt.md

Lines changed: 16 additions & 2 deletions

@@ -121,10 +121,22 @@ Client authentication is enabled by default for the JWT realms. Disabling client
 : Indicates that {{es}} should use the `RS256` or `HS256` signature algorithms to verify the signature of the JWT from the JWT issuer.
 
 `pkc_jwkset_path`
-: The file name or URL to a JSON Web Key Set (JWKS) with the public key material that the JWT Realm uses for verifying token signatures. A value is considered a file name if it does not begin with `https`. The file name is resolved relative to the {{es}} configuration directory. If a URL is provided, then it must begin with `https://` (`http://` is not supported). {{es}} automatically caches the JWK set and will attempt to refresh the JWK set upon signature verification failure, as this might indicate that the JWT Provider has rotated the signing keys.
+: The file name or URL to a JSON Web Key Set (JWKS) with the public key material that the JWT Realm uses for verifying token signatures. A value is considered a file name if it does not begin with `https`. The file name is resolved relative to the {{es}} configuration directory. If a URL is provided, then it must begin with `https://` (`http://` is not supported). {{es}} automatically caches the JWK set and will attempt to refresh the JWK set upon signature verification failure, as this might indicate that the JWT Provider has rotated the signing keys. Background JWKS reloading can also be configured with the `pkc_jwkset_reload.enabled` setting. This ensures that rotated keys are automatically discovered and used to verify JWT signatures.
+
+`pkc_jwkset_reload.enabled` {applies_to}`stack: ga 9.3`
+: Indicates whether JWKS background reloading is enabled. Defaults to `false`.
+
+`pkc_jwkset_reload.file_interval` {applies_to}`stack: ga 9.3`
+: Specifies the reload interval for file-based JWKS. Defaults to `5m`.
+
+`pkc_jwkset_reload.url_interval_min` {applies_to}`stack: ga 9.3`
+: Specifies the minimum reload interval for URL-based JWKS. The `Expires` and `Cache-Control` HTTP response headers inform the reload interval. This setting is the lower bound of what is considered, and it is also the default interval in the absence of useful response headers. Defaults to `1h`.
+
+`pkc_jwkset_reload.url_interval_max` {applies_to}`stack: ga 9.3`
+: Specifies the maximum reload interval for URL-based JWKS. This setting is the upper bound of what is considered from the response headers. Defaults to `5d`.
 
 `claims.principal`
-: The name of the JWT claim that contains the user’s principal (username).
+: The name of the JWT claim that contains the user’s principal. Defaults to `username`.
 
 ::::
 

@@ -434,6 +446,8 @@ PKC JSON Web Token Key Sets (JWKS) can contain public RSA and EC keys. HMAC JWKS
 
 JWT realms load a PKC JWKS and an HMAC JWKS or HMAC UTF-8 JWK at startup. JWT realms can also reload PKC JWKS contents at runtime; a reload is triggered by signature validation failures.
 
+JWT realms can also be configured to reload a PKC JWKS periodically in the background.
+
 ::::{note}
 HMAC JWKS or HMAC UTF-8 JWK reloading is not supported at this time.
 ::::

deploy-manage/users-roles/cluster-or-deployment-auth/managed-credentials-eck.md

Lines changed: 20 additions & 3 deletions

@@ -50,7 +50,11 @@ spec:
 count: 1
 ```
 
-## Rotate auto-generated credentials [k8s-rotate-credentials]
+## ECK auto-generated credentials
+
+{{eck}} auto-generates credentials for [the `elastic` user](#k8s-default-elastic-user) and other file-based users. These credentials are stored in Kubernetes Secrets and are labeled with `eck.k8s.elastic.co/credentials=true`.
+
+### Rotate auto-generated credentials [k8s-rotate-credentials]
 
 You can force the auto-generated credentials to be regenerated with new values by deleting the appropriate Secret. For example, to change the password for the `elastic` user from the [quickstart example](/deploy-manage/deploy/cloud-on-k8s/elasticsearch-deployment-quickstart.md), use the following command:
 

@@ -62,7 +66,6 @@ kubectl delete secret quickstart-es-elastic-user
 If you are using the `elastic` user credentials in your own applications, they will fail to connect to {{es}} and {{kib}} after you run this command. It is not recommended to use `elastic` user credentials for production use cases. Always [create your own users with restricted roles](../../../deploy-manage/users-roles/cluster-or-deployment-auth/native.md) to access {{es}}.
 ::::
 
-
 To regenerate all auto-generated credentials in a namespace, run the following command:
 
 ```sh

@@ -73,6 +76,20 @@ kubectl delete secret -l eck.k8s.elastic.co/credentials=true
 This command regenerates auto-generated credentials of **all** {{stack}} applications in the namespace.
 ::::
 
+### Control the length of auto-generated passwords
+
+```{applies_to}
+eck: ga 3.2
+```
+
+:::{note}
+The ability to control the length of passwords generated by {{eck}} requires an Enterprise license.
+:::
+
+You can control the length of auto-generated passwords in {{eck}} installations by setting either `config.policies.passwords.length` in your Helm chart values or `password-length` in the `elastic-operator` `ConfigMap` when installing with YAML manifests. Refer to the [operator configuration documentation](../../deploy/cloud-on-k8s/configure-eck.md) for details on managing these settings.
+
+Changing these values does not update existing passwords. To rotate current credentials, refer to [Rotate auto-generated credentials](#k8s-rotate-credentials).
+
 ## Creating custom users
 
 {{eck}} provides functionality to facilitate custom user creation through various authentication realms. You can create users using the native realm, file realm, or external authentication methods.

@@ -99,4 +116,4 @@ For more information, refer to [External authentication](/deploy-manage/users-ro
 
 ECK facilitates file-based role management through Kubernetes secrets containing the roles specification. Alternatively, you can use the Role management API or the Role management UI in {{kib}}.
 
-Refer to [Managing custom roles](/deploy-manage/users-roles/cluster-or-deployment-auth/defining-roles.md#managing-custom-roles) for details and ECK based examples.
+Refer to [Managing custom roles](/deploy-manage/users-roles/cluster-or-deployment-auth/defining-roles.md#managing-custom-roles) for details and ECK based examples.
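To make the `password-length` semantics from this diff concrete, here is a minimal sketch of generating a fixed-length random password with Python's standard library. It is an illustration only, not ECK's actual generator, and the alphanumeric alphabet is an assumption:

```python
import secrets
import string

# Illustrative only: produce an alphanumeric password of a configurable
# length, similar in spirit to the password-length operator setting.
def generate_password(length: int = 24) -> str:
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

pw = generate_password(24)
print(len(pw))  # 24
```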

explore-analyze/report-and-share/automating-report-generation.md

Lines changed: 6 additions & 3 deletions

@@ -244,9 +244,12 @@ Save time by setting up a recurring task that automatically generates reports an
 
 A message appears, indicating that the schedule is available on the **Reporting** page. From the **Reporting** page, click on the **Schedules** tab to view details for the newly-created schedule.
 
-::::{important}
-Note that you cannot edit or delete a schedule after you create it. To stop the schedule from running, you must disable it. Disabling a schedule permanently stops it from running. To restart it, you must create a new schedule.
-::::
+### Stop scheduled reports [stop-scheduled-reports]
+
+To stop a scheduled report, you can take the following actions from the **Schedules** tab on the **Reporting** page:
+
+- **Disable schedule**: {applies_to}`stack: ga 9.1` Disabling a schedule allows you to keep a record of it on the **Reporting** page, but permanently turns the schedule off. To restart the schedule, you must create a new one.
+- **Delete schedule**: {applies_to}`stack: ga 9.3` Deleting a schedule permanently stops it and removes the schedule's record from the **Reporting** page. You can't recover a deleted schedule.
 
 ### Scheduled reports limitations [scheduled-reports-limitations]
 

solutions/observability/incident-management/create-metric-threshold-rule.md

Lines changed: 7 additions & 0 deletions

@@ -47,6 +47,13 @@ When you select **Alert me if there’s no data**, the rule is triggered if the
 
 The **Filters** control the scope of the rule. If used, the rule will only evaluate metric data that matches the query in this field. In this example, the rule will only alert on metrics reported from a Cloud region called `us-east`.
 
+::::{note}
+If you've made a rule with the [create rule API](https://www.elastic.co/docs/api/doc/kibana/operation/operation-post-alerting-rule-id) and added Query DSL filters using the `filterQuery` parameter, the filters won't appear in the UI for editing a rule. As a workaround, manually re-add the filters through the UI and save the rule. As you're modifying the rule's filters from the UI, be mindful of the following:
+
+- The **Filter** field only accepts KQL syntax, meaning you may need to manually convert your Query DSL filters to KQL.
+- After you save the rule, filters you've added to the **Filter** field are converted appropriately and specified in the rule's `filterQuery` parameter.
+::::
+
 The **Group alerts by** creates an instance of the alert for every unique value of the `field` added. For example, you can create a rule per host or every mount point of each host. You can also add multiple fields. In this example, the rule will individually track the status of each `host.name` in your infrastructure. You will only receive an alert about `host-1`, if `host.name: host-1` passes the threshold, but `host-2` and `host-3` do not.
 
 When you select **Alert me if a group stops reporting data**, the rule is triggered if a group that previously reported metrics does not report them again over the expected time period.
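The Query-DSL-to-KQL conversion mentioned in the note above is manual, but for the simplest `term` filters the mapping is mechanical. A hypothetical sketch (this helper is not part of Kibana, and real Query DSL can be far more complex):

```python
# Hypothetical helper: convert a flat Query DSL `term` filter to KQL.
# Anything beyond a single-field term filter needs manual conversion.
def term_to_kql(dsl: dict) -> str:
    if set(dsl) != {"term"} or len(dsl["term"]) != 1:
        raise ValueError("only single-field term filters are supported")
    (field, value), = dsl["term"].items()
    return f'{field}: "{value}"'

print(term_to_kql({"term": {"cloud.region": "us-east"}}))  # cloud.region: "us-east"
```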

solutions/security/get-started/_snippets/agentless-integrations-faq.md

Lines changed: 4 additions & 0 deletions

@@ -69,3 +69,7 @@ When you create a new agentless CSPM integration, a new agent policy appears wit
 2. Go to the CSPM Integration’s **Integration policies** tab.
 3. Find the integration policy for the integration you want to delete. Click **Actions**, then **Delete integration**.
 4. Confirm by clicking **Delete integration** again.
+
+## Can agentless integrations use a specific range of static IP addresses for configuring allow and deny rules for traffic?
+
+No, agentless integrations cannot use a specific range of static IP addresses for configuring ingress and egress allow and deny rules.
