Merged
4 changes: 2 additions & 2 deletions deploy-manage/autoscaling/autoscaling-in-eck.md
@@ -47,7 +47,7 @@ kind: ElasticsearchAutoscaler
metadata:
name: autoscaling-sample
spec:
-## The name of the {{es}} cluster to be scaled automatically.
+## The name of the Elasticsearch cluster to be scaled automatically.
elasticsearchRef:
name: elasticsearch-sample
## The autoscaling policies.
@@ -301,7 +301,7 @@ You should adjust those settings manually to match the size of your deployment w

## Autoscaling stateless applications on ECK [k8s-stateless-autoscaling]

-::::{note}
+::::{note}
This section only applies to stateless applications. Check [{{es}} autoscaling](#k8s-autoscaling) for more details about automatically scaling {{es}}.
::::

@@ -262,7 +262,7 @@ To import a JVM trust store:
1. The URL for the bundle ZIP file must always be available. Make sure you host the plugin artefacts internally in a highly available environment.
2. Wildcards are allowed here, since the certificates are independent from the {{es}} version.

-4. (Optional) If you prefer to use a different file name and/or password for the trust store, you also need to add an additional configuration section to the cluster metadata before adding the bundle. This configuration should be added to the `{{es}} cluster data` section of the [advanced configuration](./advanced-cluster-configuration.md) page:
+4. (Optional) If you prefer to use a different file name and/or password for the trust store, you also need to add an additional configuration section to the cluster metadata before adding the bundle. This configuration should be added to the `Elasticsearch cluster data` section of the [advanced configuration](./advanced-cluster-configuration.md) page:

```sh
"jvm_trust_store": {
@@ -72,7 +72,7 @@ Verify that required traffic is allowed. Check the [Networking prerequisites](ec
```

4. Install Podman:

* For Podman 4

* Install the latest available version `4.*` using dnf.
@@ -322,7 +322,7 @@ Verify that required traffic is allowed. Check the [Networking prerequisites](ec
vm.max_map_count=262144
# enable forwarding so the Docker networking works as expected
net.ipv4.ip_forward=1
-# Decrease the maximum number of TCP retransmissions to 5 as recommended for {{es}} TCP retransmission timeout.
+# Decrease the maximum number of TCP retransmissions to 5 as recommended for Elasticsearch TCP retransmission timeout.
# See /deploy-manage/deploy/self-managed/system-config-tcpretries.md
net.ipv4.tcp_retries2=5
# Make sure the host doesn't swap too early
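Kernel settings like the ones in this hunk are typically placed in a drop-in file and loaded with `sysctl -p`. A minimal sketch of that workflow; the file path and name are illustrative assumptions, not part of this change (on a real host the file would live under `/etc/sysctl.d/`):

```shell
# Write the recommended kernel settings to a drop-in file and sanity-check it.
# The /tmp path is an assumption for illustration only.
CONF="${CONF:-/tmp/70-cloud-enterprise.conf}"

cat > "$CONF" <<'EOF'
vm.max_map_count=262144
net.ipv4.ip_forward=1
net.ipv4.tcp_retries2=5
EOF

# On the real host, apply with: sudo sysctl -p "$CONF"
# Here we only confirm each expected key=value pair made it into the file.
for kv in vm.max_map_count=262144 net.ipv4.ip_forward=1 net.ipv4.tcp_retries2=5; do
  grep -qx "$kv" "$CONF" || echo "missing: $kv"
done
echo "checked $CONF"
```

Writing a separate drop-in file keeps these overrides out of `/etc/sysctl.conf` and makes them easy to remove when decommissioning the host.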
@@ -159,7 +159,7 @@ You must use XFS and have quotas enabled on all allocators, otherwise disk usage
vm.max_map_count=262144
# enable forwarding so the Docker networking works as expected
net.ipv4.ip_forward=1
-# Decrease the maximum number of TCP retransmissions to 5 as recommended for {{es}} TCP retransmission timeout.
+# Decrease the maximum number of TCP retransmissions to 5 as recommended for Elasticsearch TCP retransmission timeout.
# See https://www.elastic.co/guide/en/elasticsearch/reference/current/system-config-tcpretries.html
net.ipv4.tcp_retries2=5
# Make sure the host doesn't swap too early
@@ -136,7 +136,7 @@ You must use XFS and have quotas enabled on all allocators, otherwise disk usage
vm.max_map_count=262144
# enable forwarding so the Docker networking works as expected
net.ipv4.ip_forward=1
-# Decrease the maximum number of TCP retransmissions to 5 as recommended for {{es}} TCP retransmission timeout.
+# Decrease the maximum number of TCP retransmissions to 5 as recommended for Elasticsearch TCP retransmission timeout.
# See https://www.elastic.co/guide/en/elasticsearch/reference/current/system-config-tcpretries.html
net.ipv4.tcp_retries2=5
# Make sure the host doesn't swap too early
@@ -17,7 +17,7 @@ The process involves two main steps:
2. [Update the {{stack}} pack included in your ECE installation to point to your modified Docker image.](#ece-modify-stack-pack)


-## Before you begin [ece_before_you_begin_5]
+## Before you begin [ece_before_you_begin_5]

Note the following restrictions:

@@ -27,7 +27,7 @@ Note the following restrictions:
* The Dockerfile used in this example includes an optimization process that is relatively expensive and may require a machine with several GB of RAM to run successfully.


-## Extend a {{kib}} Docker image to include additional plugins [ece-create-modified-docker-image]
+## Extend a {{kib}} Docker image to include additional plugins [ece-create-modified-docker-image]

This example runs a Dockerfile to install the [analyze_api_ui plugin](https://github.com/johtani/analyze-api-ui-plugin) or [kibana-enhanced-table](https://github.com/fbaligand/kibana-enhanced-table) into different versions of {{kib}} Docker image. The contents of the Dockerfile varies depending on the version of the {{stack}} pack that you want to modify.

@@ -46,7 +46,7 @@ This example runs a Dockerfile to install the [analyze_api_ui plugin](https://gi
* The version of the image
* The plugin name and version number

-::::{important}
+::::{important}
When you modify a {{kib}} Docker image, make sure you maintain the original image structure and only add the additional plugins.
::::

@@ -73,7 +73,7 @@ This example runs a Dockerfile to install the [analyze_api_ui plugin](https://gi



-## Modify the {{stack}} pack to point to your modified image [ece-modify-stack-pack]
+## Modify the {{stack}} pack to point to your modified image [ece-modify-stack-pack]

Follow these steps to update the {{stack}} pack zip files in your ECE setup to point to your modified Docker image:

@@ -85,7 +85,7 @@ Follow these steps to update the {{stack}} pack zip files in your ECE setup to p

set -eo pipefail

-# Repack a stackpack to modify the {{kib}} image it points to
+# Repack a stackpack to modify the Kibana image it points to

NO_COLOR='\033[0m'
ERROR_COLOR='\033[1;31m'
@@ -152,7 +152,7 @@ Follow these steps to update the {{stack}} pack zip files in your ECE setup to p



-## Common causes of problems [ece-custom-plugin-problems]
+## Common causes of problems [ece-custom-plugin-problems]

1. If the custom Docker image is not available, make sure that the image has been uploaded to your Docker repository or loaded locally onto each ECE allocator.
2. If the container takes a long time to start, the problem might be that the `reoptimize` step in the Dockerfile did not complete successfully.
@@ -102,7 +102,7 @@ Using Docker or Podman as container runtime is a configuration local to the host
```

4. Install Podman:

* For Podman 4

* Install the latest available version `4.*` using dnf.
@@ -352,7 +352,7 @@ Using Docker or Podman as container runtime is a configuration local to the host
vm.max_map_count=262144
# enable forwarding so the Docker networking works as expected
net.ipv4.ip_forward=1
-# Decrease the maximum number of TCP retransmissions to 5 as recommended for {{es}} TCP retransmission timeout.
+# Decrease the maximum number of TCP retransmissions to 5 as recommended for Elasticsearch TCP retransmission timeout.
# See /deploy-manage/deploy/self-managed/system-config-tcpretries.md
net.ipv4.tcp_retries2=5
# Make sure the host doesn't swap too early
@@ -160,7 +160,7 @@ Now that you know how to use the APM keystore and customize the server configura
secret:
defaultMode: 420
optional: false
-secretName: es-ca # This is the secret that holds the {{es}} CA cert
+secretName: es-ca # This is the secret that holds the Elasticsearch CA cert
```


2 changes: 1 addition & 1 deletion deploy-manage/deploy/cloud-on-k8s/configure-eck.md
@@ -119,7 +119,7 @@ If you use [Operator Lifecycle Manager (OLM)](https://github.com/operator-framew

* Update your [Subscription](https://github.com/operator-framework/operator-lifecycle-manager/blob/master/doc/design/subscription-config.md) to mount the ConfigMap under `/conf`.

-```yaml
+```yaml subs=true
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
26 changes: 13 additions & 13 deletions deploy-manage/deploy/cloud-on-k8s/configure-validating-webhook.md
@@ -23,7 +23,7 @@ Validating webhooks are defined using a `ValidatingWebhookConfiguration` object
* Failure policy if the webhook is unavailable (block the operation or continue without validation)


-## Defaults provided by ECK [k8s-webhook-defaults]
+## Defaults provided by ECK [k8s-webhook-defaults]

When using the default `operator.yaml` manifest, ECK is installed with a `ValidatingWebhookConfiguration` configured as follows:

@@ -32,12 +32,12 @@ When using the default `operator.yaml` manifest, ECK is installed with a `Valida
* The operator generates a certificate for the webhook and stores it in a secret named `elastic-webhook-server-cert` in the `elastic-system` namespace. This certificate is automatically rotated by the operator when it is due to expire.


-## Manual configuration [k8s-webhook-manual-config]
+## Manual configuration [k8s-webhook-manual-config]

If you installed ECK without the webhook and want to enable it later on, or if you want to customise the configuration such as providing your own certificates, this section describes the options available to you.


-### Configuration options [k8s-webhook-config-options]
+### Configuration options [k8s-webhook-config-options]

You can customise almost all aspects of the webhook setup by changing the [operator configuration](configure-eck.md).

@@ -51,7 +51,7 @@ You can customise almost all aspects of the webhook setup by changing the [opera
| `webhook-port` | 9443 | Port to listen for incoming validation requests. |


-### Using your own certificates [k8s-webhook-existing-certs]
+### Using your own certificates [k8s-webhook-existing-certs]

This section describes how you can use your own certificates for the webhook instead of letting the operator manage them automatically. There are a few important things to be aware of when going down this route:

@@ -60,7 +60,7 @@ This section describes how you can use your own certificates for the webhook ins
* You must update the `caBundle` fields in the `ValidatingWebhookConfiguration` yourself. This must be done at the beginning and whenever the certificate is rotated.


-#### Use a certificate signed by your own CA [k8s-webhook-own-ca]
+#### Use a certificate signed by your own CA [k8s-webhook-own-ca]

* The certificate must have a Subject Alternative Name (SAN) of the form `<service_name>.<namespace>.svc` (for example `elastic-webhook-server.elastic-system.svc`). A typical OpenSSL command to generate such a certificate would be as follows:

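The command itself is collapsed in this view. A sketch of what such a generation step could look like; the key size, validity period, and output paths are assumptions for illustration, not the exact command from the source page:

```shell
# Sketch: generate a self-signed certificate whose SAN is the webhook
# service DNS name. Key size, validity, and paths are illustrative.
SAN="elastic-webhook-server.elastic-system.svc"

openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout /tmp/tls.key -out /tmp/tls.crt \
  -subj "/CN=${SAN}" \
  -addext "subjectAltName=DNS:${SAN}"

# Confirm the SAN was embedded; this is the name the API server validates.
openssl x509 -in /tmp/tls.crt -noout -text | grep "DNS:${SAN}"
```

Note that `-addext` needs OpenSSL 1.1.1 or later; older versions require passing the `subjectAltName` through a config file instead.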
@@ -81,7 +81,7 @@ This section describes how you can use your own certificates for the webhook ins
* Set `webhook-secret` to the name of the secret you have just created (`elastic-webhook-server-custom-cert`)


-::::{note}
+::::{note}
If you are using the [Helm chart installation method](install-using-helm-chart.md), you can install the operator by running this command:

```sh
@@ -95,7 +95,7 @@ helm install elastic-operator elastic/eck-operator -n elastic-system --create-na



-#### Use a certificate from cert-manager [k8s-webhook-cert-manager]
+#### Use a certificate from cert-manager [k8s-webhook-cert-manager]

This section describes how to use [cert-manager](https://cert-manager.io/) to manage the webhook certificate. It assumes that there is a `ClusterIssuer` named `self-signing-issuer` available.

@@ -138,7 +138,7 @@ This section describes how to use [cert-manager](https://cert-manager.io/) to ma
* Set `webhook-secret` to the name of the certificate secret (`elastic-webhook-server-cert`)


-::::{note}
+::::{note}
If you are using the [Helm chart installation method](install-using-helm-chart.md), you can install the operator by running the following command:

```sh
@@ -152,7 +152,7 @@ helm install elastic-operator elastic/eck-operator -n elastic-system --create-na



-## Disable the webhook [k8s-disable-webhook]
+## Disable the webhook [k8s-disable-webhook]

To disable the webhook, set the [`enable-webhook`](configure-eck.md) operator configuration flag to `false` and remove the `ValidatingWebhookConfiguration` named `elastic-webhook.k8s.elastic.co`:

@@ -161,12 +161,12 @@ kubectl delete validatingwebhookconfigurations.admissionregistration.k8s.io elas
```


-## Troubleshooting [k8s-webhook-troubleshooting]
+## Troubleshooting [k8s-webhook-troubleshooting]

You might get errors in your Kubernetes API server logs indicating that it cannot reach the operator service (`elastic-webhook-server`). This could be because no operator pods are available to handle requests, or because a network policy or a firewall rule is preventing the control plane from accessing the service. To help with troubleshooting, you can change the [`failurePolicy`](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#failure-policy) of the webhook configuration to `Fail`. This will cause create or update operations to fail if there is an error contacting the webhook. Usually the error message will contain helpful information about the failure that will allow you to diagnose the root cause.


-### Resource creation taking too long or timing out [k8s-webhook-troubleshooting-timeouts]
+### Resource creation taking too long or timing out [k8s-webhook-troubleshooting-timeouts]

Webhooks require network connectivity between the Kubernetes API server and the operator. If the creation of an {{es}} resource times out with an error message similar to the following, then the Kubernetes API server might be unable to connect to the webhook to validate the manifest.

@@ -228,10 +228,10 @@ spec:
```


-### Updates failing due to validation errors [k8s-webhook-troubleshooting-validation-failure]
+### Updates failing due to validation errors [k8s-webhook-troubleshooting-validation-failure]

If your attempts to update a resource fail with an error message similar to the following, you can force the webhook to ignore it by removing the `kubectl.kubernetes.io/last-applied-configuration` annotation from your resource.

-```
+```txt subs=true
admission webhook "elastic-es-validation-v1.k8s.elastic.co" denied the request: {{es}}.elasticsearch.k8s.elastic.co "quickstart" is invalid: some-misspelled-field: Invalid value: "some-misspelled-field": some-misspelled-field field found in the kubectl.kubernetes.io/last-applied-configuration annotation is unknown
```
@@ -51,7 +51,7 @@ spec:
# node.store.allow_mmap: false
podTemplate:
spec:
-# This init container ensures that the `max_map_count` setting has been applied before starting {{es}}.
+# This init container ensures that the `max_map_count` setting has been applied before starting Elasticsearch.
# This is not required, but is encouraged when using the previously mentioned Daemonset to set max_map_count.
# Do not use this if setting config.node.store.allow_mmap: false
initContainers:
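The init container described in those comments usually just polls the kernel setting until it reaches the expected value. A sketch of that check loop, written so it can be exercised against a stand-in file; in the real container the file is `/proc/sys/vm/max_map_count` and the target is 262144, as in the comments above:

```shell
# Poll a sysctl value until it reaches the wanted minimum, as an
# ECK-style init container might. The file path is parameterized for
# testing; in the container it would be /proc/sys/vm/max_map_count.
wait_for_max_map_count() {
  proc_file=$1; want=$2; tries=$3
  while [ "$tries" -gt 0 ]; do
    mmc=$(cat "$proc_file" 2>/dev/null || echo 0)
    if [ "$mmc" -ge "$want" ]; then
      echo "max_map_count ok: $mmc"
      return 0
    fi
    tries=$((tries - 1))
    sleep 1
  done
  echo "max_map_count still below $want" >&2
  return 1
}

# In a real init container: wait_for_max_map_count /proc/sys/vm/max_map_count 262144 60
```

In the actual Pod spec the loop would run as the init container's `command`, exiting non-zero so the Pod restarts until the Daemonset has applied the setting.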
@@ -294,7 +294,7 @@ kubectl get -n b scp test-err-stack-config-policy -o jsonpath="{.status}" | jq .
Important events are also reported through Kubernetes events, such as when two config policies conflict or you don’t have the appropriate license:

```sh
-54s Warning Unexpected stackconfigpolicy/config-test conflict: resource {{es}} ns1/cluster-a already configured by StackConfigpolicy default/config-test-2
+54s Warning Unexpected stackconfigpolicy/config-test conflict: resource Elasticsearch ns1/cluster-a already configured by StackConfigpolicy default/config-test-2
```

```sh
@@ -10,14 +10,14 @@ mapped_pages:

Use the following code to create an {{es}} cluster `elasticsearch-sample` and a "passthrough" route to access it:

-::::{note}
+::::{note}
A namespace other than the default namespaces (default, kube-system, kube-*, openshift-*, etc.) is required such that default [Security Context Constraint](https://docs.openshift.com/container-platform/4.12/authentication/managing-security-context-constraints.html) (SCC) permissions are applied automatically. Elastic resources will not work properly in any of the default namespaces.
::::


```shell
cat <<EOF | oc apply -n elastic -f -
-# This sample sets up an {{es}} cluster with an OpenShift route
+# This sample sets up an Elasticsearch cluster with an OpenShift route
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
@@ -37,7 +37,7 @@ metadata:
spec:
#host: elasticsearch.example.com # override if you don't want to use the host that is automatically generated by OpenShift (<route-name>[-<namespace>].<suffix>)
tls:
-termination: passthrough # {{es}} is the TLS endpoint
+termination: passthrough # Elasticsearch is the TLS endpoint
insecureEdgeTerminationPolicy: Redirect
to:
kind: Service
@@ -37,7 +37,7 @@ metadata:
spec:
#host: kibana.example.com # override if you don't want to use the host that is automatically generated by OpenShift (<route-name>[-<namespace>].<suffix>)
tls:
-termination: passthrough # {{kib}} is the TLS endpoint
+termination: passthrough # Kibana is the TLS endpoint
insecureEdgeTerminationPolicy: Redirect
to:
kind: Service