From a3c4ea52e8ae8305f8186821953ecb4d91eefa1d Mon Sep 17 00:00:00 2001 From: Felix Hennig Date: Thu, 12 Sep 2024 15:36:59 +0200 Subject: [PATCH 1/5] Add descriptions --- docs/modules/zookeeper/pages/index.adoc | 2 +- .../pages/usage_guide/authentication.adoc | 1 + .../pages/usage_guide/encryption.adoc | 7 +++--- .../isolating_clients_with_znodes.adoc | 22 +++++++++++------- .../pages/usage_guide/listenerclass.adoc | 3 ++- .../pages/usage_guide/log_aggregation.adoc | 7 +++--- .../pages/usage_guide/monitoring.adoc | 5 ++-- .../usage_guide/resource_configuration.adoc | 4 +++- .../using_multiple_role_groups.adoc | 23 ++++++++++++------- docs/modules/zookeeper/pages/znodes.adoc | 1 + 10 files changed, 47 insertions(+), 28 deletions(-) diff --git a/docs/modules/zookeeper/pages/index.adoc b/docs/modules/zookeeper/pages/index.adoc index c5854697..c0e760dd 100644 --- a/docs/modules/zookeeper/pages/index.adoc +++ b/docs/modules/zookeeper/pages/index.adoc @@ -1,5 +1,5 @@ = Stackable Operator for Apache ZooKeeper -:description: The Stackable operator for Apache ZooKeeper is a Kubernetes operator that can manage Apache ZooKeeper ensembles. Learn about its features, resources, dependencies and demos, and see the list of supported ZooKeeper versions. +:description: Manage Apache ZooKeeper ensembles with the Stackable Kubernetes operator. Supports ZooKeeper versions, custom images, and integrates with Hadoop, Kafka, and more. 
:keywords: Stackable operator, Hadoop, Apache ZooKeeper, Kubernetes, k8s, operator, metadata, storage, cluster :zookeeper: https://zookeeper.apache.org/ :github: https://github.com/stackabletech/zookeeper-operator/ diff --git a/docs/modules/zookeeper/pages/usage_guide/authentication.adoc b/docs/modules/zookeeper/pages/usage_guide/authentication.adoc index 842f4c02..5a6eded6 100644 --- a/docs/modules/zookeeper/pages/usage_guide/authentication.adoc +++ b/docs/modules/zookeeper/pages/usage_guide/authentication.adoc @@ -1,4 +1,5 @@ = Authentication +:description: Enable TLS authentication for ZooKeeper with Stackable's Kubernetes operator. The communication between nodes (server to server) is encrypted via TLS by default. In order to enforce TLS authentication for client-to-server communication, you can set an xref:concepts:authentication.adoc[AuthenticationClass] reference in the `spec.clusterConfig.authentication` property. diff --git a/docs/modules/zookeeper/pages/usage_guide/encryption.adoc b/docs/modules/zookeeper/pages/usage_guide/encryption.adoc index 2268b4dd..0634d240 100644 --- a/docs/modules/zookeeper/pages/usage_guide/encryption.adoc +++ b/docs/modules/zookeeper/pages/usage_guide/encryption.adoc @@ -1,8 +1,9 @@ = Encryption +:description: Quorum and client communication in ZooKeeper are encrypted via TLS by default. Customize certificates with the Secret Operator for added security -The quorum and client communication are encrypted by default via TLS. This requires the -xref:secret-operator:index.adoc[Secret Operator] to be present in order to provide certificates. The utilized -certificates can be changed in a top-level config. +The quorum and client communication are encrypted by default via TLS. +This requires the xref:secret-operator:index.adoc[Secret Operator] to be present in order to provide certificates. +The utilized certificates can be changed in a top-level config. 
 [source,yaml]
 ----
diff --git a/docs/modules/zookeeper/pages/usage_guide/isolating_clients_with_znodes.adoc b/docs/modules/zookeeper/pages/usage_guide/isolating_clients_with_znodes.adoc
index 680760c1..e4bf93b5 100644
--- a/docs/modules/zookeeper/pages/usage_guide/isolating_clients_with_znodes.adoc
+++ b/docs/modules/zookeeper/pages/usage_guide/isolating_clients_with_znodes.adoc
@@ -1,8 +1,11 @@
 = Isolating clients with ZNodes
+:description: Isolate clients in ZooKeeper with unique ZNodes for each product. Set up ZNodes for each product and connect them using discovery ConfigMaps.
 
-ZooKeeper is a dependency of many products supported by the Stackable Data Platform. To ensure that all products can use the same ZooKeeper cluster safely, it is important to isolate them which is done using xref:znodes.adoc[].
+ZooKeeper is a dependency of many products supported by the Stackable Data Platform.
+To ensure that all products can use the same ZooKeeper cluster safely, it is important to isolate them, which is done using xref:znodes.adoc[].
 
-This guide shows you how to set up multiple ZNodes to use with different products from the Stackable Data Platform, using Kafka and Druid as an example. For an explanation of the ZNode concept, read the xref:znodes.adoc[] concept page.
+This guide shows you how to set up multiple ZNodes to use with different products from the Stackable Data Platform, using Kafka and Druid as an example.
+For an explanation of the ZNode concept, read the xref:znodes.adoc[] concept page.
 
 == Prerequisites
 
@@ -21,7 +24,8 @@ This guide assumes the ZookeeperCluster is called `my-zookeeper` and is running
 
 === Setting up the ZNodes
 
-To set up a Kafka and Druid instance to use the ZookeeperCluster, two ZNodes are required, one for each product. This guide assumes the Kafka instance is running in the same namespace as the ZooKeeper, while the Druid instance is running in its own namespace called `druid-ns`.
+To set up a Kafka and Druid instance to use the ZookeeperCluster, two ZNodes are required, one for each product.
+This guide assumes the Kafka instance is running in the same namespace as the ZooKeeper, while the Druid instance is running in its own namespace called `druid-ns`.
 
 First, the Druid ZNode:
 
@@ -43,7 +47,8 @@ include::example$usage_guide/znode/example-znode-kafka.yaml[]
 <2> The namespace where the ZNode should be created. Since Kafka is running in the same namespace as ZooKeeper, this is the namespace of `my-zookeeper`.
 <3> The ZooKeeper cluster reference. The namespace is omitted here because the ZooKeeper is in the same namespace as the ZNode object.
 
-The Stackable Operator for ZooKeeper watches for ZookeeperZnode objects. If one is found it creates the ZNode _inside_ the ZooKeeper cluster and also creates a xref:concepts:service_discovery.adoc[discovery ConfigMap] in the same namespace as the ZookeeperZnode with the same name as the ZookeeperZnode.
+The Stackable Operator for ZooKeeper watches for ZookeeperZnode objects.
+If one is found, it creates the ZNode _inside_ the ZooKeeper cluster and also creates a xref:concepts:service_discovery.adoc[discovery ConfigMap] in the same namespace as the ZookeeperZnode with the same name as the ZookeeperZnode.
 
 In this example, two ConfigMaps are created:
 
@@ -63,7 +68,8 @@ include::example$usage_guide/znode/example-znode-discovery.yaml[]
 
 This ConfigMap can then be mounted into other Pods and the `ZOOKEEPER` key can be used to connect to the ZooKeeper instance and the correct ZNode.
 
-All products that need a ZNode can be configured with a `zookeeperConfigMapName` property. As the name implies, this property references the discovery ConfigMap for the requested ZNode.
+All products that need a ZNode can be configured with a `zookeeperConfigMapName` property.
+As the name implies, this property references the discovery ConfigMap for the requested ZNode.
 For Kafka:
 
@@ -105,9 +111,9 @@ You can find out more about the discovery ConfigMap xref:discovery.adoc[] and th
 For security reasons, a unique ZNode path is generated every time the same ZookeeperZnode object is recreated, even if it has the same name.
 
-If a ZookeeperZnode needs to be associated with an existing ZNode path, the field `status.znodePath` can be set to
-the desired path. Note that since this is a subfield of `status`, it must explicitly be updated on the `status` subresource,
-and requires RBAC permissions to replace the `zookeeperznodes/status` resource. For example:
+If a ZookeeperZnode needs to be associated with an existing ZNode path, the field `status.znodePath` can be set to the desired path.
+Note that since this is a subfield of `status`, it must explicitly be updated on the `status` subresource, and requires RBAC permissions to replace the `zookeeperznodes/status` resource.
+For example:
 
 [source,bash]
 ----
diff --git a/docs/modules/zookeeper/pages/usage_guide/listenerclass.adoc b/docs/modules/zookeeper/pages/usage_guide/listenerclass.adoc
index eaa2daa8..037f934f 100644
--- a/docs/modules/zookeeper/pages/usage_guide/listenerclass.adoc
+++ b/docs/modules/zookeeper/pages/usage_guide/listenerclass.adoc
@@ -2,7 +2,8 @@
 Apache ZooKeeper offers an API. The Operator deploys a service called `<name>` (where `<name>` is the name of the ZookeeperCluster) through which ZooKeeper can be reached.
 
-This service can have either the `cluster-internal` or `external-unstable` type. `external-stable` is not supported for ZooKeeper at the moment. Read more about the types in the xref:concepts:service-exposition.adoc[service exposition] documentation at platform level.
+This service can have either the `cluster-internal` or `external-unstable` type. `external-stable` is not supported for ZooKeeper at the moment.
+Read more about the types in the xref:concepts:service-exposition.adoc[service exposition] documentation at platform level.
 This is how the listener class is configured:
diff --git a/docs/modules/zookeeper/pages/usage_guide/log_aggregation.adoc b/docs/modules/zookeeper/pages/usage_guide/log_aggregation.adoc
index d5882558..cc9a529e 100644
--- a/docs/modules/zookeeper/pages/usage_guide/log_aggregation.adoc
+++ b/docs/modules/zookeeper/pages/usage_guide/log_aggregation.adoc
@@ -1,7 +1,7 @@
 = Log aggregation
+:description: The logs can be forwarded to a Vector log aggregator by providing a discovery ConfigMap for the aggregator and by enabling the log agent.
 
-The logs can be forwarded to a Vector log aggregator by providing a discovery
-ConfigMap for the aggregator and by enabling the log agent:
+The logs can be forwarded to a Vector log aggregator by providing a discovery ConfigMap for the aggregator and by enabling the log agent:
 
 [source,yaml]
 ----
@@ -26,5 +26,4 @@ spec:
         replicas: 1
 ----
 
-Further information on how to configure logging, can be found in
-xref:concepts:logging.adoc[].
+Further information on how to configure logging can be found in xref:concepts:logging.adoc[].
diff --git a/docs/modules/zookeeper/pages/usage_guide/monitoring.adoc b/docs/modules/zookeeper/pages/usage_guide/monitoring.adoc
index 882cdcd1..f50ad9f0 100644
--- a/docs/modules/zookeeper/pages/usage_guide/monitoring.adoc
+++ b/docs/modules/zookeeper/pages/usage_guide/monitoring.adoc
@@ -1,4 +1,5 @@
 = Monitoring
+:description: The managed ZooKeeper instances are automatically configured to export Prometheus metrics.
 
-The managed ZooKeeper instances are automatically configured to export Prometheus metrics. See
-xref:operators:monitoring.adoc[] for more details.
+The managed ZooKeeper instances are automatically configured to export Prometheus metrics.
+See xref:operators:monitoring.adoc[] for more details.
diff --git a/docs/modules/zookeeper/pages/usage_guide/resource_configuration.adoc b/docs/modules/zookeeper/pages/usage_guide/resource_configuration.adoc index 8b94149b..4ccde3a2 100644 --- a/docs/modules/zookeeper/pages/usage_guide/resource_configuration.adoc +++ b/docs/modules/zookeeper/pages/usage_guide/resource_configuration.adoc @@ -1,8 +1,10 @@ = Storage and resource configuration +:description: Configure ZooKeeper storage with PersistentVolumeClaims and set resource requests for CPU, memory, and storage. +:pvcs: https://kubernetes.io/docs/concepts/storage/persistent-volumes == Storage for data volumes -You can mount volumes where data is stored by specifying https://kubernetes.io/docs/concepts/storage/persistent-volumes[PersistentVolumeClaims] for each individual role group: +You can mount volumes where data is stored by specifying {pvcs}[PersistentVolumeClaims] for each individual role group: [source,yaml] ---- diff --git a/docs/modules/zookeeper/pages/usage_guide/using_multiple_role_groups.adoc b/docs/modules/zookeeper/pages/usage_guide/using_multiple_role_groups.adoc index 82a8805e..2ad1709f 100644 --- a/docs/modules/zookeeper/pages/usage_guide/using_multiple_role_groups.adoc +++ b/docs/modules/zookeeper/pages/usage_guide/using_multiple_role_groups.adoc @@ -1,17 +1,23 @@ = Using multiple role groups +:description: ZooKeeper uses myid for server identification. Avoid conflicts in multiple role groups by setting myidOffset for unique IDs in each StatefulSet. +:ordinal-index: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#ordinal-index -// abstract/summary -ZooKeeper uses a unique ID called _myid_ to identify each server in the cluster. The Stackable Operator for Apache ZooKeeper assigns the _myid_ to each Pod from the https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#ordinal-index[ordinal index] given to the Pod by Kubernetes. 
This index is unique over the Pods in the StatefulSet of the xref:concepts:roles-and-role-groups.adoc[role group].
+ZooKeeper uses a unique ID called _myid_ to identify each server in the cluster.
+The Stackable Operator for Apache ZooKeeper assigns the _myid_ to each Pod from the {ordinal-index}[ordinal index] given to the Pod by Kubernetes.
+This index is unique over the Pods in the StatefulSet of the xref:concepts:roles-and-role-groups.adoc[role group].
 
-When using multiple role groups in a cluster, this will lead to different ZooKeeper Pods using the same _myid_. Each role group is represented by its own StatefulSet, and therefore always identified starting with `0`.
+When using multiple role groups in a cluster, this will lead to different ZooKeeper Pods using the same _myid_.
+Each role group is represented by its own StatefulSet, and therefore always identified starting with `0`.
 
-In order to avoid this _myid_ conflict, a property `myidOffset` needs to be specified in each rolegroup. The `myidOffset` defaults to zero, but if specified will be added to the ordinal index of the Pod.
+In order to avoid this _myid_ conflict, a property `myidOffset` needs to be specified in each role group.
+The `myidOffset` defaults to zero, but if specified will be added to the ordinal index of the Pod.
 
 == Example configuration
 
 Here the property is used on the second role group in a ZooKeeperCluster:
 
-```yaml
+[source,yaml]
+----
 apiVersion: zookeeper.stackable.tech/v1alpha1
 kind: ZookeeperCluster
 metadata:
@@ -25,8 +31,9 @@ spec:
         replicas: 1
       config:
         myidOffset: 10 # <1>
-```
-
+----
 <1> The `myidOffset` property set to 10 for the secondary role group
 
-The `secondary` role group _myid_ starts from id `10`. The `primary` role group will start from `0`. This means, the replicas of the role group `primary` should not be scaled higher than `10` which results in `10` `primary` Pods using a _myid_ from `0` to `9`, followed by the `secondary` Pods starting at _myid_ `10`.
+The `secondary` role group _myid_ starts from id `10`.
+The `primary` role group will start from `0`.
+This means the replicas of the role group `primary` should not be scaled higher than `10`, which results in `10` `primary` Pods using a _myid_ from `0` to `9`, followed by the `secondary` Pods starting at _myid_ `10`.
diff --git a/docs/modules/zookeeper/pages/znodes.adoc b/docs/modules/zookeeper/pages/znodes.adoc
index 22fb9eaa..86432804 100644
--- a/docs/modules/zookeeper/pages/znodes.adoc
+++ b/docs/modules/zookeeper/pages/znodes.adoc
@@ -1,4 +1,5 @@
 = ZNodes
+:description: Manage ZooKeeper ZNodes with the ZookeeperZnode resource. Each client should use a unique root ZNode to prevent conflicts. Network access to ZooKeeper is required.
 
 Apache ZooKeeper organizes all data into a hierarchical system of https://zookeeper.apache.org/doc/r3.9.2/zookeeperProgrammers.html#ch_zkDataModel[ZNodes], which act as both files (they can have data associated with them) and folders (they can contain other ZNodes) when compared to a traditional (POSIX-like) file system.

From 3c460d459b8dd7dd4973fe099e5ebfee0c2eac6d Mon Sep 17 00:00:00 2001
From: Felix Hennig
Date: Tue, 17 Sep 2024 19:03:09 +0200
Subject: [PATCH 2/5] Improve install instructions

---
 .../pages/getting_started/installation.adoc | 35 ++++++++++---------
 1 file changed, 18 insertions(+), 17 deletions(-)

diff --git a/docs/modules/zookeeper/pages/getting_started/installation.adoc b/docs/modules/zookeeper/pages/getting_started/installation.adoc
index 383590a9..a948c965 100644
--- a/docs/modules/zookeeper/pages/getting_started/installation.adoc
+++ b/docs/modules/zookeeper/pages/getting_started/installation.adoc
@@ -1,16 +1,13 @@
 = Installation
+:description: Install the Stackable operator for Apache ZooKeeper using stackablectl or Helm.
 
-On this page you will install the Stackable ZooKeeper Operator.
-
-== Stackable Operators
-
-There are 2 ways to run Stackable Operators
-
-. 
Using xref:management:stackablectl:index.adoc[] (recommended) -. Using Helm - -=== stackablectl +Follow the instructions below to install the Stackable operator for Apache ZooKeeper, using either `stackablectl` or Helm: +[tabs] +==== +stackablectl (recommended):: ++ +-- `stackablectl` is the command line tool to interact with Stackable operators and our recommended way to install Operators. Follow the xref:management:stackablectl:installation.adoc[installation steps] for your platform. @@ -28,10 +25,13 @@ include::example$getting_started/code/install_output.txt[] TIP: Consult the xref:management:stackablectl:quickstart.adoc[] to learn more about how to use `stackablectl`. For example, you can use the `--cluster kind` flag to create a Kubernetes cluster with link:https://kind.sigs.k8s.io/[kind]. +-- -=== Helm - -You can also use Helm to install the Operators. Add the Stackable Helm repository: +Helm:: ++ +-- +You can also use Helm to install the Operators. +Add the Stackable Helm repository: [source,bash] ---- include::example$getting_started/code/getting_started.sh[tag=helm-add-repo] @@ -43,9 +43,10 @@ Then install the Stackable Operators: include::example$getting_started/code/getting_started.sh[tag=helm-install-operators] ---- -Helm will deploy the Operators in a Kubernetes Deployment and apply the CRDs for the ZooKeeper cluster. You are now -ready to deploy Apache ZooKeeper in Kubernetes. +Helm deploys the operators in Kubernetes Deployments and applies the CRDs for the ZooKeeperCluster Stacklet. +-- +==== -== What's next +== What's next? -xref:getting_started/first_steps.adoc[Set up a ZooKeeper cluster]. +Use the operator to xref:getting_started/first_steps.adoc[deploy a ZooKeeper Stacklet]. 
From 001bb3912840fd081617055227927b0d19c3d068 Mon Sep 17 00:00:00 2001
From: Felix Hennig
Date: Tue, 17 Sep 2024 19:08:53 +0200
Subject: [PATCH 3/5] Formatting

---
 .../pages/getting_started/first_steps.adoc | 33 +++++++++++++------
 1 file changed, 23 insertions(+), 10 deletions(-)

diff --git a/docs/modules/zookeeper/pages/getting_started/first_steps.adoc b/docs/modules/zookeeper/pages/getting_started/first_steps.adoc
index deb8a48f..48081467 100644
--- a/docs/modules/zookeeper/pages/getting_started/first_steps.adoc
+++ b/docs/modules/zookeeper/pages/getting_started/first_steps.adoc
@@ -4,7 +4,8 @@ Now that the operator is installed it is time to deploy a ZooKeeper cluster and
 
 == Deploy ZooKeeper
 
-The ZooKeeper cluster is deployed with a very simple resource definition. Create a file called `zookeeper.yaml`:
+The ZooKeeper cluster is deployed with a very simple resource definition.
+Create a file called `zookeeper.yaml`:
 
 [source,yaml]
 include::example$getting_started/code/zookeeper.yaml[]
@@ -13,12 +14,15 @@ and apply it:
 [source,bash]
 include::example$getting_started/code/getting_started.sh[tag=install-zookeeper]
 
-The operator will create a ZooKeeper cluster with two replicas. Use kubectl to observe the status of the cluster:
+The operator will create a ZooKeeper cluster with two replicas.
+Use kubectl to observe the status of the cluster:
 
 [source,bash]
 include::example$getting_started/code/getting_started.sh[tag=watch-zookeeper-rollout]
 
-The Operator deploys readiness probes to make sure the replicas are ready and established a quorum. Only then will the StatefulSet actually be marked as `Ready`. You will see
+The Operator deploys readiness probes to make sure the replicas are ready and have established a quorum.
+Only then will the StatefulSet actually be marked as `Ready`.
+You will see
 
 ----
 partitioned roll out complete: 2 new pods have been updated...
@@ -28,12 +32,15 @@ The ZooKeeper cluster is now ready.
== Deploy a ZNode -ZooKeeper manages its data in a hierarchical node system. You can look at the nodes using the zkCli tool. It is included inside the Stackable ZooKeeper container, and you can invoke it using `kubectl run`: +ZooKeeper manages its data in a hierarchical node system. +You can look at the nodes using the zkCli tool. +It is included inside the Stackable ZooKeeper container, and you can invoke it using `kubectl run`: [source,bash] include::example$getting_started/code/getting_started.sh[tag=zkcli-ls] -NOTE: You might wonder why the logs are used instead of the output from `kubectl run`. This is because `kubectl run` sometimes loses lines of the output, a link:https://github.com/kubernetes/kubernetes/issues/27264[known issue]. +NOTE: You might wonder why the logs are used instead of the output from `kubectl run`. +This is because `kubectl run` sometimes loses lines of the output, a link:https://github.com/kubernetes/kubernetes/issues/27264[known issue]. Among the log output you will see the current list of nodes in the root directory `/`: @@ -45,7 +52,8 @@ Among the log output you will see the current list of nodes in the root director The `zookeeper` node contains ZooKeeper configuration data. It is useful to use different nodes for different applications using ZooKeeper, and the Stackable Operator uses xref:znodes.adoc[ZNodes] for this. -ZNodes are created with manifest files of the kind `ZookeeperZnode`. Create a file called `znode.yaml` with the following contents: +ZNodes are created with manifest files of the kind `ZookeeperZnode`. +Create a file called `znode.yaml` with the following contents: [source,yaml] include::example$getting_started/code/znode.yaml[] @@ -68,7 +76,8 @@ and the ZNode has appeared in the output: == The discovery ConfigMap -The operator creates a ConfigMap with connection information that has the same name as the ZNode - in this case `simple-znode`. 
Have a look at it using +The operator creates a ConfigMap with connection information that has the same name as the ZNode - in this case `simple-znode`. +Have a look at it using [source,bash] kubectl describe configmap simple-znode @@ -87,11 +96,15 @@ ZOOKEEPER_HOSTS: simple-zk-server-default-0.simple-zk-server-default.default.svc.cluster.local:2282,simple-zk-server-default-1.simple-zk-server-default.default.svc.cluster.local:2282 The `ZOOKEEPER` entry contains a ZooKeeper connection string that you can use to connect to this specific ZNode. -The `ZOOKEEPER_CHROOT` and `ZOOKEEPER_HOSTS` entries contain the node name and hosts list respectively. You can use these three entries mounted into a pod to connect to ZooKeeper at this specific ZNode and read/write in that ZNode directory. +The `ZOOKEEPER_CHROOT` and `ZOOKEEPER_HOSTS` entries contain the node name and hosts list respectively. +You can use these three entries mounted into a pod to connect to ZooKeeper at this specific ZNode and read/write in that ZNode directory. -Great! This step concludes the Getting started guide. You have installed the ZooKeeper Operator and its dependencies and set up your first ZooKeeper cluster as well as your first ZNode. +Great! +This step concludes the Getting started guide. +You have installed the ZooKeeper Operator and its dependencies and set up your first ZooKeeper cluster as well as your first ZNode. == What's next -Have a look at the xref:usage_guide/index.adoc[] to learn more about configuration options for your ZooKeeper cluster like setting up encryption or authentication. You can also have a look at the xref:znodes.adoc[] page to learn more about ZNodes. +Have a look at the xref:usage_guide/index.adoc[] to learn more about configuration options for your ZooKeeper cluster like setting up encryption or authentication. +You can also have a look at the xref:znodes.adoc[] page to learn more about ZNodes. 
From 50a71e9e861fc9bdaa818a59451dec84cdf79972 Mon Sep 17 00:00:00 2001
From: Felix Hennig
Date: Wed, 18 Sep 2024 15:20:40 +0200
Subject: [PATCH 4/5] Add pod-overrides info

---
 .../usage_guide/configuration_environment_overrides.adoc | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/docs/modules/zookeeper/pages/usage_guide/configuration_environment_overrides.adoc b/docs/modules/zookeeper/pages/usage_guide/configuration_environment_overrides.adoc
index cf5eda31..68328695 100644
--- a/docs/modules/zookeeper/pages/usage_guide/configuration_environment_overrides.adoc
+++ b/docs/modules/zookeeper/pages/usage_guide/configuration_environment_overrides.adoc
@@ -85,3 +85,8 @@ servers:
     default:
       replicas: 1
 ----
+
+== Pod overrides
+
+The ZooKeeper operator also supports Pod overrides, allowing you to override any property that you can set on a Kubernetes Pod.
+Read the xref:concepts:overrides.adoc#pod-overrides[Pod overrides documentation] to learn more about this feature.

From 2fd4bf0a83d2a6472282bc40fcbb9719b9a38ca1 Mon Sep 17 00:00:00 2001
From: Razvan-Daniel Mihai <84674+razvan@users.noreply.github.com>
Date: Thu, 19 Sep 2024 09:16:01 +0200
Subject: [PATCH 5/5] mention OpenShift in the installation guide

---
 .../modules/zookeeper/pages/getting_started/installation.adoc | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/docs/modules/zookeeper/pages/getting_started/installation.adoc b/docs/modules/zookeeper/pages/getting_started/installation.adoc
index a948c965..b562611f 100644
--- a/docs/modules/zookeeper/pages/getting_started/installation.adoc
+++ b/docs/modules/zookeeper/pages/getting_started/installation.adoc
@@ -1,7 +1,9 @@
 = Installation
 :description: Install the Stackable operator for Apache ZooKeeper using stackablectl or Helm.
 
-Follow the instructions below to install the Stackable operator for Apache ZooKeeper, using either `stackablectl` or Helm:
+There are multiple ways to install the Stackable Operator for Apache ZooKeeper.
+`stackablectl` is the preferred way, but Helm is also supported.
+OpenShift users may prefer installing the operator from the Red Hat Certified Operator catalog using the OpenShift web console.
 
 [tabs]
 ====