12 changes: 6 additions & 6 deletions docs/modules/zookeeper/pages/getting_started/first_steps.adoc
and apply it:
[source,bash]
include::example$getting_started/code/getting_started.sh[tag=install-zookeeper]

The operator creates a ZooKeeper cluster with two replicas.
Use kubectl to observe the status of the cluster:

[source,bash]
include::example$getting_started/code/getting_started.sh[tag=watch-zookeeper-rollout]

The operator deploys readiness probes to make sure the replicas are ready and have established a quorum.
Only then is the StatefulSet actually marked as `Ready`.
You see

----
partitioned roll out complete: 2 new pods have been updated...
----

[source,bash]
include::example$getting_started/code/getting_started.sh[tag=zkcli-ls]
NOTE: You might wonder why the logs are used instead of the output from `kubectl run`.
This is because `kubectl run` sometimes loses lines of the output, a link:https://github.com/kubernetes/kubernetes/issues/27264[known issue].

Among the log output you see the current list of nodes in the root directory `/`:

[source]
----
----

Have a look at it using
[source,bash]
kubectl describe configmap simple-znode

You see an output similar to this:

[source]
ZOOKEEPER:
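
The key shown above comes from the discovery ConfigMap. As a sketch, the full object might look like this, with placeholder values where the operator fills in the real connection string:

[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: simple-znode        # matches the name of the ZookeeperZnode
data:
  ZOOKEEPER: <zookeeper-host>:<client-port>/<znode-path>  # placeholder connection string
----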
7 changes: 4 additions & 3 deletions docs/modules/zookeeper/pages/getting_started/index.adoc
= Getting started

This guide gets you started with Apache ZooKeeper using the Stackable Operator.
It guides you through the installation of the Operator and its dependencies, setting up your first ZooKeeper cluster and connecting to it as well as setting up your first xref:znodes.adoc[ZNode].

== Prerequisites

You need:

* a Kubernetes cluster
* kubectl
* optional: Helm

Resource sizing depends on cluster type(s), usage and scope, but as a starting point a minimum of the following resources is recommended for this operator:

* 0.2 cores (e.g. i5 or similar)
* 256MB RAM
:description: Install the Stackable operator for Apache ZooKeeper using stackablectl or Helm.

There are multiple ways to install the Stackable Operator for Apache ZooKeeper.
xref:management:stackablectl:index.adoc[] is the preferred way, but Helm is also supported.
OpenShift users may prefer installing the operator from the RedHat Certified Operator catalog using the OpenShift web console.

[tabs]
====
stackablectl (recommended)::
+
--
`stackablectl` is the command line tool to interact with Stackable operators and the recommended way to install
Operators. Follow the xref:management:stackablectl:installation.adoc[installation steps] for your platform.

After you have installed `stackablectl`, use it to install the ZooKeeper Operator and its dependencies:
[source,bash]
----
include::example$getting_started/code/getting_started.sh[tag=stackablectl-install-operators]
----

The tool prints

[source]
include::example$getting_started/code/install_output.txt[]
Helm::
+
--
You can also use Helm to install the operators.
Add the Stackable Helm repository:
[source,bash]
----
----
6 changes: 3 additions & 3 deletions docs/modules/zookeeper/pages/index.adoc

== Getting started

Get started with Apache ZooKeeper and the Stackable operator by following the xref:getting_started/index.adoc[Getting started] guide, which walks you through the xref:getting_started/installation.adoc[installation] process.
Afterward, consult the xref:usage_guide/index.adoc[Usage guide] to learn more about configuring ZooKeeper for your needs.
You can also deploy a <<demos, demo>> to see an example deployment of ZooKeeper together with other data products.

== Operator model

== [[demos]]Demos

Apache ZooKeeper is a dependency of xref:hbase:index.adoc[Apache HBase], xref:hdfs:index.adoc[Apache Hadoop HDFS], xref:kafka:index.adoc[Apache Kafka] and xref:nifi:index.adoc[Apache NiFi], so any demo that uses one or more of these components also deploys a ZooKeeper ensemble.
Here is the list of the demos that include ZooKeeper:

* xref:demos:data-lakehouse-iceberg-trino-spark.adoc[]
[source,bash]
----
cargo run -- run --product-config /foo/bar/properties.yaml
----

*Multiple values:* false

The operator **only** watches for resources in the provided namespace `test`:

[source]
----
----

*Multiple values:* false

The operator **only** watches for resources in the provided namespace `test`:

[source]
----
----
4 changes: 2 additions & 2 deletions docs/modules/zookeeper/pages/usage_guide/authentication.adoc
[source,yaml]
----
include::example$usage_guide/example-cluster-tls-authentication-secret.yaml[]
----
<4> The SecretClass referenced by the AuthenticationClass to provide certificates.

If both `spec.clusterConfig.tls.server.secretClass` and `spec.clusterConfig.authentication.authenticationClass` are set,
the authentication class takes precedence over the secret class.
The cluster is encrypted and authenticates only against the authentication class.
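
As a sketch, a cluster specification that sets both fields could look like this; the class and secret names are assumptions for illustration, and the layout follows the field paths named above:

[source,yaml]
----
spec:
  clusterConfig:
    tls:
      server:
        secretClass: tls                  # still used for encryption certificates
    authentication:
      authenticationClass: zk-client-tls  # takes precedence for client authentication
----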

WARNING: Due to a https://issues.apache.org/jira/browse/ZOOKEEPER-4276[bug] in ZooKeeper, the `clientPort` property in
combination with `client.portUnification=true` is used instead of the `secureClientPort`. This means that unencrypted

All property values must be strings.

For a full list of configuration options refer to the Apache ZooKeeper https://zookeeper.apache.org/doc/r3.9.2/zookeeperAdmin.html#sc_configuration[Configuration Reference].
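
For example, an override for a `zoo.cfg` property might look like the following sketch; the property and the surrounding structure are illustrative, and note that the value is quoted, since all property values must be strings:

[source,yaml]
----
servers:
  configOverrides:
    zoo.cfg:
      4lw.commands.whitelist: "srvr,ruok"  # value given as a string
----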

=== Overriding entries in security.properties

The `security.properties` file is used to configure JVM security properties.
Users seldom need to tweak any of these, but there is one use case that stands out: the JVM DNS cache.

The JVM manages its own cache of successfully resolved host names as well as a cache of host names that cannot be resolved.
Some products of the Stackable platform are very sensitive to the contents of these caches, and their performance is heavily affected by them.
As of version 3.8.1, Apache ZooKeeper always requires up-to-date IP addresses to maintain its quorum.
To guarantee this, the negative DNS cache of the JVM needs to be disabled.
This can be achieved by setting the TTL of entries in the negative cache to zero, like this:

[source,yaml]
----
networkaddress.cache.negative.ttl: "0"
----
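
A sketch of where such an override could live in the cluster definition; the `configOverrides` placement here is an assumption following the override pattern described above:

[source,yaml]
----
servers:
  configOverrides:
    security.properties:
      networkaddress.cache.negative.ttl: "0"  # disable the negative DNS cache
----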
In this example, two ConfigMaps are created:

=== Connecting the products to the ZNodes

The ConfigMaps with the name and namespaces as given above look similar to this:

[source,yaml]
----
include::example$usage_guide/znode/example-znode-discovery.yaml[]
----
<1> Name and namespaces as specified above
<2> `$PATH` is a unique and unpredictable path that is generated by the operator

This ConfigMap can then be mounted into other Pods, and the `ZOOKEEPER` key can be used to connect to the ZooKeeper instance and the correct ZNode.
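
As a sketch, a client Pod could consume the `ZOOKEEPER` key as an environment variable; the Pod, container, and image names here are hypothetical:

[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: zk-client                        # hypothetical client Pod
spec:
  containers:
    - name: client
      image: example.com/client:latest   # hypothetical image
      env:
        - name: ZOOKEEPER                # connection string including the ZNode path
          valueFrom:
            configMapKeyRef:
              name: simple-znode         # the discovery ConfigMap
              key: ZOOKEEPER
----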


This section of the documentation is intended for the operations teams that maintain a Stackable Data Platform installation.

Read the xref:concepts:operations/index.adoc[concepts page on operations] for the details necessary to operate the platform in a production environment.
9 changes: 5 additions & 4 deletions docs/modules/zookeeper/pages/znodes.adoc
IMPORTANT: The Operator connects directly to ZooKeeper to manage the ZNodes inside it.
== Configuring ZNodes

ZNodes are configured with the ZookeeperZnode CustomResource.
If a ZookeeperZnode resource is created, the operator creates the respective tree in ZooKeeper.
Also, if the resource in Kubernetes is deleted, so is the data in ZooKeeper.

CAUTION: The operator automatically deletes the ZNode from the ZooKeeper cluster if the Kubernetes ZookeeperZnode object is deleted.
Recreating the ZookeeperZnode object will not restore access to the data.

Here is an example of a ZookeeperZnode:

[source,yaml]
----
include::example$example-znode.yaml[]
----
<1> The name of the ZNode in ZooKeeper. It is the same as the name of the Kubernetes resource.
<2> Reference to the `ZookeeperCluster` object where the ZNode should be created.
<3> The namespace of the `ZookeeperCluster`.
Can be omitted and defaults to the namespace of the ZNode object.
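
A minimal ZookeeperZnode matching the callouts above might look like this sketch; the `apiVersion` and field names are assumptions about the CRD layout, and the names are illustrative:

[source,yaml]
----
apiVersion: zookeeper.stackable.tech/v1alpha1  # assumed CRD group/version
kind: ZookeeperZnode
metadata:
  name: simple-znode        # also the name of the ZNode in ZooKeeper
spec:
  clusterRef:
    name: simple-zk         # the ZookeeperCluster to create the ZNode in
    namespace: default      # optional, defaults to the ZNode's namespace
----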

When a ZNode is created, the operator creates the required tree in ZooKeeper and a xref:concepts:service_discovery.adoc[discovery ConfigMap] with a xref:discovery.adoc[] for this ZNode.
This discovery ConfigMap is used by other operators to configure clients with access to the ZNode.

Expand Down