docs/modules/zookeeper/pages/index.adoc (1 addition & 1 deletion)

@@ -1,5 +1,5 @@
= Stackable Operator for Apache ZooKeeper
-:description: The Stackable operator for Apache ZooKeeper is a Kubernetes operator that can manage Apache ZooKeeper ensembles. Learn about its features, resources, dependencies and demos, and see the list of supported ZooKeeper versions.
+:description: Manage Apache ZooKeeper ensembles with the Stackable Kubernetes operator. Supports multiple ZooKeeper versions and custom images, and integrates with Hadoop, Kafka, and more.

docs/modules/zookeeper/pages/usage_guide/authentication.adoc (1 addition & 0 deletions)

@@ -1,4 +1,5 @@
= Authentication
+:description: Enable TLS authentication for ZooKeeper with Stackable's Kubernetes operator.
The communication between nodes (server to server) is encrypted via TLS by default.
In order to enforce TLS authentication for client-to-server communication, you can set an xref:concepts:authentication.adoc[AuthenticationClass] reference in the `spec.clusterConfig.authentication` property.
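
A minimal sketch of what that could look like, assuming the property takes a list of AuthenticationClass references and that an AuthenticationClass named `zk-client-tls` (a placeholder) has already been created:

[source,yaml]
----
spec:
  clusterConfig:
    authentication:
      - authenticationClass: zk-client-tls  # placeholder AuthenticationClass enforcing TLS client authentication
----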

docs/modules/zookeeper/pages/usage_guide/encryption.adoc (4 additions & 3 deletions)

@@ -1,8 +1,9 @@
= Encryption
+:description: Quorum and client communication in ZooKeeper are encrypted via TLS by default. Customize certificates with the Secret Operator for added security.
-The quorum and client communication are encrypted by default via TLS. This requires the
-xref:secret-operator:index.adoc[Secret Operator] to be present in order to provide certificates. The utilized
-certificates can be changed in a top-level config.
+The quorum and client communication are encrypted by default via TLS.
+This requires the xref:secret-operator:index.adoc[Secret Operator] to be present in order to provide certificates.
+The utilized certificates can be changed in a top-level config.
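
As a hedged illustration of such a top-level config, assuming a `tls` section under `spec.clusterConfig` that references SecretClasses (the SecretClass names shown are placeholders backed by the Secret Operator):

[source,yaml]
----
spec:
  clusterConfig:
    tls:
      serverSecretClass: tls            # placeholder SecretClass for client-to-server certificates
      quorumSecretClass: zk-quorum-tls  # placeholder SecretClass for server-to-server (quorum) certificates
----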

docs/modules/zookeeper/pages/usage_guide/isolating_clients_with_znodes.adoc (14 additions & 8 deletions)

@@ -1,8 +1,11 @@
= Isolating clients with ZNodes
+:description: Isolate clients in ZooKeeper with unique ZNodes. Set up a ZNode for each product and connect them using discovery ConfigMaps.
-ZooKeeper is a dependency of many products supported by the Stackable Data Platform. To ensure that all products can use the same ZooKeeper cluster safely, it is important to isolate them which is done using xref:znodes.adoc[].
+ZooKeeper is a dependency of many products supported by the Stackable Data Platform.
+To ensure that all products can use the same ZooKeeper cluster safely, it is important to isolate them, which is done using xref:znodes.adoc[].
-This guide shows you how to set up multiple ZNodes to use with different products from the Stackable Data Platform, using Kafka and Druid as an example. For an explanation of the ZNode concept, read the xref:znodes.adoc[] concept page.
+This guide shows you how to set up multiple ZNodes to use with different products from the Stackable Data Platform, using Kafka and Druid as an example.
+For an explanation of the ZNode concept, read the xref:znodes.adoc[] concept page.
== Prerequisites
@@ -21,7 +24,8 @@ This guide assumes the ZookeeperCluster is called `my-zookeeper` and is running
=== Setting up the ZNodes
-To set up a Kafka and Druid instance to use the ZookeeperCluster, two ZNodes are required, one for each product. This guide assumes the Kafka instance is running in the same namespace as the ZooKeeper, while the Druid instance is running in its own namespace called `druid-ns`.
+To set up a Kafka and Druid instance to use the ZookeeperCluster, two ZNodes are required, one for each product.
+This guide assumes the Kafka instance is running in the same namespace as the ZooKeeper, while the Druid instance is running in its own namespace called `druid-ns`.
<2> The namespace where the ZNode should be created. Since Kafka is running in the same namespace as ZooKeeper, this is the namespace of `my-zookeeper`.
<3> The ZooKeeper cluster reference. The namespace is omitted here because the ZooKeeper is in the same namespace as the ZNode object.
-The Stackable Operator for ZooKeeper watches for ZookeeperZnode objects. If one is found it creates the ZNode _inside_ the ZooKeeper cluster and also creates a xref:concepts:service_discovery.adoc[discovery ConfigMap] in the same namespace as the ZookeeperZnode with the same name as the ZookeeperZnode.
+The Stackable Operator for ZooKeeper watches for ZookeeperZnode objects.
+If one is found, it creates the ZNode _inside_ the ZooKeeper cluster and also creates a xref:concepts:service_discovery.adoc[discovery ConfigMap] in the same namespace as the ZookeeperZnode with the same name as the ZookeeperZnode.
This ConfigMap can then be mounted into other Pods and the `ZOOKEEPER` key can be used to connect to the ZooKeeper instance and the correct ZNode.
-All products that need a ZNode can be configured with a `zookeeperConfigMapName` property. As the name implies, this property references the discovery ConfigMap for the requested ZNode.
+All products that need a ZNode can be configured with a `zookeeperConfigMapName` property.
+As the name implies, this property references the discovery ConfigMap for the requested ZNode.
For Kafka:
@@ -105,9 +111,9 @@ You can find out more about the discovery ConfigMap xref:discovery.adoc[] and th
For security reasons, a unique ZNode path is generated every time the same ZookeeperZnode object is recreated, even if it has the same name.
-If a ZookeeperZnode needs to be associated with an existing ZNode path, the field `status.znodePath` can be set to
-the desired path. Note that since this is a subfield of `status`, it must explicitly be updated on the `status` subresource,
-and requires RBAC permissions to replace the `zookeeperznodes/status` resource. For example:
+If a ZookeeperZnode needs to be associated with an existing ZNode path, the field `status.znodePath` can be set to the desired path.
+Note that since this is a subfield of `status`, it must explicitly be updated on the `status` subresource, and requires RBAC permissions to replace the `zookeeperznodes/status` resource.

docs/modules/zookeeper/pages/usage_guide/listenerclass.adoc (2 additions & 1 deletion)

@@ -2,7 +2,8 @@
Apache ZooKeeper offers an API. The Operator deploys a service called `<name>` (where `<name>` is the name of the ZookeeperCluster) through which ZooKeeper can be reached.
-This service can have either the `cluster-internal` or `external-unstable` type. `external-stable` is not supported for ZooKeeper at the moment. Read more about the types in the xref:concepts:service-exposition.adoc[service exposition] documentation at platform level.
+This service can have either the `cluster-internal` or `external-unstable` type. `external-stable` is not supported for ZooKeeper at the moment.
+Read more about the types in the xref:concepts:service-exposition.adoc[service exposition] documentation at platform level.
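
For illustration, a sketch of selecting the service type, assuming a `listenerClass` field under `spec.clusterConfig` (the values correspond to the types named above):

[source,yaml]
----
spec:
  clusterConfig:
    listenerClass: external-unstable  # or cluster-internal; external-stable is not supported
----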

docs/modules/zookeeper/pages/usage_guide/log_aggregation.adoc (3 additions & 4 deletions)

@@ -1,7 +1,7 @@
= Log aggregation
+:description: The logs can be forwarded to a Vector log aggregator by providing a discovery ConfigMap for the aggregator and by enabling the log agent.
-The logs can be forwarded to a Vector log aggregator by providing a discovery
-ConfigMap for the aggregator and by enabling the log agent:
+The logs can be forwarded to a Vector log aggregator by providing a discovery ConfigMap for the aggregator and by enabling the log agent:
[source,yaml]
----
@@ -26,5 +26,4 @@ spec:
replicas: 1
----
-Further information on how to configure logging, can be found in
-xref:concepts:logging.adoc[].
+Further information on how to configure logging can be found in xref:concepts:logging.adoc[].

-You can mount volumes where data is stored by specifying https://kubernetes.io/docs/concepts/storage/persistent-volumes[PersistentVolumeClaims] for each individual role group:
+You can mount volumes where data is stored by specifying {pvcs}[PersistentVolumeClaims] for each individual role group:
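
A hedged sketch of such a per-role-group claim, assuming the common Stackable `resources.storage` layout (the role group name, the storage name `data`, and the capacity are placeholders):

[source,yaml]
----
servers:
  roleGroups:
    default:
      config:
        resources:
          storage:
            data:
              capacity: 2Gi  # placeholder size for the PersistentVolumeClaim backing this role group
----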
:description: ZooKeeper uses myid for server identification. Avoid conflicts in multiple role groups by setting myidOffset for unique IDs in each StatefulSet.
-ZooKeeper uses a unique ID called _myid_ to identify each server in the cluster. The Stackable Operator for Apache ZooKeeper assigns the _myid_ to each Pod from the https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#ordinal-index[ordinal index] given to the Pod by Kubernetes. This index is unique over the Pods in the StatefulSet of the xref:concepts:roles-and-role-groups.adoc[role group].
+ZooKeeper uses a unique ID called _myid_ to identify each server in the cluster.
+The Stackable Operator for Apache ZooKeeper assigns the _myid_ to each Pod from the {ordinal-index}[ordinal index] given to the Pod by Kubernetes.
+This index is unique over the Pods in the StatefulSet of the xref:concepts:roles-and-role-groups.adoc[role group].
-When using multiple role groups in a cluster, this will lead to different ZooKeeper Pods using the same _myid_. Each role group is represented by its own StatefulSet, and therefore always identified starting with `0`.
+When using multiple role groups in a cluster, this will lead to different ZooKeeper Pods using the same _myid_.
+Each role group is represented by its own StatefulSet, and therefore always identified starting with `0`.
-In order to avoid this _myid_ conflict, a property `myidOffset` needs to be specified in each rolegroup. The `myidOffset` defaults to zero, but if specified will be added to the ordinal index of the Pod.
+In order to avoid this _myid_ conflict, a property `myidOffset` needs to be specified in each role group.
+The `myidOffset` defaults to zero, but if specified will be added to the ordinal index of the Pod.
== Example configuration
Here the property is used on the second role group in a ZooKeeperCluster:
-```yaml
+[source,yaml]
+----
apiVersion: zookeeper.stackable.tech/v1alpha1
kind: ZookeeperCluster
metadata:
@@ -25,8 +31,9 @@ spec:
replicas: 1
config:
myidOffset: 10 # <1>
-```
-
+----
<1> The `myidOffset` property set to 10 for the secondary role group
-The `secondary` role group _myid_ starts from id `10`. The `primary` role group will start from `0`. This means, the replicas of the role group `primary` should not be scaled higher than `10` which results in `10` `primary` Pods using a _myid_ from `0` to `9`, followed by the `secondary` Pods starting at _myid_ `10`.
+The `secondary` role group _myid_ starts from id `10`.
+The `primary` role group will start from `0`.
+This means the replicas of the role group `primary` should not be scaled higher than `10`, which results in `10` `primary` Pods using a _myid_ from `0` to `9`, followed by the `secondary` Pods starting at _myid_ `10`.

docs/modules/zookeeper/pages/znodes.adoc (1 addition & 0 deletions)

@@ -1,4 +1,5 @@
= ZNodes
+:description: Manage ZooKeeper ZNodes with the ZookeeperZnode resource. Each client should use a unique root ZNode to prevent conflicts. Network access to ZooKeeper is required.
Apache ZooKeeper organizes all data into a hierarchical system of https://zookeeper.apache.org/doc/r3.9.2/zookeeperProgrammers.html#ch_zkDataModel[ZNodes],
which act as both files (they can have data associated with them) and folders (they can contain other ZNodes) when compared to a traditional (POSIX-like) file system.