In order to use the logs that are collected, you must define an output in the configuration.
For documentation on how to set up the Fluent Bit output with your logging backend, see Fluent Bit's output documentation here: <https://docs.fluentbit.io/manual/pipeline/outputs>
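
The output plugin and its parameters depend on your logging backend; as a minimal sketch, the following Fluent Bit `[OUTPUT]` section (the plugin name and match pattern here are placeholders) would simply print the collected logs to standard output:

```
# Minimal Fluent Bit output section: print all collected records to stdout.
# Replace "stdout" and the match pattern with the output plugin and settings
# for your own logging backend (see the Fluent Bit documentation above).
[OUTPUT]
    name   stdout
    match  *
```
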
## Adding and Removing Hosts from Clusters
### Adding Hosts
The MarkLogic Helm chart creates one MarkLogic "host" per Kubernetes pod in a StatefulSet.
To add a new MarkLogic host to an existing cluster, simply increase the number of pods in your StatefulSet.
For example, if you want to change the host count of an existing MarkLogic cluster from 2 to 3, run the following Helm command:
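
The exact command depends on your deployment; assuming a release named `my-release`, a chart reference of `marklogic/marklogic`, and a chart that exposes the host count through the `replicaCount` value, it would look something like this:

```
# Scale the StatefulSet from 2 to 3 MarkLogic hosts
helm upgrade my-release marklogic/marklogic --set replicaCount=3
```
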
Once this deployment completes, the new MarkLogic host joins the existing cluster.
To track the deployment status, use the `kubectl get pods` command. This procedure does not automatically create forests on the new host.
If the new host will be managing forests for a database, create them via MarkLogic's administrative UI or APIs once the pod is up and running.
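
As one possible approach (a sketch, not the only method), a forest can be created on the new host through the Management REST API on port 8002; the credentials, forest name, and host name below are placeholders:

```
# Create a forest on the newly added host via the Management API.
# Replace the credentials, forest name, and host name with your own values;
# kubectl port-forward (or an appropriate service) can expose port 8002 locally.
curl --digest -u admin:admin -X POST \
  -H "Content-Type: application/json" \
  -d '{"forest-name": "example-forest-1", "host": "new-host-name"}' \
  http://localhost:8002/manage/v2/forests
```
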
### Removing Hosts
When scaling a StatefulSet down, Kubernetes will attempt to stop one or more pods in the set to achieve the desired number of pods.
When doing so, Kubernetes will stop the pod(s), but the storage attached to the pod will remain until you delete the Persistent Volume Claim(s).
Shutting down a pod from the Kubernetes side does not modify the MarkLogic cluster configuration.
It only stops the pod, which causes the MarkLogic host to go offline. If there are forests assigned to the stopped host(s), those forests will go offline.
The procedure for scaling down the number of MarkLogic hosts in a cluster depends on whether forests are assigned to the host(s) being removed and whether the goal is to permanently remove the host(s) from the MarkLogic cluster. If forests are assigned to the host(s) and you want to remove the host(s) from the cluster, follow MarkLogic administrative procedures to migrate the data from the forests on the host(s) you want to shut down to forests on the remaining hosts in the cluster (see https://docs.marklogic.com/guide/admin/database-rebalancing#id_23094 and
https://help.marklogic.com/knowledgebase/article/View/507/0/using-the-rebalancer-to-move-the-content-in-one-forest-to-another-location for details).
Once the data are safely migrated from the forests on the host(s) to be removed, the host(s) can be removed from the MarkLogic cluster. If forests are assigned to the host(s) but you only want to temporarily shut down the MarkLogic host/pod, the data do not need to be migrated, but the forests will go offline while the host is shut down.
For example, once you have migrated any forest data from the third MarkLogic host, you can change the host count on an
existing MarkLogic cluster from 3 to 2 by running the following Helm command:
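
Using the same assumed release name, chart reference, and `replicaCount` value as in the scale-up example above (substitute your own):

```
# Scale the StatefulSet from 3 to 2 MarkLogic hosts
helm upgrade my-release marklogic/marklogic --set replicaCount=2
```
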
Before Kubernetes stops the pod, it makes a call to the MarkLogic host to tell it to shut down with the `fastFailOver` flag set to TRUE. This tells the remaining hosts in the cluster that this host is shutting down and to trigger failover for any replica forests that may be available for forests on this host. There is a two-minute grace period to allow MarkLogic to shut down cleanly before Kubernetes kills the pod.
In order to track the host shutdown progress, run the following command:
```
# Replace terminated-host-pod-name with the name of the pod that is shutting down
kubectl logs pod/terminated-host-pod-name
```
If you are permanently removing the host from the MarkLogic cluster, once the pod is terminated, follow standard MarkLogic administrative procedures using the administrative UI or APIs to remove the MarkLogic host from the cluster. Also, because Kubernetes keeps the Persistent Volume Claims and Persistent Volumes around until they are explicitly deleted, you must manually delete them using the Kubernetes APIs before attempting to scale the hosts in the StatefulSet back up again.
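
For StatefulSets, each pod's Persistent Volume Claim typically follows the `<volume-claim-template-name>-<pod-name>` pattern; the claim name below is only an illustration, so confirm the real name first:

```
# List the Persistent Volume Claims to find the one left behind by the removed pod
kubectl get pvc

# Delete the claim for the removed pod (replace with the actual claim name);
# the bound Persistent Volume is then deleted or released per its reclaim policy
kubectl delete pvc datadir-marklogic-2
```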