A Helm chart for creating a ClickHouse® Cluster with the Altinity Operator for ClickHouse
- Single-node or multi-node ClickHouse clusters
- Sharding and replication
- ClickHouse Keeper integration
- Persistent storage configuration
- Init scripts
| Repository | Name | Version |
|---|---|---|
| https://helm.altinity.com | operator (altinity-clickhouse-operator) | 0.25.6 |
```shell
# add the altinity chart repository
helm repo add altinity https://helm.altinity.com

# install the clickhouse chart (this also creates a `clickhouse` namespace)
helm install release-name altinity/clickhouse --namespace clickhouse --create-namespace
```

Note that by default the chart includes the Altinity Operator. For most production use cases you will want to disable this and install the operator explicitly from its own Helm chart.
```shell
# add the altinity operator chart repository
helm repo add altinity https://helm.altinity.com/

# create the namespace
kubectl create namespace clickhouse

# install the operator into the namespace
helm install clickhouse-operator altinity/altinity-clickhouse-operator \
  --namespace clickhouse

# install the clickhouse chart without the operator
helm install release-name altinity/clickhouse --namespace clickhouse \
  --set operator.enabled=false
```

Yes, we're aware that the domains for the Helm repos are a bit odd. We're working on it.
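Instead of passing `--set` flags, the same override can be kept in a values file, which is easier to version-control. A minimal sketch (the file name `values.yaml` is an arbitrary choice):

```yaml
# values.yaml -- disable the bundled operator when it is installed separately
operator:
  enabled: false
```

Pass it to the install with `helm install release-name altinity/clickhouse --namespace clickhouse -f values.yaml`.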
IMPORTANT: Version 0.3.0 introduces a change that improves reconciliation timing by embedding templates directly in the ClickHouseInstallation resource instead of using separate ClickHouseInstallationTemplate resources.
After upgrading, delete the old ClickHouseInstallationTemplate resources that were created by version 0.2.x:
```shell
# list all ClickHouseInstallationTemplate resources
kubectl get clickhouseinstallationtemplates -n clickhouse

# delete them (replace <release-name> with your actual release name)
kubectl delete clickhouseinstallationtemplates -n clickhouse \
  <release-name>-clickhouse-pod \
  <release-name>-clickhouse-service \
  <release-name>-clickhouse-service-lb \
  <release-name>-clickhouse-data \
  <release-name>-clickhouse-logs
```

The ClickHouseInstallation will be updated automatically with embedded templates, resulting in faster reconciliation.
```shell
# get latest repository versions
helm repo update

# upgrade to a newer version using the release name (`clickhouse`)
helm upgrade clickhouse altinity/clickhouse --namespace clickhouse
```

```shell
# uninstall using the release name (`clickhouse`)
helm uninstall clickhouse --namespace clickhouse
```

Note: If you installed the Altinity Operator with this chart, your ClickHouseInstallations will hang on deletion because the operator is removed before their finalizers complete. To resolve this, manually edit each `chi` resource and remove the finalizer.
PVCs created by this Helm chart are not deleted automatically and must be removed manually. An easy way to do this is to delete the namespace:

```shell
kubectl delete namespace clickhouse
```

This command removes all the Kubernetes components associated with the chart and deletes the release.
```shell
# list your pods
kubectl get pods --namespace clickhouse

# pick any of your available pods and connect through the clickhouse-client
kubectl exec -it chi-clickhouse-0-0-0 --namespace clickhouse -- clickhouse-client
```

Use `kubectl port-forward` to access your ClickHouse cluster from outside the Kubernetes cluster:

```shell
kubectl port-forward service/clickhouse-eks 9000:9000 &
clickhouse-client
```
The chart allows mounting a ConfigMap containing initialization scripts that will be executed during ClickHouse container startup.

- Create a ConfigMap containing your initialization scripts:

```shell
kubectl create configmap my-init-scripts \
  --from-file=01_create_database.sh \
  --from-file=02_create_tables.sh
```

- Enable the initScripts feature in your Helm values:

```yaml
clickhouse:
  initScripts:
    enabled: true
    configMapName: my-init-scripts
    alwaysRun: true # set to true to always run scripts on container restart
```

The scripts will be mounted at `/docker-entrypoint-initdb.d/` in the ClickHouse container and executed in alphabetical order during startup.
```bash
#!/bin/bash
set -e
clickhouse client -n <<-EOSQL
    CREATE DATABASE IF NOT EXISTS my_database;
    CREATE TABLE IF NOT EXISTS my_database.my_table (
        id UInt64,
        data String
    ) ENGINE = MergeTree()
    ORDER BY id;
EOSQL
```

| Key | Type | Default | Description |
|---|---|---|---|
| clickhouse.antiAffinity | bool | false |  |
| clickhouse.antiAffinityScope | string | ClickHouseInstallation | Scope for the anti-affinity policy when antiAffinity is enabled; determines the level at which pod distribution is enforced. Available scopes: ClickHouseInstallation (default), Shard, Replica, Cluster, Namespace. Pods within the chosen scope won't run on the same node. |
| clickhouse.clusterSecret | object | {"auto":true,"enabled":false,"secure":false,"value":"","valueFrom":{"secretKeyRef":{"key":"secret","name":""}}} | Cluster secret configuration for secure inter-node communication |
| clickhouse.clusterSecret.auto | bool | true | Auto-generate the cluster secret (recommended for security) |
| clickhouse.clusterSecret.enabled | bool | false | Whether to enable secret-based cluster communication |
| clickhouse.clusterSecret.secure | bool | false | Whether to secure this behind the SSL port |
| clickhouse.clusterSecret.value | string | "" | Plaintext cluster secret value (not recommended for production) |
| clickhouse.clusterSecret.valueFrom | object | {"secretKeyRef":{"key":"secret","name":""}} | Reference to an existing Kubernetes secret containing the cluster secret |
| clickhouse.clusterSecret.valueFrom.secretKeyRef.key | string | "secret" | Key in the secret that contains the cluster secret value |
| clickhouse.clusterSecret.valueFrom.secretKeyRef.name | string | "" | Name of the secret containing the cluster secret |
| clickhouse.defaultUser.allowExternalAccess | bool | false | Allow the default user to access ClickHouse from any IP. If set, overrides hostIP to always be 0.0.0.0/0. |
| clickhouse.defaultUser.hostIP | string | "127.0.0.1/32" |  |
| clickhouse.defaultUser.password | string | "" |  |
| clickhouse.defaultUser.password_secret_name | string | "" | Name of an existing Kubernetes secret containing the default user password. If set, the password will be read from the secret instead of the password field. The secret should contain a key named 'password'. |
| clickhouse.extraConfig | string | "<clickhouse>\n</clickhouse>\n" | Miscellaneous config for ClickHouse (in XML format) |
| clickhouse.extraContainers | list | [] | Extra containers for ClickHouse pods |
| clickhouse.extraPorts | list | [] | Additional ports to expose in the ClickHouse container, e.g. `{name: custom-port, containerPort: 8080}` |
| clickhouse.extraUsers | string | "<clickhouse>\n</clickhouse>\n" | Additional users config for ClickHouse (in XML format) |
| clickhouse.extraVolumes | list | [] | Extra volumes for ClickHouse pods |
| clickhouse.image.pullPolicy | string | "IfNotPresent" |  |
| clickhouse.image.repository | string | "altinity/clickhouse-server" |  |
| clickhouse.image.tag | string | "25.3.6.10034.altinitystable" | Override the image tag for a specific version |
| clickhouse.initScripts | object | {"alwaysRun":true,"configMapName":"","enabled":false} | Init scripts ConfigMap configuration |
| clickhouse.initScripts.alwaysRun | bool | true | Set to true to always run init scripts on container startup |
| clickhouse.initScripts.configMapName | string | "" | Name of an existing ConfigMap containing init scripts. The scripts will be mounted at /docker-entrypoint-initdb.d/ |
| clickhouse.initScripts.enabled | bool | false | Set to true to enable the init scripts feature |
| clickhouse.keeper | object | {"host":"","port":2181} | Keeper connection settings for ClickHouse instances |
| clickhouse.keeper.host | string | "" | Specify a keeper host. Should be left empty if keeper.enabled is true; if set, it overrides the defaults derived from keeper.enabled. |
| clickhouse.keeper.port | int | 2181 | Override the default keeper port |
| clickhouse.lbService.enabled | bool | false |  |
| clickhouse.lbService.loadBalancerSourceRanges | list | [] | Source IP ranges allowed to reach the LoadBalancer service. If supported by the platform, this restricts traffic through the cloud-provider load balancer to the specified client IPs; it is ignored if the cloud provider does not support the feature. |
| clickhouse.lbService.serviceAnnotations | object | {} |  |
| clickhouse.lbService.serviceLabels | object | {} |  |
| clickhouse.persistence.accessMode | string | "ReadWriteOnce" |  |
| clickhouse.persistence.enabled | bool | true | Enable persistent storage |
| clickhouse.persistence.logs.accessMode | string | "ReadWriteOnce" |  |
| clickhouse.persistence.logs.enabled | bool | false | Enable a PVC for logs |
| clickhouse.persistence.logs.size | string | "10Gi" | Size of the logs PVC |
| clickhouse.persistence.size | string | "10Gi" | Volume size (per replica) |
| clickhouse.persistence.storageClass | string | "" |  |
| clickhouse.podAnnotations | object | {} |  |
| clickhouse.podLabels | object | {} |  |
| clickhouse.profiles | object | {} |  |
| clickhouse.replicasCount | int | 1 | Number of replicas. If greater than 1, keeper must be enabled or a keeper host must be provided under clickhouse.keeper.host. Ignored if zones is set. |
| clickhouse.resources | object | {} |  |
| clickhouse.service.serviceAnnotations | object | {} |  |
| clickhouse.service.serviceLabels | object | {} |  |
| clickhouse.service.type | string | "ClusterIP" |  |
| clickhouse.serviceAccount.annotations | object | {} | Annotations to add to the service account |
| clickhouse.serviceAccount.create | bool | false | Specifies whether a service account should be created |
| clickhouse.serviceAccount.name | string | "" | The name of the service account to use. If not set and create is true, a name is generated using the fullname template. |
| clickhouse.settings | object | {} |  |
| clickhouse.shardsCount | int | 1 | Number of shards |
| clickhouse.users | list | [] | Configure additional ClickHouse users and per-user settings |
| clickhouse.zones | list | [] |  |
| keeper.enabled | bool | false | Whether to enable Keeper. Required for replicated tables. |
| keeper.image | string | "altinity/clickhouse-keeper" |  |
| keeper.localStorage.size | string | "5Gi" |  |
| keeper.localStorage.storageClass | string | "" |  |
| keeper.metricsPort | string | "" |  |
| keeper.nodeSelector | object | {} |  |
| keeper.podAnnotations | object | {} |  |
| keeper.replicaCount | int | 3 | Number of keeper replicas. Must be an odd number. Do NOT change after the initial deployment. |
| keeper.resources.cpuLimitsMs | int | 500 |  |
| keeper.resources.cpuRequestsMs | int | 100 |  |
| keeper.resources.memoryLimitsMiB | string | "1Gi" |  |
| keeper.resources.memoryRequestsMiB | string | "512Mi" |  |
| keeper.settings | object | {} |  |
| keeper.tag | string | "25.3.6.10034.altinitystable" |  |
| keeper.tolerations | list | [] |  |
| keeper.volumeClaimAnnotations | object | {} |  |
| keeper.zoneSpread | bool | false |  |
| namespaceDomainPattern | string | "" | Custom domain pattern used for DNS names of Service and Pod resources. Typically defined by the custom cluster domain of the Kubernetes cluster. The pattern follows the C-style printf %s format, e.g. '%s.svc.my.test'. If not specified, the default namespace domain suffix is .svc.cluster.local. |
| operator.enabled | bool | true | Whether to enable the Altinity Operator for ClickHouse. Disable if you already have the operator installed cluster-wide. |
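Tying several of the values above together, here is a sketch of a values file for a sharded, replicated cluster; the sizes and counts are illustrative, not recommendations:

```yaml
clickhouse:
  shardsCount: 2
  replicasCount: 2   # >1 requires keeper.enabled or clickhouse.keeper.host
  persistence:
    enabled: true
    size: 50Gi       # per replica
keeper:
  enabled: true      # required for replicated tables
  replicaCount: 3    # must be odd; do not change after initial deployment
```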