By default, the scripts we use create a local cluster of 4 virtual nodes:
- 1 control-plane
- 1 manager (Prometheus, Grafana, ...)
- 1 node hosting the JobManager
- 1 node hosting the TaskManagers
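For reference, the four-node topology above could be expressed as a Kind configuration like the following sketch. The `tier` label values for the manager and JobManager nodes are assumptions for illustration; only the `tier=taskmanager` label is taken from the project snippet.

```yaml
# Sketch of a possible cluster.yaml for Kind; the "tier" values for the
# manager and JobManager nodes are illustrative assumptions.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker          # manager node (Prometheus, Grafana, ...)
    kubeadmConfigPatches:
      - |
        kind: JoinConfiguration
        nodeRegistration:
          kubeletExtraArgs:
            node-labels: "tier=manager"
  - role: worker          # node hosting the JobManager
    kubeadmConfigPatches:
      - |
        kind: JoinConfiguration
        nodeRegistration:
          kubeletExtraArgs:
            node-labels: "tier=jobmanager"
  - role: worker          # node hosting the TaskManagers
    kubeadmConfigPatches:
      - |
        kind: JoinConfiguration
        nodeRegistration:
          kubeletExtraArgs:
            node-labels: "tier=taskmanager"
```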
This topology can be modified in the cluster.yaml file by appending more TaskManager workers as follows:
```yaml
- role: worker
  kubeadmConfigPatches:
    - |
      kind: JoinConfiguration
      nodeRegistration:
        kubeletExtraArgs:
          node-labels: "tier=taskmanager"
```

From the Jupyter notebook web page, open the init-cluster.ipynb file and execute the whole notebook. This will install the required Python libraries, create the cluster, and install the services using Helm.
```bash
$ sudo chmod 666 /var/run/docker.sock
```

To use the built Docker images, they need to be available from all nodes. Instead of pushing the previously built images to a public repository, we can use the following commands to load them locally into the nodes created by Kind:
```bash
$ kind load docker-image flink-justin:dais
$ kind load docker-image flink-kubernetes-operator:dais
```

The Flink Kubernetes Operator is responsible for creating JobManagers and TaskManagers, and for deploying Flink jobs on newly spawned workers. It also holds the logic of the autoscaler.
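The autoscaler is enabled per job through the Flink configuration of a deployment. A minimal sketch, assuming a recent operator version where the options live under the `job.autoscaler.*` prefix (the specific values shown are illustrative, not taken from the project):

```yaml
# Sketch: autoscaler options in a FlinkDeployment's flinkConfiguration.
# Interval and utilization values are illustrative assumptions.
flinkConfiguration:
  job.autoscaler.enabled: "true"
  job.autoscaler.stabilization.interval: "1m"
  job.autoscaler.metrics.window: "5m"
  job.autoscaler.target.utilization: "0.7"
```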
To deploy it using Helm from the project root directory:
```bash
# From the root directory
$ helm install flink-kubernetes-operator ./flink-kubernetes-operator/helm/flink-kubernetes-operator \
    --set image.repository=flink-kubernetes-operator \
    --set image.tag=dais \
    -f ./flink-kubernetes-operator/examples/autoscaling/values.yaml
```

To ensure that the operator is running:
```bash
$ kubectl get pods
NAME                                         READY   STATUS    RESTARTS   AGE
flink-kubernetes-operator-6569cb9b96-q4dbc   2/2     Running   0          3m25s
```

We are now all set to deploy a Flink job!
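A job is submitted to the operator as a FlinkDeployment resource. The following is a minimal sketch, reusing the flink-justin:dais image loaded earlier; the Flink version, jar path, and resource sizes are assumptions for illustration, not taken from the project:

```yaml
# Sketch of a minimal FlinkDeployment; the image matches the one loaded
# into the Kind nodes, while jarURI, version, and resources are illustrative.
apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: example-job
spec:
  image: flink-justin:dais
  flinkVersion: v1_17
  serviceAccount: flink
  jobManager:
    resource:
      memory: "1024m"
      cpu: 1
  taskManager:
    resource:
      memory: "1024m"
      cpu: 1
  job:
    jarURI: local:///opt/flink/examples/streaming/StateMachineExample.jar
    parallelism: 2
    upgradeMode: stateless
```

Such a manifest would be submitted with `kubectl apply -f` and removed with `kubectl delete flinkdeployment example-job`.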