- [#clk on Libera.Chat](https://web.libera.chat/?channels=#clk)
- [clk-project community on Gitter](https://gitter.im/clk-project/community?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge)
clk k8s expects docker to be up and available. As a hint, you can install it with:

```bash
curl -sSL https://get.docker.com | sh
```

followed by

```bash
sudo usermod -a -G docker $(id --user --name)
```
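To make sure the group change is effective and docker is usable without sudo, a quick sanity check (standard docker commands, not part of clk):

```shell
# Log out and back in (or run `newgrp docker`) so the group change takes effect,
# then verify that docker works without sudo:
docker run --rm hello-world
```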
Now, to install clk k8s, you have two options:

- either install clk-project and the extension in one go with

  ```bash
  curl -sSL https://clk-project.org/install.sh | env CLK_EXTENSIONS=k8s bash
  ```

- or, if you already have clk, simply install this extension with

  ```bash
  clk extension install k8s
  ```
Then, run

```bash
clk k8s flow all
```

Now, you have a local cluster running and ready to be used. It is that simple.
This extension also provides some tilt functions. To use them, load the extension in your Tiltfile, like this:

```python
config.define_string("clk-k8s-local-path")
cfg = config.parse()

v1alpha1.extension_repo(
    name='clk-k8s',
    url=cfg.get(
        'clk-k8s-local-path',
        'https://github.com/clk-project/clk_extension_k8s',
    ),
)
v1alpha1.extension(
    name='clk-helpers',
    repo_name='clk-k8s',
    repo_path='tilt-extensions/helpers',
)

load('ext://clk-helpers', 'update_helm_chart')
```

When you create a cluster with `clk k8s create-cluster`, a shared
directory is automatically set up between the host and the kind node:
- Host path: `$(clk extension where-is global/k8s)/tmp/kind/shared`
- Kind node path: `/shared`
This allows exchanging files between the host and pods running in the
cluster. To use it in a deployment, mount a hostPath volume pointing
to /shared:
```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: shared-volume
spec:
  replicas: 1
  selector:
    matchLabels:
      app: shared-volume
  template:
    metadata:
      labels:
        app: shared-volume
    spec:
      containers:
        - name: alpine
          image: alpine:latest
          command: ['sleep', 'infinity']
          volumeMounts:
            - name: shared
              mountPath: /shared
      volumes:
        - name: shared
          hostPath:
            path: /shared
            type: DirectoryOrCreate
EOF
```

Data written to `$(clk extension where-is global/k8s)/tmp/kind/shared` on the host is immediately visible
at /shared inside the pod, and vice versa. The data persists across
deployment recreations since it lives on the host filesystem.
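As a quick check of the round trip (assuming the shared-volume deployment above is running; the file name is just an example):

```shell
# Write a file on the host side of the shared directory...
echo "hello from the host" > "$(clk extension where-is global/k8s)/tmp/kind/shared/hello.txt"
# ...and read it back from inside the pod.
kubectl exec deploy/shared-volume -- cat /shared/hello.txt
```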
Note: If you use PersistentVolumes with hostPath outside of
/tmp/, avoid using persistentVolumeReclaimPolicy: Delete, as
Kubernetes' built-in hostPath deleter only supports paths under
/tmp/. Use Retain instead.
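For illustration, a PersistentVolume backed by the shared directory with a safe reclaim policy might look like this (the name and capacity are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-pv             # placeholder name
spec:
  capacity:
    storage: 1Gi              # placeholder size
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain   # Delete would fail for a path outside /tmp/
  hostPath:
    path: /shared
    type: DirectoryOrCreate
```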
You can install the OpenTelemetry operator with `clk k8s otel install-operator`
so that you only have to provide OpenTelemetry collector custom resources
depending on your needs.
clk does not install any observability database like prometheus, loki,
tempo, pyroscope, etc. This is because they consume a lot of RAM and, in our
experience, maintaining them is enough to distract us from our job.
Because of this, clk is unable to create a collector without prior knowledge of where to send the data.
The command `clk k8s otel create-collector` is part of the clk k8s flow but
does nothing by default. To make it actually create a collector, you have to
add a parameter that will set the option --log-endpoint to whatever endpoint
you have at hand.
So in short, something like the following will do the trick:

```bash
clk k8s otel create-collector --log-endpoint http://mydatabase:4318
```
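If you want the collector to be created automatically as part of the flow, clk's parameter mechanism can record the option once and for all. Assuming the standard `clk parameter set` command and a dotted command path, something like:

```shell
# Record the option so that subsequent runs of the flow pick it up
# (the endpoint URL is a placeholder for your own backend):
clk parameter set k8s.otel.create-collector --log-endpoint http://mydatabase:4318
```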
That collector is configured to discover pods with the annotation
io.opentelemetry.discovery.logs/enabled: "true" and send their logs to the otlphttp database you configured.
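For instance, a pod template opting into log collection would carry the annotation in its metadata; a sketch of the relevant fragment of a Deployment:

```yaml
# Fragment of a Deployment spec: the pod template opts into log discovery.
template:
  metadata:
    annotations:
      io.opentelemetry.discovery.logs/enabled: "true"
```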