---
meta:
  title: Recovering etcd database space for a Kapsule/Kosmos cluster
  description: Learn how to reclaim database space for your Kubernetes Kapsule and Kosmos clusters to stay below your quota.
content:
  h1: Recovering etcd database space for a Kapsule/Kosmos cluster
  paragraph: Learn how to reclaim database space for your Kubernetes Kapsule and Kosmos clusters to stay below your quota.
tags: kubernetes kapsule kosmos etcd
dates:
  validation: 2025-04-01
  posted: 2025-04-01
categories:
  - containers
---

Kubernetes Kapsule and Kosmos clusters have quotas on the space they can occupy in the etcd database. See the Kapsule [cluster types](https://api.scaleway.com/k8s/v1/regions/fr-par/cluster-types) for details on each offer.
You can check your cluster's current etcd consumption at any time in its Grafana dashboard (`Etcd disk usage` panel).
This guide helps you free up space in your database to avoid reaching this limit.

<Macro id="requirements" />

- [Created](/kubernetes/how-to/create-cluster/) a Kubernetes Kapsule cluster
- [Downloaded](/kubernetes/how-to/connect-cluster-kubectl/) the Kubeconfig

* Looking for unused resources is a good approach: delete any Secrets and large ConfigMaps that are no longer used in your cluster.

  ```sh
  > kubectl -n $namespace delete configmap $ConfigMapName
  ```
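
  If you are not sure which objects are worth removing, you can rank ConfigMaps by the approximate size of their data. Below is a minimal sketch, assuming `jq` is available; the figure is the size of the JSON representation, not the exact etcd footprint, and the same approach works for Secrets.

  ```sh
  # List ConfigMaps across all namespaces, sorted by the approximate size of their data (largest first).
  > kubectl get configmaps -A -o json \
      | jq -r '.items[] | "\(.metadata.namespace)/\(.metadata.name) \((.data // {}) | tostring | length)"' \
      | sort -k2 -n -r | head
  ```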

* Keep an eye on Helm charts that deploy a lot of custom resources (CRDs), as they tend to fill up etcd space. You can find them by listing the resource kinds available in your cluster:

  ```sh
  > kubectl api-resources
  NAME          SHORTNAMES   APIVERSION   NAMESPACED   KIND
  configmaps    cm           v1           true         ConfigMap
  endpoints     ep           v1           true         Endpoints
  events        ev           v1           true         Event
  cronjobs      cj           batch/v1     true         CronJob
  jobs                       batch/v1     true         Job
  [...]
  ```
  Look for resources with an external `apiVersion` (not _v1_, _apps/v1_, _storage.k8s.io/v1_, or _batch/v1_, for example).
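
  To see which of these custom resources actually hold many objects, you can count the instances of each installed CRD. The loop below is a rough sketch (it only counts objects, it does not measure their size):

  ```sh
  # For every CustomResourceDefinition, print the number of objects stored in the cluster.
  > for crd in $(kubectl get crds -o name | cut -d/ -f2); do printf '%s: ' "$crd"; kubectl get "$crd" -A --no-headers 2>/dev/null | wc -l; done
  ```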

<Message type="note">
  Clusters with many nodes and at least one GPU node are known to accumulate a lot of _nodefeatures_ objects, which pollute their etcd space. We are working on a long-term fix, but for now, manually deleting these objects and downsizing the cluster (or upgrading to a dedicated offer with a bigger etcd quota) is the best solution.
</Message>
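
For example, a minimal way to list and delete these objects, assuming they live in the `kube-system` namespace as in the example further below (they are typically recreated while the corresponding nodes are still running, so this only reclaims space temporarily):

```sh
# List the nodefeature objects, then delete them all in the kube-system namespace.
> kubectl get nodefeatures -n kube-system
> kubectl delete nodefeatures -n kube-system --all
```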

* If you are unsure how much space a resource takes, you can dump it to get its size:

  ```sh
  > kubectl get nodefeature -n kube-system $nodeFeatureName -o yaml | wc -c
  305545   # ~300 KiB, a large object
  ```
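
  To apply the same check to every object of a kind at once, you can rank them by the size of their YAML dump. The loop below is a sketch using the same _nodefeature_ kind; again, this is an approximation of the real etcd footprint.

  ```sh
  # Print each nodefeature object followed by the size of its YAML dump, largest first.
  > kubectl get nodefeature -n kube-system -o name \
      | while read -r obj; do printf '%s ' "$obj"; kubectl get "$obj" -n kube-system -o yaml | wc -c; done \
      | sort -k2 -n -r
  ```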