---
meta:
title: Recovering ETCD database space for a Kapsule/Kosmos cluster
description: Learn how to reclaim database space for your Kubernetes Kapsule and Kosmos clusters to stay below your quota.
content:
h1: Recovering ETCD database space for a Kapsule/Kosmos cluster
paragraph: Learn how to reclaim database space for your Kubernetes Kapsule and Kosmos clusters to stay below your quota.
tags: kubernetes kapsule kosmos etcd
dates:
validation: 2025-04-01
posted: 2025-04-01
categories:
- containers
---

Kubernetes Kapsule clusters have quotas on the space they can occupy in the etcd database. See the Kapsule [cluster types](https://api.scaleway.com/k8s/v1/regions/fr-par/cluster-types) for details on each offer.
You can check your cluster's current etcd consumption at any time in its Grafana dashboard (`Etcd disk usage` panel).
This guide helps you free up space in your database to avoid reaching this limit.

<Macro id="requirements" />

- [Created](/kubernetes/how-to/create-cluster/) a Kubernetes Kapsule cluster
- [Downloaded](/kubernetes/how-to/connect-cluster-kubectl/) the kubeconfig file for your cluster

* Looking for unused resources is a good approach: delete any Secrets and large ConfigMaps that are no longer used in your cluster.

```sh
> kubectl -n $namespace delete configmap $ConfigMapName
```
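
To spot which ConfigMaps are worth cleaning up first, you can list them sorted by approximate size. The snippet below is a minimal sketch, assuming `jq` is installed locally; adjust the namespace as needed.

```sh
# List ConfigMaps in a namespace by approximate serialized size (bytes), largest first
kubectl -n $namespace get configmaps -o json \
  | jq -r '.items[] | "\(.data | tostring | length) \(.metadata.name)"' \
  | sort -rn \
  | head -20
```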

* Keep an eye on Helm charts that deploy many custom resources (CRDs), as they tend to fill up etcd space. You can find them by listing the resource kinds available in your cluster:

```sh
> kubectl api-resources
NAME          SHORTNAMES   APIVERSION   NAMESPACED   KIND
configmaps    cm           v1           true         ConfigMap
endpoints     ep           v1           true         Endpoints
events        ev           v1           true         Event
cronjobs      cj           batch/v1     true         CronJob
jobs                       batch/v1     true         Job
[...]
```
Look for resources with an external API version (anything other than _v1_, _apps/v1_, _storage.k8s.io/v1_, or _batch/v1_, for example).
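
Once you have identified a suspicious kind, you can count how many objects of it are stored. A minimal sketch, where `mycustomresources` is a hypothetical placeholder for the kind you suspect:

```sh
# Count the objects of a given custom resource kind across all namespaces
# ("mycustomresources" is a placeholder; replace it with the kind found above)
kubectl get mycustomresources --all-namespaces --no-headers | wc -l
```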

<Message type="note">
  Clusters with many nodes and at least one GPU node are known to accumulate many _nodefeatures_ objects, which pollute their etcd space. We are working on a long-term fix, but for now the best solution is to manually delete these objects and downsize the cluster (or upgrade to a dedicated offer with a larger etcd quota).
</Message>
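
As an illustration, the sketch below removes all _nodefeatures_ objects in the `kube-system` namespace in one go. Note that these objects may be recreated while node-feature-discovery is running, so treat this as a temporary mitigation.

```sh
# Delete all nodefeature objects in kube-system to reclaim etcd space
# (they may be recreated while node-feature-discovery is running)
kubectl -n kube-system delete nodefeatures --all
```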

* If you are unsure how much space a resource takes, you can dump it to estimate its size:

```sh
> kubectl get nodefeature -n kube-system $node-feature-name -o yaml | wc -c
305545   # ~300 KiB, a large object
```
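
To find the largest objects of a kind without dumping them one by one, you can sort them by serialized size. Again a minimal sketch, assuming `jq` is available:

```sh
# List nodefeature objects in kube-system by serialized size (bytes), largest first
kubectl -n kube-system get nodefeature -o json \
  | jq -r '.items[] | "\(. | tostring | length) \(.metadata.name)"' \
  | sort -rn \
  | head -10
```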