diff --git a/pages/kubernetes/how-to/recover-space-etcd.mdx b/pages/kubernetes/how-to/recover-space-etcd.mdx
new file mode 100644
index 0000000000..c5c1a2b2d9
--- /dev/null
+++ b/pages/kubernetes/how-to/recover-space-etcd.mdx
@@ -0,0 +1,54 @@
+---
+meta:
+  title: Recovering ETCD database space for a Kapsule/Kosmos cluster
+  description: Learn how to reclaim database space for your Kubernetes Kapsule and Kosmos clusters to stay below your quota.
+content:
+  h1: Recovering ETCD database space for a Kapsule/Kosmos cluster
+  paragraph: Learn how to reclaim database space for your Kubernetes Kapsule and Kosmos clusters to stay below your quota.
+tags: kubernetes kapsule kosmos etcd
+dates:
+  validation: 2025-04-01
+  posted: 2025-04-01
+categories:
+ - containers
+---
+
+Kubernetes Kapsule and Kosmos clusters have quotas on the space they can occupy in their etcd database. See the Kapsule [cluster-types](https://api.scaleway.com/k8s/v1/regions/fr-par/cluster-types) endpoint for details on each offer.
+You can check your cluster's current etcd space consumption at any time in its Grafana dashboard (`Etcd disk usage` panel).
+This guide helps you free up space in your database to avoid reaching this limit.
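+
+If you want to check the quota attached to your offer programmatically, you can query the same cluster-types endpoint directly. This is a minimal sketch, assuming your API secret key is available as `$SCW_SECRET_KEY` and that `jq` is installed:
+
+```sh
+> curl -s -H "X-Auth-Token: $SCW_SECRET_KEY" "https://api.scaleway.com/k8s/v1/regions/fr-par/cluster-types" | jq .
+```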
+
+
+
+- [Created](/kubernetes/how-to/create-cluster/) a Kubernetes Kapsule cluster
+- [Downloaded](/kubernetes/how-to/connect-cluster-kubectl/) the Kubeconfig
+
+* Looking for unused resources is a good approach: delete any Secrets and large ConfigMaps that are no longer used in your cluster.
+
+  ```sh
+  > kubectl -n $namespace delete configmap $ConfigMapName
+  ```
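+
+  To decide which objects are worth deleting, you can, for instance, rank the ConfigMaps of a namespace by the size of their YAML dump (a rough sketch, assuming the `$namespace` variable is set as above; the same loop works for Secrets):
+
+  ```sh
+  # approximate size (in bytes) of each ConfigMap in the namespace, largest first
+  > for cm in $(kubectl -n $namespace get configmaps -o name); do echo "$(kubectl -n $namespace get $cm -o yaml | wc -c) $cm"; done | sort -rn | head
+  ```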
+
+* Keep an eye on Helm charts that deploy a lot of custom resources (CRDs), as they tend to fill up etcd space. You can find them by listing the resource kinds available in your cluster:
+
+  ```sh
+  > kubectl api-resources
+  NAME          SHORTNAMES   APIVERSION   NAMESPACED   KIND
+  configmaps    cm           v1           true         ConfigMap
+  endpoints     ep           v1           true         Endpoints
+  events        ev           v1           true         Event
+  cronjobs      cj           batch/v1     true         CronJob
+  jobs                       batch/v1     true         Job
+  [...]
+  ```
+  Look for resources served by an API group that is not built-in (not _v1_, _apps/v1_, _storage.k8s.io/v1_, or _batch/v1_, for example).
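+
+  To see which of these custom resources actually store many objects, you can count the objects behind each CRD (a rough sketch; the object count is only a rough proxy for etcd usage, but it usually points at the offenders):
+
+  ```sh
+  # number of stored objects per custom resource type, largest count first
+  > for crd in $(kubectl get crds -o name | cut -d/ -f2); do echo "$(kubectl get $crd --all-namespaces --no-headers 2>/dev/null | wc -l) $crd"; done | sort -rn | head
+  ```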
+
+
+It is known that clusters with many nodes and at least one GPU node may accumulate a lot of _nodefeatures_ objects polluting their etcd space. We are working on a long-term fix, but for now, manually deleting these objects and downsizing the cluster (or upgrading to a dedicated offer with a bigger etcd quota) is the best solution.
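+
+A possible cleanup is sketched below, assuming the _nodefeatures_ objects live in the `kube-system` namespace (as in the example that follows); keep in mind they may be recreated while the node-feature-discovery components are still running:
+
+```sh
+# delete all NodeFeature objects in kube-system at once
+> kubectl -n kube-system delete nodefeatures --all
+```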
+
+
+* If you are in doubt about the space taken by a resource, you can dump it to get its size:
+
+  ```sh
+  > kubectl get nodefeature -n kube-system $NodeFeatureName -o yaml | wc -c
+  305545   # ~300KiB, a big object
+  ```
\ No newline at end of file