45 changes: 45 additions & 0 deletions README.md

@@ -53,6 +53,7 @@ are deleted they are no longer visible on the `/metrics` endpoint.
* [Horizontal sharding](#horizontal-sharding)
* [Automated sharding](#automated-sharding)
* [Daemonset sharding for pod metrics](#daemonset-sharding-for-pod-metrics)
* [High Availability](#high-availability)
* [Setup](#setup)
* [Building the Docker container](#building-the-docker-container)
* [Usage](#usage)
@@ -304,6 +305,50 @@ spec:

Other metrics can be sharded via [Horizontal sharding](#horizontal-sharding).

### High Availability

Multiple replicas increase the load on the Kubernetes API as a trade-off. Most likely you don't need HA if you scrape every 30s and you can tolerate a few missing scrapes (which usually is the case).
Suggested change
Multiple replicas increase the load on the Kubernetes API as a trade-off. Most likely you don't need HA if you scrape every 30s and you can tolerate a few missing scrapes (which usually is the case).
Kube-state-metrics is a stateless service that reads from the Kubernetes API server. Be aware that multiple replicas increase the load on the Kubernetes API. Therefore, in most cases a single replica is also an option, since most users scrape with a 30s interval. If you need a higher scrape frequency, or you have other constraints that require multiple replicas, you can increase the availability of kube-state-metrics in the following way:


For high availability, run multiple kube-state-metrics replicas to prevent a single point of failure. A common setup uses at least 2 replicas, pod anti-affinity rules to spread the replicas across nodes, and a PodDisruptionBudget (PDB) with `minAvailable: 1` to protect against voluntary disruptions.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-state-metrics
spec:
  replicas: 2
  # The selector and pod labels must match, and the PDB below selects the same labels.
  selector:
    matchLabels:
      app: kube-state-metrics
  template:
    metadata:
      labels:
        app: kube-state-metrics
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app: kube-state-metrics
                topologyKey: kubernetes.io/hostname
      containers:
        - name: kube-state-metrics
          image: registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.10.0
          ports:
            - containerPort: 8080
              name: http-metrics
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: kube-state-metrics-pdb
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: kube-state-metrics
```

Most users will scrape at the service level, e.g. via a `ServiceMonitor` from the Prometheus Operator or similar. When scraping the individual pods directly in an HA setup, Prometheus will ingest duplicate metric series that are distinguished only by the `instance` label, so you have to deduplicate the data in your queries, for example with `max without(instance) (your_metric)`. The choice of aggregation function (`max`, `sum`, `avg`, etc.) matters and depends on the metric type: using the wrong one can produce incorrect values for timestamp metrics or during brief state transitions.
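As a minimal sketch of such deduplication, a Prometheus recording rule can collapse the duplicate series once so that dashboards and alerts query the deduplicated result directly. The group and record names, and the use of `kube_pod_status_phase`, are purely illustrative:

```yaml
# Illustrative Prometheus rule file; these names are not shipped with kube-state-metrics.
groups:
  - name: kube-state-metrics-dedup
    rules:
      # Collapse the duplicate series from the replicas by dropping the
      # scrape target's instance label. max is a safe choice for 0/1 gauges
      # such as kube_pod_status_phase.
      - record: cluster:kube_pod_status_phase:max
        expr: max without (instance) (kube_pod_status_phase)
```

Queries can then use `cluster:kube_pod_status_phase:max` instead of the raw series. For timestamp gauges such as `kube_pod_created`, `max` (or `min`) is appropriate, while `sum` would double the value across replicas.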

### Setup

Install this project to your `$GOPATH` using `go get`:
45 changes: 45 additions & 0 deletions README.md.tpl
Original file line number Diff line number Diff line change