docs: High Availability Setup documentation #2715
base: main
Changes from 3 commits
@@ -53,6 +53,7 @@ are deleted they are no longer visible on the `/metrics` endpoint.
* [Horizontal sharding](#horizontal-sharding)
* [Automated sharding](#automated-sharding)
* [Daemonset sharding for pod metrics](#daemonset-sharding-for-pod-metrics)
* [High Availability](#high-availability)
* [Setup](#setup)
* [Building the Docker container](#building-the-docker-container)
* [Usage](#usage)
@@ -304,6 +305,12 @@ spec:

Other metrics can be sharded via [Horizontal sharding](#horizontal-sharding).

### High Availability
For high availability, run multiple kube-state-metrics replicas to prevent a single point of failure. A standard setup uses at least 2 replicas, pod anti-affinity rules to ensure they run on different nodes, and a PodDisruptionBudget (PDB) with `minAvailable: 1` to protect against voluntary disruptions.
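The setup described above might look roughly like the following manifest sketch. The label values, image tag, and object names here are illustrative assumptions, not taken from this repository's example manifests:

```yaml
# Sketch: 2 replicas, spread across nodes via required pod anti-affinity.
# Labels and image tag are assumptions; adjust to match your installation.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-state-metrics
spec:
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/name: kube-state-metrics
  template:
    metadata:
      labels:
        app.kubernetes.io/name: kube-state-metrics
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app.kubernetes.io/name: kube-state-metrics
              topologyKey: kubernetes.io/hostname
      containers:
        - name: kube-state-metrics
          image: registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.13.0
---
# PDB so voluntary disruptions (e.g. node drains) never take down both replicas.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: kube-state-metrics
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: kube-state-metrics
```

With `requiredDuringSchedulingIgnoredDuringExecution`, a replica stays Pending rather than co-scheduling on the same node; use the `preferred` variant if your cluster is small.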
When scraping the individual pods directly in an HA setup, Prometheus ingests duplicate metrics that differ only in the `instance` label. You must deduplicate them in your queries, for example with `max without(instance) (your_metric)`. Choosing the right aggregation function (`max`, `sum`, `avg`, etc.) matters and depends on the metric type: the wrong one can produce incorrect values for timestamp metrics or during brief state transitions.
Review comment: Is pod scraping that common? I would assume most folks will scrape at the service level via a ServiceMonitor / Prometheus-Operator or similar.

Reply: yeah right
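The deduplication described above can be sketched with real kube-state-metrics series (`kube_pod_status_ready`, `kube_pod_created`); the point is the choice of aggregation, not the exact metrics:

```promql
# Two replicas produce duplicate series that differ only in `instance`.
# For gauges where every replica reports the same value, collapse them:
max without (instance) (kube_pod_status_ready{condition="true"})

# For timestamp metrics, `max` keeps the most recent observation, which is
# usually what you want during a brief transition between replicas:
max without (instance) (kube_pod_created)

# `sum without (instance) (...)` would be wrong for both of the above:
# it double-counts whenever both replicas report the series.
```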

### Setup

Install this project to your `$GOPATH` using `go get`:
|
Review comment: I would suggest adding an introductory paragraph:

* It should mention that multiple replicas increase the load on the Kubernetes API server as a trade-off.
* It should mention that you most likely don't need HA if you scrape every 30s and can tolerate a few missing scrapes (which is usually the case).
* Does a "standard" setup exist?