
Commit 10e7039

update resource metrics pipeline section
1 parent 45061ef commit 10e7039

File tree

2 files changed: +171, -47 lines changed

content/en/docs/concepts/configuration/manage-resources-containers.md

Lines changed: 1 addition & 1 deletion
@@ -231,7 +231,7 @@ The kubelet reports the resource usage of a Pod as part of the Pod
 
 If optional [tools for monitoring](/docs/tasks/debug-application-cluster/resource-usage-monitoring/)
 are available in your cluster, then Pod resource usage can be retrieved either
-from the [Metrics API](/docs/tasks/debug-application-cluster/resource-metrics-pipeline/#the-metrics-api)
+from the [Metrics API](/docs/tasks/debug-application-cluster/resource-metrics-pipeline/#metrics-api)
 directly or from your monitoring tools.
 
 ## Local ephemeral storage
Lines changed: 170 additions & 46 deletions
@@ -1,76 +1,200 @@
 ---
 reviewers:
 - fgrzadkowski
-- piosz
-title: Resource metrics pipeline
+- piosz
+title: Resource metrics pipeline
 content_type: concept
 ---
 
 <!-- overview -->
 
-Resource usage metrics, such as container CPU and memory usage,
-are available in Kubernetes through the Metrics API. These metrics can be accessed either directly
-by the user with the `kubectl top` command, or by a controller in the cluster, for example
-Horizontal Pod Autoscaler, to make decisions.
+For Kubernetes, the _Metrics API_ offers a basic set of metrics to support automatic scaling and similar use cases.
+This API makes information available about resource usage for node and pod, including metrics for CPU and memory.
+If you deploy the Metrics API into your cluster, clients of the Kubernetes API can then query for this information, and
+you can use Kubernetes' access control mechanisms to manage permissions to do so.
 
-<!-- body -->
+The [HorizontalPodAutoscaler](/docs/tasks/run-application/horizontal-pod-autoscale/) (HPA) and [VerticalPodAutoscaler](https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler#readme) (VPA) use data from the Metrics API to adjust workload replicas and resources to meet customer demand.
 
-## The Metrics API
+You can also view the resource metrics using the [`kubectl top`](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#top) command.
 
-Through the Metrics API, you can get the amount of resource currently used
-by a given node or a given pod. This API doesn't store the metric values,
-so it's not possible, for example, to get the amount of resources used by a
-given node 10 minutes ago.
+{{< note >}}
+The Metrics API, and the metrics pipeline that it enables, only offers the minimum
+CPU and memory metrics to enable automatic scaling using HPA and/or VPA.
+If you would like to provide a more complete set of metrics, you can complement
+the simpler Metrics API by deploying a second
+[metrics pipeline](/docs/tasks/debug-application-cluster/resource-usage-monitoring/#full-metrics-pipeline)
+that uses the _Custom Metrics API_.
+{{< /note >}}
 
-The API is no different from any other API:
 
-- it is discoverable through the same endpoint as the other Kubernetes APIs under the path: `/apis/metrics.k8s.io/`
-- it offers the same security, scalability, and reliability guarantees
+Figure 1 illustrates the architecture of the resource metrics pipeline.
+
+{{< mermaid >}}
+flowchart RL
+subgraph cluster[Cluster]
+direction RL
+S[ <br><br> ]
+A[Metrics-<br>Server]
+subgraph B[Nodes]
+direction TB
+D[cAdvisor] --> C[kubelet]
+E[Container<br>runtime] --> D
+E1[Container<br>runtime] --> D
+P[pod data] -.- C
+end
+L[API<br>server]
+W[HPA]
+C ---->|Summary<br>API| A -->|metrics<br>API| L --> W
+end
+L ---> K[kubectl<br>top]
+classDef box fill:#fff,stroke:#000,stroke-width:1px,color:#000;
+class W,B,P,K,cluster,D,E,E1 box
+classDef spacewhite fill:#ffffff,stroke:#fff,stroke-width:0px,color:#000
+class S spacewhite
+classDef k8s fill:#326ce5,stroke:#fff,stroke-width:1px,color:#fff;
+class A,L,C k8s
+{{< /mermaid >}}
+
+Figure 1. Resource Metrics Pipeline
+
+The architecture components, from right to left in the figure, consist of the following:
+
+* [cAdvisor](https://github.com/google/cadvisor) - Daemon for collecting, aggregating and exposing container metrics, included in the kubelet. cAdvisor reads metrics from cgroups, which allows out-of-the-box support for Docker. Note that container runtimes other than Docker need to support the [container metrics RPCs](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-node/cri-container-stats.md) for metrics to be available.
+* [kubelet](https://kubernetes.io/docs/concepts/overview/components/#kubelet) - Node agent for managing container resources. Resource metrics are accessible using the `/metrics/resource` and `/stats` kubelet API endpoints.
+* [Summary API](#summary-api-source) - Kubelet API for discovering and retrieving per-node summarized stats available through the `/stats` endpoint.
+* [metrics-server](#metrics-server) - Cluster addon component that collects and aggregates resource metrics pulled from each kubelet. The Kubernetes API server serves the Metrics API endpoints for use by the HPA, the VPA and the `kubectl top` command.
+* [Metrics API](#metrics-api) - Kubernetes API supporting access to CPU and memory used for workload autoscaling. Metrics Server is the default implementation provided with many popular Kubernetes distributions; however, it can be replaced by alternative adapters based on your preferred monitoring solution.
+
+<!-- body -->
 
-The API is defined in [k8s.io/metrics](https://github.com/kubernetes/metrics/blob/master/pkg/apis/metrics/v1beta1/types.go)
-repository. You can find more information about the API there.
+## Metrics API
+
+The metrics-server implements the Metrics API. This API allows you to access CPU and memory usage for the nodes and pods in your cluster. Its primary role is to feed resource usage metrics to Kubernetes autoscaler components.
+
+Here is an example of the Metrics API request for a `minikube` node, piped through `jq` for easier reading:
+```shell
+kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes/minikube" | jq '.'
+```
+
+Here is the same API call using `curl`:
+```shell
+curl http://localhost:8080/apis/metrics.k8s.io/v1beta1/nodes/minikube
+```
+Sample reply:
+```json
+{
+  "kind": "NodeMetrics",
+  "apiVersion": "metrics.k8s.io/v1beta1",
+  "metadata": {
+    "name": "minikube",
+    "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/minikube",
+    "creationTimestamp": "2022-01-27T18:48:43Z"
+  },
+  "timestamp": "2022-01-27T18:48:33Z",
+  "window": "30s",
+  "usage": {
+    "cpu": "487558164n",
+    "memory": "732212Ki"
+  }
+}
+```
+Here is an example of the Metrics API request for a `kube-scheduler-minikube` pod contained in the `kube-system` namespace, piped through `jq` for easier reading:
+
+```shell
+kubectl get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/kube-scheduler-minikube" | jq '.'
+```
+Here is the same API call using `curl`:
+```shell
+curl http://localhost:8080/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/kube-scheduler-minikube
+```
+Sample reply:
+```json
+{
+  "kind": "PodMetrics",
+  "apiVersion": "metrics.k8s.io/v1beta1",
+  "metadata": {
+    "name": "kube-scheduler-minikube",
+    "namespace": "kube-system",
+    "selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/kube-scheduler-minikube",
+    "creationTimestamp": "2022-01-27T19:25:00Z"
+  },
+  "timestamp": "2022-01-27T19:24:31Z",
+  "window": "30s",
+  "containers": [
+    {
+      "name": "kube-scheduler",
+      "usage": {
+        "cpu": "9559630n",
+        "memory": "22244Ki"
+      }
+    }
+  ]
+}
+```
+
+The Metrics API is defined in the [k8s.io/metrics](https://github.com/kubernetes/metrics) repository. You must enable the [API aggregation layer](/docs/tasks/extend-kubernetes/configure-aggregation-layer/) and register an [APIService](/docs/reference/kubernetes-api/cluster-resources/api-service-v1/) for the `metrics.k8s.io` API.
+
+To learn more about the Metrics API, see the [resource metrics API design](https://github.com/kubernetes/design-proposals-archive/blob/main/instrumentation/resource-metrics-api.md), the [metrics-server repository](https://github.com/kubernetes-sigs/metrics-server) and the [resource metrics API](https://github.com/kubernetes/metrics#resource-metrics-api).
+
+
+{{< note >}} You must deploy the metrics-server or an alternative adapter that serves the Metrics API to be able to access it. {{< /note >}}
+
+## Measuring resource usage
 
-{{< note >}}
-The API requires the metrics server to be deployed in the cluster. Otherwise it will be not available.
-{{< /note >}}
+### CPU
 
-## Measuring Resource Usage
+CPU is reported as the average core usage measured in cpu units. One cpu, in Kubernetes, is equivalent to 1 vCPU/Core for cloud providers, and 1 hyper-thread on bare-metal Intel processors.
 
-### CPU
+This value is derived by taking a rate over a cumulative CPU counter provided by the kernel (in both Linux and Windows kernels). The time window used to calculate CPU is shown under the `window` field in the Metrics API.
 
-CPU is reported as the average usage, in
-[CPU cores](/docs/concepts/configuration/manage-resources-containers/#meaning-of-cpu),
-over a period of time. This value is derived by taking a rate over a cumulative CPU counter
-provided by the kernel (in both Linux and Windows kernels).
-The kubelet chooses the window for the rate calculation.
+To learn more about how Kubernetes allocates and measures CPU resources, see [meaning of CPU](/docs/concepts/configuration/manage-compute-resources-container/#meaning-of-cpu).
 
 ### Memory
 
-Memory is reported as the working set, in bytes, at the instant the metric was collected.
-In an ideal world, the "working set" is the amount of memory in-use that cannot be freed under memory pressure.
-However, calculation of the working set varies by host OS, and generally makes heavy use of heuristics to produce an estimate.
-It includes all anonymous (non-file-backed) memory since Kubernetes does not support swap.
-The metric typically also includes some cached (file-backed) memory, because the host OS cannot always reclaim such pages.
+Memory is reported as the working set, measured in bytes, at the instant the metric was collected.
+
+In an ideal world, the "working set" is the amount of memory in-use that cannot be freed under memory pressure. However, calculation of the working set varies by host OS, and generally makes heavy use of heuristics to produce an estimate.
+
+The Kubernetes model for a container's working set expects that the container runtime counts anonymous memory associated with the container in question. The working set metric typically also includes some cached (file-backed) memory, because the host OS cannot always reclaim pages.
+
+To learn more about how Kubernetes allocates and measures memory resources, see [meaning of memory](/docs/concepts/configuration/manage-compute-resources-container/#meaning-of-memory).
 
 ## Metrics Server
 
-[Metrics Server](https://github.com/kubernetes-sigs/metrics-server) is a cluster-wide aggregator of resource usage data.
-By default, it is deployed in clusters created by `kube-up.sh` script
-as a Deployment object. If you use a different Kubernetes setup mechanism, you can deploy it using the provided
-[deployment components.yaml](https://github.com/kubernetes-sigs/metrics-server/releases) file.
+The metrics-server fetches resource metrics from the kubelets and exposes them in the Kubernetes API server through the Metrics API for use by the HPA and VPA. You can also view these metrics using the `kubectl top` command.
+
+The metrics-server uses the Kubernetes API to track nodes and pods in your cluster. The metrics-server queries each node over HTTP to fetch metrics. The metrics-server also builds an internal view of pod metadata, and keeps a cache of pod health. That cached pod health information is available via the extension API that the metrics-server makes available.
+
+For example with an HPA query, the metrics-server needs to identify which pods fulfill the label selectors in the deployment.
+
+The metrics-server calls the [kubelet](/docs/reference/command-line-tools-reference/kubelet/) API to collect metrics from each node. Depending on the metrics-server version it uses:
+* Metrics resource endpoint `/metrics/resource` in version v0.6.0+ or
+* Summary API endpoint `/stats/summary` in older versions
+
 
-Metrics Server collects metrics from the Summary API, exposed by
-[Kubelet](/docs/reference/command-line-tools-reference/kubelet/) on each node, and is registered with the main API server via
-[Kubernetes aggregator](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/).
+To learn more about the metrics-server, see the [metrics-server repository](https://github.com/kubernetes-sigs/metrics-server).
 
-Learn more about the metrics server in
-[the design doc](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/metrics-server.md).
+You can also check out the following:
 
-### Summary API Source
-The [Kubelet](/docs/reference/command-line-tools-reference/kubelet/) gathers stats at node, volume, pod and container level, and emits their statistics in
+* [metrics-server design](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/metrics-server.md)
+* [metrics-server FAQ](https://github.com/kubernetes-sigs/metrics-server/blob/master/FAQ.md)
+* [metrics-server known issues](https://github.com/kubernetes-sigs/metrics-server/blob/master/KNOWN_ISSUES.md)
+* [metrics-server releases](https://github.com/kubernetes-sigs/metrics-server/releases)
+* [Horizontal Pod Autoscaling](/docs/tasks/run-application/horizontal-pod-autoscale/)
+
+### Summary API source
+
+The [kubelet](/docs/reference/command-line-tools-reference/kubelet/) gathers stats at the node, volume, pod and container level, and emits this information in
 the [Summary API](https://github.com/kubernetes/kubernetes/blob/7d309e0104fedb57280b261e5677d919cb2a0e2d/staging/src/k8s.io/kubelet/pkg/apis/stats/v1alpha1/types.go)
 for consumers to read.
 
-Pre-1.23, these resources have been primarily gathered from [cAdvisor](https://github.com/google/cadvisor). However, in 1.23 with the
-introduction of the `PodAndContainerStatsFromCRI` FeatureGate, container and pod level stats can be gathered by the CRI implementation.
-Note: this also requires support from the CRI implementations (containerd >= 1.6.0, CRI-O >= 1.23.0).
+Here is an example of a Summary API request for a `minikube` node:
+
+
+```shell
+kubectl get --raw "/api/v1/nodes/minikube/proxy/stats/summary"
+```
+Here is the same API call using `curl`:
+```shell
+curl http://localhost:8080/api/v1/nodes/minikube/proxy/stats/summary
+```
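The new text notes that you must register an APIService for the `metrics.k8s.io` group. For reference, a minimal sketch of such a registration; the Service name and namespace are assumptions that depend on how you deploy the adapter (metrics-server's own manifests ship an equivalent object):

```yaml
# Sketch: registers metrics.k8s.io/v1beta1 with the aggregation layer
# and points it at the in-cluster service that implements the API.
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  version: v1beta1
  groupPriorityMinimum: 100
  versionPriority: 100
  service:
    name: metrics-server      # assumed Service name
    namespace: kube-system    # assumed namespace
```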
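The new overview also mentions using Kubernetes' access control mechanisms to manage who may query the Metrics API. A hedged RBAC sketch (the role name is illustrative; bind it to users or service accounts with a ClusterRoleBinding as usual):

```yaml
# Sketch: ClusterRole granting read-only access to Metrics API resources.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: metrics-reader   # example name, not defined by Kubernetes
rules:
- apiGroups: ["metrics.k8s.io"]
  resources: ["nodes", "pods"]
  verbs: ["get", "list"]
```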
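The sample Metrics API replies above report CPU in nanocores (the `n` suffix, for example `487558164n`). As an illustrative sketch (not part of any Kubernetes tooling), a client could convert such a value into cpu units like this:

```shell
# Convert a Metrics API CPU quantity in nanocores (e.g. "487558164n")
# into cpu units (cores): 1 cpu = 1e9 nanocores.
echo "487558164n" | awk '{ sub(/n$/, "", $1); printf "%.2f cpu\n", $1 / 1e9 }'
```

So the `minikube` node in the sample reply was using roughly half a core over its 30s window.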
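The "rate over a cumulative CPU counter" described in the CPU section can be illustrated numerically; the start, end, and window values below are invented for the example:

```shell
# Illustrative rate calculation: if the cumulative counter grew from
# 100.0 to 115.0 cpu-seconds over a 30-second window, the average
# usage is (115.0 - 100.0) / 30 = 0.5 cpu.
awk 'BEGIN { start = 100.0; end = 115.0; window = 30; printf "%.2f cpu\n", (end - start) / window }'
```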
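Memory quantities in the sample replies use binary suffixes (for example `732212Ki`, where 1 Ki = 1024 bytes). A hedged sketch of the conversion a client might perform:

```shell
# Convert a Metrics API memory quantity in kibibytes (e.g. "732212Ki")
# into bytes: 1 Ki = 1024 bytes.
echo "732212Ki" | awk '{ sub(/Ki$/, "", $1); printf "%d bytes\n", $1 * 1024 }'
```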
