| Name | Type | Description |
|------|------|-------------|
|[workqueue_depth](https://github.com/kubernetes-sigs/controller-runtime/blob/v0.11.0/pkg/metrics/workqueue.go#L41)| Gauge | Current depth of workqueue. |
|[workqueue_adds_total](https://github.com/kubernetes-sigs/controller-runtime/blob/v0.11.0/pkg/metrics/workqueue.go#L47)| Counter | Total number of adds handled by workqueue. |
|[workqueue_queue_duration_seconds](https://github.com/kubernetes-sigs/controller-runtime/blob/v0.11.0/pkg/metrics/workqueue.go#L53)| Histogram | How long in seconds an item stays in workqueue before being requested. |
|[workqueue_work_duration_seconds](https://github.com/kubernetes-sigs/controller-runtime/blob/v0.11.0/pkg/metrics/workqueue.go#L60)| Histogram | How long in seconds processing an item from workqueue takes. |
|[workqueue_unfinished_work_seconds](https://github.com/kubernetes-sigs/controller-runtime/blob/v0.11.0/pkg/metrics/workqueue.go#L67)| Gauge | How many seconds of work has been done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases. |
|[workqueue_longest_running_processor_seconds](https://github.com/kubernetes-sigs/controller-runtime/blob/v0.11.0/pkg/metrics/workqueue.go#L76)| Gauge | How many seconds the longest running processor for workqueue has been running. |
|[workqueue_retries_total](https://github.com/kubernetes-sigs/controller-runtime/blob/v0.11.0/pkg/metrics/workqueue.go#L83)| Counter | Total number of retries handled by workqueue. |
|[rest_client_requests_total](https://github.com/kubernetes-sigs/controller-runtime/blob/v0.11.0/pkg/metrics/client_go_adapter.go#L79)| Counter | Number of HTTP requests, partitioned by status code, method, and host. |
|[controller_runtime_reconcile_total](https://github.com/kubernetes-sigs/controller-runtime/blob/v0.11.0/pkg/internal/controller/metrics/metrics.go#L30)| Counter | Total number of reconciliations per controller. |
|[controller_runtime_reconcile_errors_total](https://github.com/kubernetes-sigs/controller-runtime/blob/v0.11.0/pkg/internal/controller/metrics/metrics.go#L37)| Counter | Total number of reconciliation errors per controller. |
|[controller_runtime_reconcile_time_seconds](https://github.com/kubernetes-sigs/controller-runtime/blob/v0.11.0/pkg/internal/controller/metrics/metrics.go#L44)| Histogram | Length of time per reconciliation per controller. |
|[controller_runtime_max_concurrent_reconciles](https://github.com/kubernetes-sigs/controller-runtime/blob/v0.11.0/pkg/internal/controller/metrics/metrics.go#L53)| Gauge | Maximum number of concurrent reconciles per controller. |
|[controller_runtime_active_workers](https://github.com/kubernetes-sigs/controller-runtime/blob/v0.11.0/pkg/internal/controller/metrics/metrics.go#L60)| Gauge | Number of currently used workers per controller. |
|[controller_runtime_webhook_latency_seconds](https://github.com/kubernetes-sigs/controller-runtime/blob/v0.11.0/pkg/webhook/internal/metrics/metrics.go#L31)| Histogram | Histogram of the latency of processing admission requests. |
|[controller_runtime_webhook_requests_total](https://github.com/kubernetes-sigs/controller-runtime/blob/v0.11.0/pkg/webhook/internal/metrics/metrics.go#L40)| Counter | Total number of admission requests by HTTP status code. |
|[controller_runtime_webhook_requests_in_flight](https://github.com/kubernetes-sigs/controller-runtime/blob/v0.11.0/pkg/webhook/internal/metrics/metrics.go#L51)| Gauge | Current number of admission requests being served. |
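These metrics are served on the manager's metrics endpoint. In a default operator-sdk/kubebuilder scaffold that endpoint is exposed to Prometheus through a metrics `Service` roughly like the sketch below; the `control-plane: controller-manager` label, the `https` port name, and port `8443` are assumptions taken from the standard scaffold and may differ in your project.

```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    control-plane: controller-manager
  name: controller-manager-metrics-service
  namespace: system
spec:
  ports:
    # In the default scaffold the metrics endpoint is served over HTTPS
    # (typically behind kube-rbac-proxy); adjust if you serve plain HTTP.
    - name: https
      port: 8443
      protocol: TCP
      targetPort: https
  selector:
    control-plane: controller-manager
```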
You can also apply the following `ClusterRoleBinding`:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus-k8s-rolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus-k8s-role
subjects:
  - kind: ServiceAccount
    name: <prometheus-service-account>
    namespace: <prometheus-service-account-namespace>
```
The `prometheus-k8s-role` referenced here should provide the necessary permissions to allow Prometheus to scrape metrics from the operator pods.
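As an illustration, a minimal sketch of what such a `prometheus-k8s-role` might contain is shown below. The exact rules depend on how the metrics endpoint is protected in your cluster, so treat the resources and verbs here as assumptions rather than a canonical definition.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus-k8s-role
rules:
  # Let Prometheus discover and scrape pod and service endpoints.
  - apiGroups: [""]
    resources: ["services", "endpoints", "pods"]
    verbs: ["get", "list", "watch"]
  # Let Prometheus read a /metrics endpoint protected by RBAC (e.g. kube-rbac-proxy).
  - nonResourceURLs: ["/metrics"]
    verbs: ["get"]
```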
## Exporting Metrics for Prometheus
Follow the steps below to export the metrics using the Prometheus Operator:
1. Install Prometheus and the Prometheus Operator.
   We recommend using [kube-prometheus](https://github.com/coreos/kube-prometheus#installing)
   in production if you don't have your own monitoring system.
   If you are just experimenting, you can install only Prometheus and the Prometheus Operator.
2. Uncomment the line `- ../prometheus` in `config/default/kustomization.yaml`.
   This creates the `ServiceMonitor` resource, which enables exporting the metrics; a sketch of the generated `ServiceMonitor` follows the snippet below.

   ```yaml
   # [PROMETHEUS] To enable prometheus monitor, uncomment all sections with 'PROMETHEUS'.
   - ../prometheus
   ```
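For reference, the `ServiceMonitor` generated from `config/prometheus/` looks roughly like the following sketch; the labels, namespace, port name, and TLS settings are assumptions based on the default scaffold and may differ in your project.

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    control-plane: controller-manager
  name: controller-manager-metrics-monitor
  namespace: system
spec:
  endpoints:
    # Scrape the metrics Service on its HTTPS port; drop the TLS settings
    # if your project serves metrics over plain HTTP.
    - path: /metrics
      port: https
      scheme: https
      bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
      tlsConfig:
        insecureSkipVerify: true
  selector:
    matchLabels:
      control-plane: controller-manager
```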
<aside class="note">
<h1>Enabling metrics in Prometheus UI</h1>

In order to publish metrics and view them on the Prometheus UI, the Prometheus instance has to be configured to select the `ServiceMonitor` instance based on its labels, as sketched below.

</aside>
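As an illustration, and assuming the `ServiceMonitor` carries the default `control-plane: controller-manager` label, a Prometheus custom resource that selects it could look like the sketch below; the metadata name and namespace are placeholders rather than part of the scaffold.

```yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus
  namespace: monitoring
spec:
  serviceAccountName: <prometheus-service-account>
  # Select every ServiceMonitor carrying this label, including the one
  # generated for the operator's controller-manager.
  serviceMonitorSelector:
    matchLabels:
      control-plane: controller-manager
```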
Those metrics will be available for Prometheus or other OpenMetrics-compatible systems to scrape.