articles/azure-monitor/essentials/prometheus-metrics-scrape-configuration.md
### Customizing metrics collected by default targets
By default, for all the default targets, only minimal metrics used in the default recording rules, alerts, and Grafana dashboards are ingested as described in [minimal-ingestion-profile](prometheus-metrics-scrape-configuration-minimal.md). To collect all metrics from default targets, in the configmap under `default-targets-metrics-keep-list`, set `minimalingestionprofile` to `false`.
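
For example, a fragment of the configmap's `data` section that collects all metrics from the default targets might look like the following sketch (assuming the section uses the same `key = "value"` lines as the configmap template):

```yaml
default-targets-metrics-keep-list: |-
  minimalingestionprofile = "false"
```
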
To filter in more metrics for any default targets, edit the settings under `default-targets-metrics-keep-list` for the corresponding job you'd like to change.
For example, `kubelet` is the metric filtering setting for the default target kubelet. Use the following to filter IN metrics collected for the default targets using regex-based filtering.
```
kubelet = "metricX|metricY"
apiserver = "mymetric.*"
```
To further customize the default jobs, such as changing the collection frequency or labels, disable the corresponding default target by setting its configmap value to `false`, and then apply the job using a custom configmap. For details on custom configuration, see [Customize scraping of Prometheus metrics in Azure Monitor](prometheus-metrics-scrape-configuration.md#configure-custom-prometheus-scrape-jobs).
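
For example, to take over the kubelet job, you might first turn off the built-in target in the settings configmap and then define your own `kubelet` job in a custom scrape config, as described later in this article. The following sketch assumes the default targets are toggled under a `default-scrape-settings-enabled` section of the same configmap:

```yaml
default-scrape-settings-enabled: |-
  kubelet = false
```
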
### Cluster alias
The cluster label appended to every time series scraped uses the last part of the full AKS cluster's ARM resource ID. For example, if the resource ID is `/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/rg-name/providers/Microsoft.ContainerService/managedClusters/clustername`, the cluster label is `clustername`.
To override the cluster label in the time series scraped, update the setting `cluster_alias` to any string under `prometheus-collector-settings` in the `ama-metrics-settings-configmap` [configmap](https://aka.ms/azureprometheus-addon-settings-configmap). You can either create this configmap or edit an existing one.
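
For example, a fragment of the configmap's `data` section that sets the alias (the alias value here is a placeholder):

```yaml
prometheus-collector-settings: |-
  cluster_alias = "myclusteralias"
```
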
> [!NOTE]
> Only alphanumeric characters are allowed. Any other characters are replaced with `_`. This is to ensure that different components that consume this label adhere to the basic alphanumeric convention.
### Debug mode
To view every metric that is being scraped for debugging purposes, the metrics addon agent can be configured to run in debug mode by updating the setting `enabled` to `true` under the `debug-mode` setting in the `ama-metrics-settings-configmap` [configmap](https://aka.ms/azureprometheus-addon-settings-configmap). You can either create this configmap or edit an existing one. See [the Debug Mode section in Troubleshoot collection of Prometheus metrics](prometheus-metrics-troubleshoot.md#debug-mode) for more details.
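
For example, a fragment of the configmap's `data` section that turns on debug mode (a sketch; the section is assumed to follow the same `key = value` format as the other settings):

```yaml
debug-mode: |-
  enabled = true
```
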
### Scrape interval settings
To update the scrape interval settings for any target, update the duration in the `default-targets-scrape-interval-settings` setting for that target in the `ama-metrics-settings-configmap` [configmap](https://aka.ms/azureprometheus-addon-settings-configmap). The scrape intervals have to be set in the correct format specified in the [Prometheus configuration reference](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#configuration-file); otherwise, the default value of 30 seconds is applied to the corresponding targets.
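
For example, a fragment of the configmap's `data` section that changes the kubelet scrape interval to 60 seconds, using the Prometheus duration format (`kubelet` here stands for whichever target you're adjusting):

```yaml
default-targets-scrape-interval-settings: |-
  kubelet = "60s"
```
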
## Configure custom Prometheus scrape jobs
You can configure the metrics addon to scrape targets other than the default ones, using the same configuration format as the [Prometheus configuration file](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#configuration-file).
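
For example, a minimal custom scrape job for a single in-cluster service might look like the following sketch (the job name, service address, and port are placeholders):

```yaml
scrape_configs:
- job_name: my-app
  scrape_interval: 30s
  static_configs:
  # Replace with the DNS name and port your service actually exposes for metrics
  - targets: ['my-service.my-namespace.svc.cluster.local:8080']
```
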
### Advanced Setup: Configure custom Prometheus scrape jobs for the daemonset
The `ama-metrics` replicaset pod consumes the custom Prometheus config and scrapes the specified targets. For a cluster with a large number of nodes and pods and a large volume of metrics to scrape, some of the applicable custom scrape targets can be off-loaded from the single `ama-metrics` replicaset pod to the `ama-metrics` daemonset pod. The [ama-metrics-prometheus-config-node configmap](https://aka.ms/azureprometheus-addon-ds-configmap), similar to the regular configmap, can be created to have static scrape configs on each node. The scrape config should only target a single node and shouldn't use service discovery. Otherwise each node will try to scrape all targets and will make many calls to the Kubernetes API server. The `node-exporter` config below is one of the default targets for the daemonset pods. It uses the `$NODE_IP` environment variable, which is already set for every ama-metrics addon container to target a specific port on the node:
```yaml
- job_name: node
```
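
For example, a custom static scrape job suited to the daemonset configmap might look like the following sketch. The job name and port are placeholders; `$NODE_IP` is the environment variable described above and resolves to the node that the daemonset pod runs on:

```yaml
scrape_configs:
- job_name: node-local-app
  scrape_interval: 30s
  static_configs:
  # $NODE_IP is set in every ama-metrics addon container; 9100 is an example port only
  - targets: ['$NODE_IP:9100']
```
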
Any unsupported sections need to be removed from the config before it's applied as a configmap. Otherwise, the custom configuration will fail validation and won't be applied.
Refer to the [Apply config file](prometheus-metrics-scrape-validate.md#apply-config-file) section to create a configmap from the Prometheus config.
#### Kubernetes Service Discovery config
Targets discovered using [`kubernetes_sd_configs`](https://aka.ms/azureprometheus-promioconfig-sdk8s) will each have different `__meta_*` labels depending on what role is specified. The labels can be used in the `relabel_configs` section to filter targets or replace labels for the targets.
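
For example, the following sketch discovers pods and keeps only the targets in a particular namespace by matching one of the `__meta_*` labels (the job name and namespace are placeholders):

```yaml
scrape_configs:
- job_name: pods-in-my-namespace
  kubernetes_sd_configs:
  - role: pod
  relabel_configs:
  # Keep only pod targets whose namespace matches, using a __meta_* label from the pod role
  - source_labels: [__meta_kubernetes_namespace]
    action: keep
    regex: my-namespace
```
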
See the [Prometheus examples](https://aka.ms/azureprometheus-promsampleossconfig) of scrape configs for a Kubernetes cluster.
### Relabel configs
The `relabel_configs` section is applied at the time of target discovery and applies to each target for the job. Below are examples showing ways to use `relabel_configs`.
#### Adding a label
Add a new label called `example_label` with the value `example_value` to every metric of the job. Use `__address__` as the source label only because that label always exists; this adds the label to every target of the job.
```yaml
relabel_configs:
- source_labels: [__address__]
  target_label: example_label
  replacement: example_value
```

The scrape config below uses the `__meta_*` labels added from the `kubernetes_sd_configs`.
To scrape certain pods, specify the port, path, and scheme through annotations for the pod. The job below scrapes only the address specified by the annotation:
- `prometheus.io/scrape`: Enable scraping for this pod
- `prometheus.io/scheme`: If the metrics endpoint is secured, you'll need to set the scheme to `https` and most likely set the TLS config.
- `prometheus.io/path`: If the metrics path isn't /metrics, define it with this annotation.
- `prometheus.io/port`: Specify a single, desired port to scrape
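
Put together, a job that honors these annotations follows the standard Prometheus pod-annotation relabeling pattern, sketched below. The job name is a placeholder, and the exact relabel rules in the addon's sample config may differ:

```yaml
scrape_configs:
- job_name: pod-annotation-scraping
  kubernetes_sd_configs:
  - role: pod
  relabel_configs:
  # Scrape only pods that set the prometheus.io/scrape: "true" annotation
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
    action: keep
    regex: true
  # If prometheus.io/scheme is specified, scrape with this scheme instead of http
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scheme]
    action: replace
    regex: (https?)
    target_label: __scheme__
  # If prometheus.io/path is specified, scrape that path instead of /metrics
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
    action: replace
    regex: (.+)
    target_label: __metrics_path__
  # If prometheus.io/port is specified, scrape the pod address on that port
  - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
    action: replace
    regex: ([^:]+)(?::\d+)?;(\d+)
    replacement: $1:$2
    target_label: __address__
```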