articles/azure-monitor/containers/prometheus-metrics-scrape-configuration.md (3 additions & 0 deletions)
@@ -248,6 +248,9 @@ relabelings:
       targetLabel: instance
 ```
 
+> [!NOTE]
+> If you have relabeling configs, ensure that the relabeling does not filter out the targets and that the configured labels correctly match the targets.
+
 ### Metric Relabelings
 
 Metric relabelings are applied after scraping and before ingestion. Use the `metricRelabelings` section to filter metrics after scraping. The following examples show how to do so.
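The added note warns about relabelings that unintentionally drop targets. The following is a minimal illustrative sketch, not taken from the article: the `app` pod label and `my-app` regex are placeholders. The first rule drops every target whose label doesn't match the regex, while the second only rewrites the `instance` label and never removes targets:

```yaml
relabelings:
  # 'keep' filters targets: any target without a pod label app=my-app is dropped.
  # If no discovered target carries that label, nothing is scraped at all.
  - sourceLabels: [__meta_kubernetes_pod_label_app]
    action: keep
    regex: my-app
  # 'replace' only rewrites the instance label; no targets are filtered out.
  - sourceLabels: [__meta_kubernetes_pod_name]
    action: replace
    targetLabel: instance
```

If targets disappear after adding a `keep` or `drop` rule, compare the rule's `regex` against the actual label values on the discovered targets before assuming a scraping problem.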
articles/azure-monitor/containers/prometheus-metrics-troubleshoot.md (8 additions & 2 deletions)
@@ -16,6 +16,8 @@ Replica pod scrapes metrics from `kube-state-metrics`, custom scrape targets in
 
 If you encounter an error while you attempt to enable monitoring for your AKS cluster, follow the instructions [here](https://github.com/Azure/prometheus-collector/tree/main/internal/scripts/troubleshoot) to run the troubleshooting script. The script performs a basic diagnosis of any configuration issues on your cluster, and you can attach the generated files when creating a support request for faster resolution of your support case.
 
+## Missing metrics
+
 ## Metrics Throttling
 
 In the Azure portal, navigate to your Azure Monitor Workspace. Go to `Metrics` and verify that the metrics `Active Time Series % Utilization` and `Events Per Minute Ingested % Utilization` are below 100%.
@@ -53,6 +55,10 @@ kubectl describe pod <ama-metrics pod name> -n kube-system
 
 If the pods are running as expected, the next place to check is the container logs.
 
+## Check for relabeling configs
+
+If metrics are missing, you can also check if you have relabeling configs. With relabeling configs, ensure that the relabeling does not filter out the targets and that the configured labels correctly match the targets. Refer to the [Prometheus relabel config documentation](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config) for more details.
+
 ## Container logs
 View the container logs with the following command:
 
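For the new "Check for relabeling configs" section, the sketch below shows the kind of rule to look for in a custom scrape config. It uses the Prometheus-native `relabel_configs` syntax from the linked documentation; the job name and the annotation convention are placeholders assumed for illustration, not values from the article:

```yaml
scrape_configs:
  - job_name: my-app            # placeholder job name
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # 'keep' drops every pod not annotated prometheus.io/scrape: "true";
      # if your pods lack the annotation, all targets are filtered out.
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
      # 'replace' only rewrites the instance label and never drops targets.
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: instance
```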
@@ -159,7 +165,7 @@ Go to `127.0.0.1:9091/metrics` in a browser to see if the metrics were scraped b
 
 ## Metric names, label names & label values
 
-Agent based scraping currently has the limitations in the following table:
+Metrics scraping currently has the limitations in the following table:
 
 | Property | Limit |
 |:---|:---|
@@ -180,7 +186,7 @@ If you see metrics missed, you can first check if the ingestion limits are being
 - Events Per Minute Ingested Limit - The maximum number of events per minute that can be ingested before getting throttled
 - Events Per Minute Ingested % Utilization - The percentage of the current metric ingestion rate limit being utilized
 
-To avoid metrics ingestion throttling, you can monitor and set up an alert on the ingestion limits. See [Monitor ingestion limits](../essentials/prometheus-metrics-overview.md#how-can-i-monitor-the-service-limits-and-quota).
+To avoid metrics ingestion throttling, you can **monitor and set up an alert on the ingestion limits**. See [Monitor ingestion limits](../essentials/prometheus-metrics-overview.md#how-can-i-monitor-the-service-limits-and-quota).
 
 Refer to [service quotas and limits](../service-limits.md#prometheus-metrics) for default quotas and also to understand what can be increased based on your usage. You can request a quota increase for Azure Monitor workspaces using the `Support Request` menu for the Azure Monitor workspace. Ensure you include the ID, internal ID, and Location/Region for the Azure Monitor workspace in the support request, which you can find in the `Properties` menu for the Azure Monitor workspace in the Azure portal.