The following examples require the configuration described in [Send Prometheus metrics to Log Analytics workspace with Container insights](container-insights-prometheus-logs.md).
To view Prometheus metrics scraped by Azure Monitor and filtered by namespace, specify *"prometheus"*. Here's a sample query to view Prometheus metrics from the `default` Kubernetes namespace.
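A minimal sketch of such a query, assuming the standard `InsightsMetrics` table and that the Kubernetes namespace is recorded as a label in the `Tags` column (verify the exact tag key against your own data):

```kusto
InsightsMetrics
| where Namespace == "prometheus"               // metrics scraped by Container insights
| extend tags = parse_json(Tags)                // Tags holds the Prometheus labels as JSON
| where tostring(tags.namespace) == "default"   // assumed tag key for the Kubernetes namespace
| summarize count() by Name
```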
@@ -679,7 +659,7 @@ InsightsMetrics
The output will show results similar to the following example.
To estimate the size in GB of each metric over a month, and to determine whether the volume of data ingested into the workspace is high, use the following query.
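A sketch of one way to write such an estimate, assuming the standard `_BilledSize` column; it measures one day of ingestion per metric, which you can multiply by roughly 30 for a monthly figure:

```kusto
InsightsMetrics
| where Namespace contains "prometheus"
| where TimeGenerated > ago(1d)
| summarize EstimatedGBPerDay = sum(_BilledSize) / (1024 * 1024 * 1024) by Name
| order by EstimatedGBPerDay desc
```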
@@ -694,7 +674,7 @@ InsightsMetrics
The output will show results similar to the following example.
articles/azure-monitor/containers/container-insights-prometheus-logs.md
13 additions & 39 deletions
@@ -7,42 +7,16 @@ ms.date: 03/01/2023
ms.reviewer: aul
---
-# Collect Prometheus metrics with Container insights
-[Prometheus](https://aka.ms/azureprometheus-promio) is a popular open-source metric monitoring solution and is the most common monitoring tool used to monitor Kubernetes clusters. Container insights uses its containerized agent to collect much of the same data that Prometheus typically collects from the cluster without requiring a Prometheus server. This data is presented in Container insights views and available to other Azure Monitor features such as [log queries](container-insights-log-query.md) and [log alerts](container-insights-log-alerts.md).
+# Send Prometheus metrics to Log Analytics workspace with Container insights
+This article describes how to send Prometheus metrics from your Kubernetes cluster monitored by Container insights to a Log Analytics workspace. Before you perform this configuration, you should first ensure that you're [scraping Prometheus metrics from your cluster using Azure Monitor managed service for Prometheus](), which is the recommended method for monitoring your clusters. Use the configuration described in this article only if you also want to send this same data to a Log Analytics workspace where you can analyze it using [log queries](../logs/log-query-overview.md) and [log alerts](../alerts/alerts-log-query.md).
-Container insights can also scrape your custom Prometheus metrics from your application on your cluster and send the data to either Azure Monitor Logs or to Azure Monitor managed service for Prometheus (preview). This requires exposing the Prometheus metrics endpoint through your exporters or pods and then configuring one of the addons for the Azure Monitor agent used by Container insights as shown the following diagram. Metrics sent to the Log Analytics workspace are queried through log queries, whereas Metrics sent through Azure Monitor managed Prometheus are queried through PromQL and Prometheus recording rules and alerts are supported.
+This configuration requires configuring the *monitoring addon* for the Azure Monitor agent, which is the same one used by Container insights to send data to a Log Analytics workspace. It requires exposing the Prometheus metrics endpoint through your exporters or pods and then configuring the monitoring addon for the Azure Monitor agent used by Container insights, as shown in the following diagram.
-:::image type="content" source="media/container-insights-prometheus/monitoring-kubernetes-architecture.png" lightbox="media/container-insights-prometheus/monitoring-kubernetes-architecture.png" alt-text="Diagram of container monitoring architecture sending Prometheus metrics to Azure Monitor Logs." border="false":::
-## Send data to Azure Monitor managed service for Prometheus
-[Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md) is a fully managed Prometheus-compatible service that supports industry standard features such as PromQL, Grafana dashboards, and Prometheus alerts. This service requires configuring the *metrics addon* for the Azure Monitor agent, which sends data to Prometheus.
-> [!TIP]
-> You don't need to enable Container insights to configure your AKS cluster to send data to managed Prometheus. See [Collect Prometheus metrics from AKS cluster (preview)](../essentials/prometheus-metrics-enable.md) for details on how to configure your cluster without enabling Container insights.
-Use the following procedure to add Prometheus collection to your cluster that's already using Container insights.
-1. Open the **Kubernetes services** menu in the Azure portal and select your AKS cluster.
-2. Click **Insights**.
-3. Click **Monitor settings**.
-:::image type="content" source="media/container-insights-prometheus-metrics-addon/aks-cluster-monitor-settings.png" lightbox="media/container-insights-prometheus-metrics-addon/aks-cluster-monitor-settings.png" alt-text="Screenshot of button for monitor settings for an AKS cluster.":::
-4. Click the checkbox for **Enable Prometheus metrics** and select your Azure Monitor workspace.
-5. To send the collected metrics to Grafana, select a Grafana workspace. See [Create an Azure Managed Grafana instance](../../managed-grafana/quickstart-managed-grafana-portal.md) for details on creating a Grafana workspace.
-:::image type="content" source="media/container-insights-prometheus-metrics-addon/aks-cluster-monitor-settings-details.png" lightbox="media/container-insights-prometheus-metrics-addon/aks-cluster-monitor-settings-details.png" alt-text="Screenshot of monitor settings for an AKS cluster.":::
-6. Click **Configure** to complete the configuration.
-See [Collect Prometheus metrics from AKS cluster (preview)](../essentials/prometheus-metrics-enable.md) for details on [verifying your deployment](../essentials/prometheus-metrics-enable.md#verify-deployment) and [limitations](../essentials/prometheus-metrics-enable.md#limitations-during-enablementdeployment)
+:::image type="content" source="media/container-insights-prometheus/monitoring-kubernetes-architecture.png" lightbox="media/container-insights-prometheus/monitoring-kubernetes-architecture.png" alt-text="Diagram of container monitoring architecture sending Prometheus metrics to Azure Monitor Logs." border="false":::
-## Send metrics to Azure Monitor Logs
-You may want to collect more data in addition to the predefined set of data collected by Container insights. This data isn't used by Container insights views but is available for log queries and alerts like the other data it collects. This requires configuring the *monitoring addon* for the Azure Monitor agent, which is the one currently used by Container insights to send data to a Log Analytics workspace.
-###Prometheus scraping settings (for metrics stored as logs)
+## Prometheus scraping settings (for metrics stored as logs)
Active scraping of metrics from Prometheus is performed from one of the two perspectives below, and the metrics are sent to the configured Log Analytics workspace:
@@ -72,7 +46,7 @@ When a URL is specified, Container insights only scrapes the endpoint. When Kube
| Node-wide or cluster-wide |`interval`| String | 60s | The collection interval default is one minute (60 seconds). You can modify the collection interval for *[prometheus_data_collection_settings.node]* and/or *[prometheus_data_collection_settings.cluster]* by using time units such as s, m, and h. |
| Node-wide or cluster-wide |`fieldpass`<br> `fielddrop`| String | Comma-separated array | You can specify which metrics to collect from the endpoint, and which to exclude, by setting the allow (`fieldpass`) and disallow (`fielddrop`) lists. You must set the allowlist first. |
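For illustration, a minimal sketch of how these settings might appear in the `container-azm-ms-agentconfig` ConfigMap; the metric names are placeholders, and you should check the template ConfigMap for the exact structure:

```yaml
prometheus-data-collection-settings: |-
  [prometheus_data_collection_settings.cluster]
      interval = "1m"
      ## allowlist (fieldpass) must be set before the denylist (fielddrop)
      fieldpass = ["metric_to_collect_1", "metric_to_collect_2"]
      fielddrop = ["metric_to_exclude"]
```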
-###Configure ConfigMaps to specify Prometheus scrape configuration (for metrics stored as logs)
+## Configure ConfigMaps to specify Prometheus scrape configuration (for metrics stored as logs)
Perform the following steps to configure your ConfigMap configuration file for your cluster. ConfigMap is a global list, and only one ConfigMap can be applied to the agent. Another ConfigMap can't overrule the collections.
@@ -81,7 +55,7 @@ Perform the following steps to configure your ConfigMap configuration file for y
1. Edit the ConfigMap YAML file with your customizations to scrape Prometheus metrics.
-####[Cluster-wide](#tab/cluster-wide)
+### [Cluster-wide](#tab/cluster-wide)
To collect Kubernetes services cluster-wide, configure the ConfigMap file by using the following example:
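The example itself isn't reproduced here; a minimal sketch, assuming a hypothetical service endpoint that exposes metrics on port 9102, might look like this:

```yaml
prometheus-data-collection-settings: |-
  [prometheus_data_collection_settings.cluster]
      interval = "1m"
      ## Kubernetes services to scrape cluster-wide (hypothetical DNS name and port)
      kubernetes_services = ["http://my-service.my-namespace.svc.cluster.local:9102/metrics"]
```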
@@ -95,7 +69,7 @@ Perform the following steps to configure your ConfigMap configuration file for y
To configure scraping of Prometheus metrics from a specific URL across the cluster, configure the ConfigMap file by using the following example:
@@ -109,7 +83,7 @@ Perform the following steps to configure your ConfigMap configuration file for y
urls = ["http://myurl:9101/metrics"] ## An array of urls to scrape metrics from
```
-#### [DaemonSet](#tab/deamonset)
+### [DaemonSet](#tab/deamonset)
To configure scraping of Prometheus metrics from an agent's DaemonSet for every individual node in the cluster, configure the following example in the ConfigMap:
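That example isn't reproduced here either; a minimal sketch, assuming a hypothetical exporter listening on port 9103 on each node, might look like this (note the node-level settings section and the `$NODE_IP` placeholder, which the agent resolves per node as described below):

```yaml
prometheus-data-collection-settings: |-
  [prometheus_data_collection_settings.node]
      interval = "1m"
      ## $NODE_IP is replaced by the agent with the IP of the node each DaemonSet pod runs on
      urls = ["http://$NODE_IP:9103/metrics"]
```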
@@ -125,7 +99,7 @@ Perform the following steps to configure your ConfigMap configuration file for y
`$NODE_IP` is a specific Container insights parameter and can be used instead of a node IP address. It must be all uppercase.
-#### [Pod annotation](#tab/pod)
+### [Pod annotation](#tab/pod)
To configure scraping of Prometheus metrics by specifying a pod annotation:
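The steps aren't reproduced here; a minimal sketch of the two pieces involved (enabling pod scraping in the ConfigMap, then annotating the pod) might look like this, assuming the conventional `prometheus.io/*` annotations and a hypothetical port 8000:

```yaml
# In the container-azm-ms-agentconfig ConfigMap: turn on pod-annotation scraping cluster-wide
prometheus-data-collection-settings: |-
  [prometheus_data_collection_settings.cluster]
      interval = "1m"
      monitor_kubernetes_pods = true
---
# On the pod (or pod template) to be scraped
metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/path: "/metrics"   # path to the metrics endpoint
    prometheus.io/port: "8000"       # hypothetical port exposing the endpoint
```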
@@ -157,7 +131,7 @@ Perform the following steps to configure your ConfigMap configuration file for y
The configuration change can take a few minutes to finish before taking effect. All ama-logs pods in the cluster will restart. When the restarts are finished, a message appears that's similar to the following and includes the result `configmap "container-azm-ms-agentconfig" created`.
-### Verify configuration
+## Verify configuration
To verify the configuration was successfully applied to a cluster, use the following command to review the logs from an agent pod: `kubectl logs ama-logs-fdf58 -n=kube-system`.
@@ -186,11 +160,11 @@ Errors prevent Azure Monitor Agent from parsing the file, causing it to restart
For Azure Red Hat OpenShift v3.x, edit and save the updated ConfigMaps by running the command `oc edit configmaps container-azm-ms-agentconfig -n openshift-azure-logging`.
-### Query Prometheus metrics data
+## Query Prometheus metrics data
To view Prometheus metrics scraped by Azure Monitor and any configuration/scraping errors reported by the agent, review [Query Prometheus metrics data](container-insights-log-query.md#prometheus-metrics).
-### View Prometheus metrics in Grafana
+## View Prometheus metrics in Grafana
Container insights supports viewing metrics stored in your Log Analytics workspace in Grafana dashboards. We've provided a template that you can download from Grafana's [dashboard repository](https://grafana.com/grafana/dashboards?dataSource=grafana-azure-monitor-datasource&category=docker). Use the template to get started and reference it to help you learn how to query other data from your monitored clusters to visualize in custom Grafana dashboards.
articles/azure-monitor/containers/integrate-keda.md
2 additions & 2 deletions
@@ -28,7 +28,7 @@ This article walks you through the steps to integrate KEDA into your AKS cluster
## Prerequisites
+ Azure Kubernetes Service (AKS) cluster
-+ Prometheus sending metrics to an Azure Monitor workspace. For more information, see [Azure Monitor managed service for Prometheus](./prometheus-metrics-overview.md).
++ Prometheus sending metrics to an Azure Monitor workspace. For more information, see [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md).
## Set up a workload identity
@@ -210,7 +210,7 @@ spec:
authenticationRef:
name: azure-managed-prometheus-trigger-auth
```
-+ `serverAddress` is the Query endpoint of your Azure Monitor workspace. For more information, see [Query Prometheus metrics using the API and PromQL](./prometheus-api-promql.md#query-endpoint)
++ `serverAddress` is the Query endpoint of your Azure Monitor workspace. For more information, see [Query Prometheus metrics using the API and PromQL](../essentials/prometheus-api-promql.md#query-endpoint)
+ `metricName` is the name of the metric you want to scale on.
+ `query` is the query used to retrieve the metric.
+ `threshold` is the value at which the deployment scales.