Commit 3b842bc

Merge pull request #256539 from MicrosoftDocs/main
Publish to live, Sunday 4 AM PST, 10/29
2 parents 16e7553 + 784167a commit 3b842bc

85 files changed: 1258 additions & 873 deletions


articles/azure-arc/includes/arc-region-note.md

Lines changed: 1 addition & 1 deletion
@@ -12,7 +12,7 @@ For example: `san-af-<region>-prod.azurewebsites.net` should be `san-af-eastus2-

 To see a list of all regions, run this command:

-```azure-cli
+```azurecli
 az account list-locations -o table
 ```

articles/azure-arc/resource-bridge/upgrade.md

Lines changed: 1 addition & 1 deletion
@@ -70,7 +70,7 @@ Or to upgrade a resource bridge on Azure Stack HCI, run: `az arcappliance upgrad

 Currently, private cloud providers differ in how they perform Arc resource bridge upgrades. Review the following information to see how to upgrade your Arc resource bridge for a specific provider.

-For Arc-enabled VMware vSphere (preview), manual upgrade is available, and cloud-managed upgrade is supported for appliances on version 1.0.15 and higher. When Arc-enabled VMware vSphere announces General Availability, appliances on version 1.0.15 and higher will receive cloud-managed upgrade as the default experience. Appliances that are below version 1.0.15 must be manually upgraded.
+For Arc-enabled VMware vSphere (preview), manual upgrade is available, and cloud-managed upgrade is supported for appliances on version 1.0.15 and higher. When Arc-enabled VMware vSphere announces General Availability, appliances on version 1.0.15 and higher will receive cloud-managed upgrade as the default experience. Appliances that are below version 1.0.15 must be manually upgraded. A manual upgrade moves the appliance only to the next version, not the latest version. If you have multiple versions to upgrade, another option is to review the steps for [performing a recovery](/azure/azure-arc/vmware-vsphere/recover-from-resource-bridge-deletion), delete the appliance VM, and then perform the recovery steps. This deploys a new Arc resource bridge with the latest version and reconnects pre-existing Azure resources.

 [Azure Arc VM management (preview) on Azure Stack HCI](/azure-stack/hci/manage/azure-arc-vm-management-overview) supports upgrade of an Arc resource bridge on Azure Stack HCI, version 22H2 up until appliance version 1.0.14 and `az arcappliance` CLI extension version 0.2.33. These upgrades can be done through manual upgrade or a support request for cloud-managed upgrade. For subsequent upgrades, you must transition to Azure Stack HCI, version 23H2 (preview). In version 23H2 (preview), the LCM tool manages upgrades across all components as a "validated recipe" package. For more information, visit the [Arc VM management FAQ page](/azure-stack/hci/manage/azure-arc-vms-faq).
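
The manual upgrade flow this change describes is driven by the `az arcappliance upgrade` command referenced in the hunk header. Below is a minimal sketch of one round of a multi-version manual upgrade for VMware vSphere; the `vmware` subcommand, the `--config-file` parameter, the `show` call, and all names and paths are assumptions for illustration, not content from this commit.

```azurecli
# Hypothetical sketch: one manual upgrade step for an Arc resource bridge on VMware vSphere.
# A manual upgrade only moves the appliance to the next available version, so repeat as needed.
az arcappliance upgrade vmware --config-file ./contoso-appliance.yaml

# Check the resulting appliance version before deciding whether another pass (or the recovery
# redeploy described above) is the better option.
az arcappliance show --resource-group myResourceGroup --name myArcResourceBridge
```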

articles/azure-cache-for-redis/cache-how-to-monitor.md

Lines changed: 1 addition & 0 deletions
@@ -228,6 +228,7 @@ In contrast, for clustered caches, we recommend using the metrics with the suffi
 - The total number of commands processed by the cache server during the specified reporting interval. This value maps to `total_commands_processed` from the Redis INFO command. When Azure Cache for Redis is used purely for pub/sub there will be no metrics for `Cache Hits`, `Cache Misses`, `Gets`, or `Sets`, but there will be `Total Operations` metrics that reflect the cache usage for pub/sub operations.
 - Used Memory
 - The amount of cache memory in MB that is used for key/value pairs in the cache during the specified reporting interval. This value maps to `used_memory` from the Redis INFO command. This value doesn't include metadata or fragmentation.
+- On the Enterprise and Enterprise Flash tier, the Used Memory value includes the memory in both the primary and replica nodes. This can make the metric appear twice as large as expected.
 - Used Memory Percentage
 - The percent of total memory that is being used during the specified reporting interval. This value references the `used_memory` value from the Redis INFO command to calculate the percentage. This value doesn't include fragmentation.
 - Used Memory RSS
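
To observe the behavior documented by the added line, you can compare the portal metric against `used_memory` from the Redis INFO command. A minimal Azure CLI sketch follows; the metric name `usedmemory` and the resource ID are assumed placeholders, not values from this commit.

```azurecli
# Hypothetical sketch: pull the Used Memory metric for a cache over 5-minute intervals.
# The metric name "usedmemory" and the resource ID below are assumed placeholders.
az monitor metrics list \
  --resource "/subscriptions/<sub-id>/resourceGroups/myResourceGroup/providers/Microsoft.Cache/Redis/myCache" \
  --metric "usedmemory" \
  --interval PT5M \
  --aggregation Maximum \
  --output table
```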

articles/azure-monitor/alerts/alerts-create-new-alert-rule.md

Lines changed: 1 addition & 1 deletion
@@ -275,7 +275,7 @@ To edit an existing alert rule:

 1. On the **Actions** tab, select or create the required [action groups](./action-groups.md).

-1. <a name="custom-props"></a>(Optional) In the **Custom properties** section, if you've configured action groups for this alert rule, you can add your own properties to include in the alert notification payload. You can use these properties in the actions called by the action group, such as webhook, Azure function or logic app actions.
+1. <a name="custom-props"></a>(Optional) If you've configured action groups for this alert rule, the **Custom properties** section appears under **Advanced options** on the **Details** tab. In this section, you can add your own properties to include in the alert notification payload. You can use these properties in the actions called by the action group, such as webhook, Azure function, or logic app actions.

 The custom properties are specified as key:value pairs, using either static text, a dynamic value extracted from the alert payload, or a combination of both.
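
As a hedged illustration of the key:value format described in the surrounding text, the sketch below mixes static text with a value extracted from the alert payload. The `${...}` extraction syntax and the `data.essentials.monitorCondition` field path are assumptions based on the common alert schema, not content from this commit.

```yaml
# Hypothetical custom properties (key: value) to include in the alert notification payload.
Environment: "Production"                                                 # static text only
Alert state: "Monitor condition is ${data.essentials.monitorCondition}"   # static text plus an assumed payload field
```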

articles/azure-monitor/containers/prometheus-metrics-scrape-configuration.md

Lines changed: 3 additions & 27 deletions
@@ -26,12 +26,12 @@ Four different configmaps can be configured to provide scrape configuration and
 * metric keep-lists - this setting is used to control which metrics are listed to be allowed from each default target and to change the default behavior
 * scrape intervals for default/pre-defined targets. `30 secs` is the default scrape frequency and it can be changed per default target using this configmap
 * debug-mode - turning this ON helps to debug missing metric/ingestion issues - see more on [troubleshooting](prometheus-metrics-troubleshoot.md#debug-mode)
-2. [`ama-metrics-prometheus-config`](https://aka.ms/azureprometheus-addon-rs-configmap)
+2. [`ama-metrics-prometheus-config`](https://aka.ms/azureprometheus-addon-rs-configmap) (**Recommended**)
 This config map can be used to provide Prometheus scrape config for addon replica. Addon runs a singleton replica, and any cluster level services can be discovered and scraped by providing scrape jobs in this configmap. You can take the sample configmap from the above GitHub repo, add scrape jobs that you would need and apply/deploy the config map to `kube-system` namespace for your cluster.
-3. [`ama-metrics-prometheus-config-node`](https://aka.ms/azureprometheus-addon-ds-configmap)
+3. [`ama-metrics-prometheus-config-node`](https://aka.ms/azureprometheus-addon-ds-configmap) (**Advanced**)
 This config map can be used to provide Prometheus scrape config for addon DaemonSet that runs on every **Linux** node in the cluster, and any node level targets on each node can be scraped by providing scrape jobs in this configmap. When you use this configmap, you can use `$NODE_IP` variable in your scrape config, which gets substituted by corresponding node's ip address in DaemonSet pod running on each node. This way you get access to scrape anything that runs on that node from the metrics addon DaemonSet. **Please be careful when you use discoveries in scrape config in this node level config map, as every node in the cluster will setup & discover the target(s) and will collect redundant metrics**.
 You can take the sample configmap from the above GitHub repo, add scrape jobs that you would need and apply/deploy the config map to `kube-system` namespace for your cluster
-4. [`ama-metrics-prometheus-config-node-windows`](https://aka.ms/azureprometheus-addon-ds-configmap-windows)
+4. [`ama-metrics-prometheus-config-node-windows`](https://aka.ms/azureprometheus-addon-ds-configmap-windows) (**Advanced**)
 This config map can be used to provide Prometheus scrape config for addon DaemonSet that runs on every **Windows** node in the cluster, and node level targets on each node can be scraped by providing scrape jobs in this configmap. When you use this configmap, you can use `$NODE_IP` variable in your scrape config, which will be substituted by corresponding node's ip address in DaemonSet pod running on each node. This way you get access to scrape anything that runs on that node from the metrics addon DaemonSet. **Please be careful when you use discoveries in scrape config in this node level config map, as every node in the cluster will setup & discover the target(s) and will collect redundant metrics**.
 You can take the sample configmap from the above GitHub repo, add scrape jobs that you would need and apply/deploy the config map to `kube-system` namespace for your cluster

@@ -113,31 +113,7 @@ You can configure the metrics add-on to scrape targets other than the default on

 Follow the instructions to [create, validate, and apply the configmap](prometheus-metrics-scrape-validate.md) for your cluster.

-### Advanced setup: Configure custom Prometheus scrape jobs for the DaemonSet
-
-The `ama-metrics` Replica pod consumes the custom Prometheus config and scrapes the specified targets. For a cluster with a large number of nodes and pods and a large volume of metrics to scrape, some of the applicable custom scrape targets can be off-loaded from the single `ama-metrics` Replica pod to the `ama-metrics` DaemonSet pod.
-
-The [ama-metrics-prometheus-config-node configmap](https://aka.ms/azureprometheus-addon-ds-configmap), is similar to the replica-set configmap, and can be created to have static scrape configs on each node. The scrape config should only target a single node and shouldn't use service discovery. Otherwise, each node tries to scrape all targets and makes many calls to the Kubernetes API server.
-
-Example:- The following `node-exporter` config is one of the default targets for the DaemonSet pods. It uses the `$NODE_IP` environment variable, which is already set for every `ama-metrics` add-on container to target a specific port on the node.
-
-```yaml
-- job_name: nodesample
-  scrape_interval: 30s
-  scheme: http
-  metrics_path: /metrics
-  relabel_configs:
-  - source_labels: [__metrics_path__]
-    regex: (.*)
-    target_label: metrics_path
-  - source_labels: [__address__]
-    replacement: '$NODE_NAME'
-    target_label: instance
-  static_configs:
-  - targets: ['$NODE_IP:9100']
-```
-
-Custom scrape targets can follow the same format by using `static_configs` with targets and using the `$NODE_IP` environment variable and specifying the port to scrape. Each pod of the DaemonSet takes the config, scrapes the metrics, and sends them for that node.

 ## Prometheus configuration tips and examples
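
Because this change marks the replica-level `ama-metrics-prometheus-config` configmap as the recommended home for custom scrape jobs, a minimal sketch of such a configmap may help. The job name, service address, and port are hypothetical, and the `prometheus-config` data key is assumed from the sample configmap linked earlier in this file's changes.

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: ama-metrics-prometheus-config    # recommended replica-level configmap name
  namespace: kube-system                 # the addon looks for this configmap in kube-system
data:
  # The "prometheus-config" data key is assumed from the linked sample configmap.
  prometheus-config: |-
    scrape_configs:
      - job_name: my-cluster-service     # hypothetical cluster-level service
        scrape_interval: 30s
        static_configs:
          - targets: ['my-service.my-namespace.svc.cluster.local:8080']
```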

articles/azure-monitor/containers/prometheus-metrics-scrape-validate.md

Lines changed: 29 additions & 3 deletions
@@ -12,9 +12,9 @@ ms.reviewer: aul
 In addition to the default scrape targets that Azure Monitor Prometheus agent scrapes by default, use the following steps to provide more scrape config to the agent using a configmap. The Azure Monitor Prometheus agent doesn't understand or process operator [CRDs](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) for scrape configuration, but instead uses the native Prometheus configuration as defined in [Prometheus configuration](https://aka.ms/azureprometheus-promioconfig-scrape).

 The three configmaps that can be used for custom target scraping are -
-- ama-metrics-prometheus-config - When a configmap with this name is created, scrape jobs defined in it are run from the Azure monitor metrics replica pod running in the cluster.
-- ama-metrics-prometheus-config-node - When a configmap with this name is created, scrape jobs defined in it are run from each **Linux** DaemonSet pod running in the cluster. For more information, see [Advanced Setup](prometheus-metrics-scrape-configuration.md#advanced-setup-configure-custom-prometheus-scrape-jobs-for-the-daemonset).
-- ama-metrics-prometheus-config-node-windows - When a configmap with this name is created, scrape jobs defined in it are run from each **windows** DaemonSet. For more information, see [Advanced Setup](prometheus-metrics-scrape-configuration.md#advanced-setup-configure-custom-prometheus-scrape-jobs-for-the-daemonset).
+- ama-metrics-prometheus-config (**Recommended**) - When a configmap with this name is created, scrape jobs defined in it are run from the Azure monitor metrics replica pod running in the cluster.
+- ama-metrics-prometheus-config-node (**Advanced**) - When a configmap with this name is created, scrape jobs defined in it are run from each **Linux** DaemonSet pod running in the cluster. For more information, see [Advanced Setup](#advanced-setup-configure-custom-prometheus-scrape-jobs-for-the-daemonset).
+- ama-metrics-prometheus-config-node-windows (**Advanced**) - When a configmap with this name is created, scrape jobs defined in it are run from each **Windows** DaemonSet pod. For more information, see [Advanced Setup](#advanced-setup-configure-custom-prometheus-scrape-jobs-for-the-daemonset).

 ## Create Prometheus configuration file

@@ -103,6 +103,32 @@ A sample of the `ama-metrics-prometheus-config` configmap is [here](https://gith
 ### Troubleshooting
 If you successfully created the configmap (ama-metrics-prometheus-config or ama-metrics-prometheus-config-node) in the **kube-system** namespace and still don't see the custom targets being scraped, check for errors in the **replica pod** logs for **ama-metrics-prometheus-config** configmap or **DaemonSet pod** logs for **ama-metrics-prometheus-config-node** configmap using *kubectl logs* and make sure there are no errors in the *Start Merging Default and Custom Prometheus Config* section with prefix *prometheus-config-merger*

+> [!NOTE]
+> ### Advanced setup: Configure custom Prometheus scrape jobs for the DaemonSet
+>
+> The `ama-metrics` Replica pod consumes the custom Prometheus config and scrapes the specified targets. For a cluster with a large number of nodes and pods and a large volume of metrics to scrape, some of the applicable custom scrape targets can be off-loaded from the single `ama-metrics` Replica pod to the `ama-metrics` DaemonSet pod.
+>
+> The [ama-metrics-prometheus-config-node configmap](https://aka.ms/azureprometheus-addon-ds-configmap) is similar to the replica-set configmap, and can be created to have static scrape configs on each node. The scrape config should only target a single node and shouldn't use service discovery/pod annotations. Otherwise, each node tries to scrape all targets and makes many calls to the Kubernetes API server.
+>
+> Custom scrape targets can follow the same format by using `static_configs` with targets and using the `$NODE_IP` environment variable and specifying the port to scrape. Each pod of the DaemonSet takes the config, scrapes the metrics, and sends them for that node.
+>
+> Example: The following `node-exporter` config is one of the default targets for the DaemonSet pods. It uses the `$NODE_IP` environment variable, which is already set for every `ama-metrics` add-on container to target a specific port on the node.
+```yaml
+- job_name: nodesample
+  scrape_interval: 30s
+  scheme: http
+  metrics_path: /metrics
+  relabel_configs:
+  - source_labels: [__metrics_path__]
+    regex: (.*)
+    target_label: metrics_path
+  - source_labels: [__address__]
+    replacement: '$NODE_NAME'
+    target_label: instance
+  static_configs:
+  - targets: ['$NODE_IP:9100']
+```
+
 ## Next steps

 - [Learn more about collecting Prometheus metrics](../essentials/prometheus-metrics-overview.md).
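
Following the "custom scrape targets can follow the same format" guidance in the note added above, here is a minimal sketch of a node-level job for the `ama-metrics-prometheus-config-node` configmap; the job name and port 9200 are hypothetical placeholders for an application that exposes metrics on every node.

```yaml
# Hypothetical node-level scrape job: static_configs plus $NODE_IP, no service discovery.
# $NODE_IP is substituted per node by the DaemonSet pod; port 9200 is an assumed app port.
- job_name: my-node-app
  scrape_interval: 30s
  metrics_path: /metrics
  static_configs:
    - targets: ['$NODE_IP:9200']
```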
