
Commit 1dc2eb9

Merge pull request #304220 from MicrosoftDocs/main
Auto Publish – main to live - 2025-08-14 17:00 UTC
2 parents 0ba90af + 85c18f4 commit 1dc2eb9

29 files changed (+665, -520 lines changed)

articles/application-gateway/redirect-external-site-powershell.md

Lines changed: 3 additions & 3 deletions

@@ -6,7 +6,7 @@ services: application-gateway
 author: mbender-ms
 ms.service: azure-application-gateway
 ms.topic: how-to
-ms.date: 09/24/2020
+ms.date: 07/11/2025
 ms.author: mbender
 ms.custom: devx-track-azurepowershell
 # Customer intent: As a cloud architect, I want to configure an application gateway for external web traffic redirection using PowerShell, so that I can efficiently manage and route traffic to external sites from my Azure environment.

@@ -28,7 +28,7 @@ If you don't have an Azure subscription, create a [free account](https://azure.m

 [!INCLUDE [cloud-shell-try-it.md](~/reusable-content/ce-skilling/azure/includes/cloud-shell-try-it.md)]

-If you choose to install and use the PowerShell locally, this tutorial requires the Azure PowerShell module version 1.0.0 or later. To find the version, run `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
+If you choose to install and use the PowerShell locally, this tutorial requires the Azure PowerShell module version Az 5.0.0 or later. To find the version, run `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.

 ## Create a resource group

@@ -61,7 +61,7 @@ $pip = New-AzPublicIpAddress `

 ## Create an application gateway

-### Create the IP configurations and frontend port
+### Create the frontend IP configurations and frontend port

 Associate *myAGSubnet* that you previously created to the application gateway using [New-AzApplicationGatewayIPConfiguration](/powershell/module/az.network/new-azapplicationgatewayipconfiguration). Assign the public IP address to the application gateway using [New-AzApplicationGatewayFrontendIPConfig](/powershell/module/az.network/new-azapplicationgatewayfrontendipconfig). And then you can create the HTTP port using [New-AzApplicationGatewayFrontendPort](/powershell/module/az.network/new-azapplicationgatewayfrontendport).
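
For context on the two steps this file documents, signing in locally and building the frontend configuration, here is a minimal Azure PowerShell sketch. It assumes the `$vnet` and `$pip` objects from earlier in the tutorial; the `-Name` values other than *myAGSubnet* are illustrative and not part of the commit itself.

```azurepowershell
# Illustrative sketch only; $vnet and $pip are assumed to exist from earlier tutorial steps.
Connect-AzAccount                      # sign in when running PowerShell locally
Get-Module -ListAvailable Az           # confirm the Az module version

# Associate the existing myAGSubnet with the application gateway.
$subnet    = Get-AzVirtualNetworkSubnetConfig -Name myAGSubnet -VirtualNetwork $vnet
$gipconfig = New-AzApplicationGatewayIPConfiguration -Name myAGIPConfig -Subnet $subnet

# Assign the public IP address created earlier to the gateway frontend.
$fipconfig = New-AzApplicationGatewayFrontendIPConfig -Name myAGFrontendIPConfig -PublicIPAddress $pip

# Create the HTTP frontend port.
$frontendport = New-AzApplicationGatewayFrontendPort -Name myFrontendPort -Port 80
```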

articles/application-gateway/tutorial-ingress-controller-add-on-new.md

Lines changed: 0 additions & 2 deletions

@@ -60,8 +60,6 @@ az aks create -n myCluster -g myResourceGroup --network-plugin azure --enable-ma
 > [!NOTE]
 > Please ensure the identity used by AGIC has the proper permissions. A list of permissions needed by the identity can be found here: [Configure Infrastructure - Permissions](configuration-infrastructure.md#permissions). If a custom role is not defined with the required permissions, you may use the _Network Contributor_ role.

->[!NOTE]
->If you are planning on using AGIC with an AKS cluster using CNI Overlay, specify the parameter `--aks-custom-headers AKSHTTPCustomFeatures=Microsoft.ContainerService/AppGatewayWithOverlayPreview` to configure AGIC to handle connectivity to the CNI Overlay enabled cluster.

 ```azurecli-interactive
 # Get application gateway id from AKS addon profile
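
On the permissions note kept above, one way to grant the AGIC identity the _Network Contributor_ role is a role assignment scoped to the gateway's resource group. This is a hedged Azure PowerShell sketch; the object ID and scope values are placeholders, not values from this tutorial.

```azurepowershell
# Hypothetical values; substitute the AGIC addon identity's object ID and your own scope.
$agicObjectId = "<agic-identity-object-id>"
$scope        = "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup"

# Grant Network Contributor so the AGIC identity can manage the gateway's network resources.
New-AzRoleAssignment -ObjectId $agicObjectId -RoleDefinitionName "Network Contributor" -Scope $scope
```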

articles/azure-app-configuration/configuration-provider-overview.md

Lines changed: 9 additions & 9 deletions

@@ -56,21 +56,21 @@ Dynamic Refresh (Collection Monitoring) | [GA](./reference-dotnet-provider.md#co
 JSON Content Type Handling | [GA](./reference-dotnet-provider.md#json-content-type-handling) | GA | GA | GA | [GA](./reference-javascript-provider.md#json-content-type-handling) | GA
 Configuration Setting Mapping | [GA](./reference-dotnet-provider.md#configuration-setting-mapping) | N/A | N/A | N/A | N/A | N/A
 Key Vault References | [GA](./reference-dotnet-provider.md#key-vault-reference) | GA | GA | GA | [GA](./reference-javascript-provider.md#key-vault-reference) | GA
-Key Vault Secret Refresh | [GA](./reference-dotnet-provider.md#key-vault-secret-refresh) | WIP | GA | WIP | WIP | GA
+Key Vault Secret Refresh | [GA](./reference-dotnet-provider.md#key-vault-secret-refresh) | N/A | GA | WIP | WIP | GA
 Custom Key Vault Secret Resolution | [GA](./reference-dotnet-provider.md#key-vault-reference) | GA | GA | GA | [GA](./reference-javascript-provider.md#key-vault-reference) | GA
-Parallel Secret Resolution | WIP | WIP | WIP | WIP | [GA](./reference-javascript-provider.md#parallel-secret-resolution) | GA
+Parallel Secret Resolution | WIP | N/A | WIP | N/A | [GA](./reference-javascript-provider.md#parallel-secret-resolution) | GA
 Feature Flags | [GA](./reference-dotnet-provider.md#feature-flag) | GA | GA | GA | [GA](./reference-javascript-provider.md#feature-flag) | GA
-Variant Feature Flags | [GA](./reference-dotnet-provider.md#feature-flag) | GA | GA | GA | [GA](./reference-javascript-provider.md#feature-flag) | GA
-Feature Flag Telemetry | GA | GA | WIP | GA | GA | WIP
+Variant Feature Flags | [GA](./reference-dotnet-provider.md#feature-flag) | Preview | GA | GA | [GA](./reference-javascript-provider.md#feature-flag) | GA
+Feature Flag Telemetry | GA | Preview | WIP | GA | GA | WIP
 Key Prefix Trim | [GA](./reference-dotnet-provider.md#trim-prefix-from-keys) | GA | GA | GA | [GA](./reference-javascript-provider.md#trim-prefix-from-keys) | GA
 Configurable Startup Time-out | [GA](./reference-dotnet-provider.md#startup-retry) | WIP | N/A | WIP | [GA](./reference-javascript-provider.md#startup-retry) | WIP
-Replica Auto Discovery | [GA](./reference-dotnet-provider.md#geo-replication) | GA | GA | WIP | [GA](./reference-javascript-provider.md#geo-replication) | WIP
-Replica Failover | [GA](./reference-dotnet-provider.md#geo-replication) | GA | GA | WIP | [GA](./reference-javascript-provider.md#geo-replication) | WIP
-Replica Load Balancing | [GA](./reference-dotnet-provider.md#geo-replication) | WIP | GA | WIP | [GA](./reference-javascript-provider.md#geo-replication) | WIP
+Replica Auto Discovery | [GA](./reference-dotnet-provider.md#geo-replication) | GA | GA | GA | [GA](./reference-javascript-provider.md#geo-replication) | WIP
+Replica Failover | [GA](./reference-dotnet-provider.md#geo-replication) | GA | GA | GA | [GA](./reference-javascript-provider.md#geo-replication) | WIP
+Replica Load Balancing | [GA](./reference-dotnet-provider.md#geo-replication) | WIP | GA | GA | [GA](./reference-javascript-provider.md#geo-replication) | WIP
 Snapshots | [GA](./reference-dotnet-provider.md#snapshot) | GA | GA | WIP | [GA](./reference-javascript-provider.md#snapshot) | WIP
 Distributed Tracing | [GA](./reference-dotnet-provider.md#distributed-tracing) | WIP | WIP | WIP | WIP | WIP
-Health Check | WIP | WIP | WIP | WIP | WIP | WIP
-Select by Tag Filters | [GA](./reference-dotnet-provider.md#load-specific-key-values-using-selectors) | WIP | WIP | WIP | WIP | WIP
+Health Check | WIP | GA | WIP | WIP | WIP | WIP
+Select by Tag Filters | [GA](./reference-dotnet-provider.md#load-specific-key-values-using-selectors) | WIP | WIP | GA | WIP | WIP

 ## Support policy

articles/azure-netapp-files/azure-netapp-files-service-levels.md

Lines changed: 14 additions & 9 deletions

@@ -5,13 +5,13 @@ services: azure-netapp-files
 author: b-hchen
 ms.service: azure-netapp-files
 ms.topic: concept-article
-ms.date: 03/25/2025
+ms.date: 08/14/2025
 ms.author: anfdocs
 # Customer intent: "As a cloud storage administrator, I want to understand the throughput capabilities of different service levels in Azure NetApp Files, so that I can choose the right configuration to meet my application's performance requirements."
 ---
 # Service levels for Azure NetApp Files

-Service levels are an attribute of a capacity pool. Service levels are defined and differentiated by the allowed maximum throughput for a volume in the capacity pool based on the quota that is assigned to the volume. Throughput is a combination of read and write speed.
+Service levels are an attribute of a capacity pool. Service levels are defined and differentiated by the allowed maximum throughput for a volume in the capacity pool based on the quota assigned to the volume. Throughput is a combination of read and write speed.

 ## Supported service levels

@@ -28,13 +28,18 @@ Azure NetApp Files supports four service levels: *Standard*, *Premium*, *Ultra*,

 * <a name="Flexible"></a>Flexible storage (preview):

-The Flexible service level enables you to adjust throughput and size limits independently. This service level is designed for demanding applications such as Oracle or SAP HANA. You can also use the Flexible service level to create high-capacity volumes with (relatively) low throughput requirements or the reverse: low-capacity volumes with high throughput requirements. The minimum throughput to be assigned to a Flexible capacity pool is 128 MiB/second regardless of the pool quota. The first 128 MiB/s of throughput, known as the baseline, is included in the Flexible service level. The maximum throughput is 5 x 128 MiB/second/TiB x the size of the capacity pool in TiB. For more information see [Flexible service level throughput examples](#flexible-examples). You can assign throughput and capacity to volumes that are part of a Flexible capacity pool in the same way you do volumes that are part of a manual QoS capacity pool of any service level. Cool access isn't currently supported with the Flexible service level.
+The Flexible service level enables you to adjust throughput and size limits independently. You can use the Flexible service level to create high-capacity volumes with low throughput requirements or the reverse: low-capacity volumes with high throughput requirements. The Flexible service level is designed for demanding applications such as Oracle or SAP HANA.
+
+The minimum throughput you can assign a Flexible capacity pool is 128 MiB/second regardless of the pool quota. The maximum throughput is 5 x 128 MiB/second/TiB x the size of the capacity pool in TiB. For more information, see [Flexible service level throughput examples](#flexible-examples) and [considerations for the Flexible service level](azure-netapp-files-set-up-capacity-pool.md#considerations).

 >[!IMPORTANT]
 >The Flexible service level is only supported for new _manual QoS_ capacity pools.

 * Storage with cool access:
-Cool access storage is available with the Standard, Premium, and Ultra service levels. The throughput experience for any of these service levels with cool access is the same for cool access as it is for data in the hot tier. It may differ when data that resides in the cool tier is accessed. For more information, see [Azure NetApp Files storage with cool access](cool-access-introduction.md) and [Performance considerations for storage with cool access](performance-considerations-cool-access.md).
+[Cool access storage](manage-cool-access.md#register-the-feature) is available with the Standard, Premium, Ultra, and Flexible service levels. The throughput experience for any of these service levels with cool access is the same for cool access as it is for data in the hot tier. Throughput experiences differ when data that resides in the cool tier is accessed. For more information, see [Azure NetApp Files storage with cool access](cool-access-introduction.md) and [Performance considerations for storage with cool access](performance-considerations-cool-access.md).
+
+>[!NOTE]
+>Cool access pricing is calculated in the same manner for all service levels (Standard, Premium, Ultra, and Flexible).

 ## Throughput limits

@@ -51,11 +56,11 @@ The following diagram shows throughput limit examples of volumes in an auto QoS

 * In Example 1, a volume from an auto QoS capacity pool with the Premium storage tier that is assigned 2 TiB of quota will be assigned a throughput limit of 128 MiB/s (2 TiB * 64 MiB/s). This scenario applies regardless of the capacity pool size or the actual volume consumption.

-* In Example 2, a volume from an auto QoS capacity pool with the Premium storage tier that is assigned 100 GiB of quota is assigned a throughput limit of 6.25 MiB/s (0.09765625 TiB * 64 MiB/s). This scenario applies regardless of the capacity pool size or the actual volume consumption.
+* In Example 2, a volume from an auto QoS capacity pool at the Premium service level with 100 GiB of quota has an assigned throughput limit of 6.25 MiB/s (0.09765625 TiB * 64 MiB/s). This scenario applies regardless of the capacity pool size or the actual volume consumption.

 ### Throughput limit examples of volumes in a manual QoS capacity pool

-If you use a manual QoS capacity pool, you can assign the capacity and throughput for a volume independently. When you create a volume in a manual QoS capacity pool, you can specify the throughput (MiB/S) value. The total throughput assigned to volumes in a manual QoS capacity pool depends on the size of the pool and the service level. Throughput limits for the Standard, Premium, and Ultra service levels are capped by a formula: capacity pool size in TiB x service level throughput/TiB. For instance, a 10-TiB capacity pool with the Ultra service level has a total throughput capacity of 1,280 MiB/s (10 TiB x 128 MiB/s/TiB) available for the volumes. For the Flexible service level, the formula is 5 x capacity pool size in TiB x minimum service level throughput (128 MiB/s/TiB). For examples, see [Flexible service level throughput examples](#flexible-examples).
+If you use a manual QoS capacity pool, you can assign the capacity and throughput for a volume independently. When you create a volume in a manual QoS capacity pool, you can specify the throughput (MiB/S) value. The total throughput assigned to volumes in a manual QoS capacity pool depends on the size of the pool and the service level. Throughput limits for the Standard, Premium, and Ultra service levels are capped by a formula: capacity pool size in TiB x service level throughput/TiB. For instance, a 10-TiB capacity pool with the Ultra service level has a total throughput capacity of 1,280 MiB/s (10 TiB x 128 MiB/s/TiB) available for the volumes. For the Flexible service level, the formula is 5 x capacity pool size in TiB x minimum service level throughput (128 MiB/s/TiB). For example, see [Flexible service level throughput examples](#flexible-examples).

 For example, for an SAP HANA system, this capacity pool can be used to create the following volumes. Each volume provides the individual size and throughput to meet your application requirements:
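
To make the throughput formulas in the hunk above concrete, here is a small PowerShell sketch of the arithmetic. The per-TiB rates (64 MiB/s for Premium, 128 MiB/s for Ultra) and the Flexible formula come from the text; the variable names are illustrative.

```azurepowershell
# Auto QoS: a volume's throughput limit = volume quota (TiB) x service-level rate (MiB/s per TiB).
$premiumRate = 64                     # MiB/s per TiB (Premium)
$ultraRate   = 128                    # MiB/s per TiB (Ultra)

2 * $premiumRate                      # Example 1: 2-TiB Premium volume   -> 128 MiB/s
(100 / 1024) * $premiumRate           # Example 2: 100-GiB Premium volume -> 6.25 MiB/s

# Manual QoS: total pool throughput = pool size (TiB) x service-level rate.
10 * $ultraRate                       # 10-TiB Ultra pool -> 1,280 MiB/s

# Flexible: minimum is always 128 MiB/s; maximum = 5 x 128 x pool size (TiB).
$flexiblePoolTiB = 10
$flexibleMin = 128                    # MiB/s, regardless of pool size
$flexibleMax = 5 * 128 * $flexiblePoolTiB   # 6,400 MiB/s for a 10-TiB pool
```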

@@ -79,12 +84,12 @@ The example extends to the Flexible service level as well. A Flexible service le
 - SAP HANA shared volume: Size 1 TiB with up to 64 MiB/s
 - SAP HANA backup volume: Size 4.5 TiB with up to 384 MiB/s

-As illustrated in the diagram, the SAP HANA backup volume received the 128MiB/s additional baseline throughput.
+As illustrated in the diagram, the SAP HANA backup volume receives baseline throughput of 128 MiB/s.

 #### <a name="flexible-examples">Flexible service level throughput examples:</a>

 | Flexible pool size (TiB) | Allowable throughput minimum (MiB/s) | Allowable throughput maximum (MiB/s) |
-| - | - | -- |
+|-|-|--|
 | 1 | 128 | 5 * 128 * 1 = 640 |
 | 2 | 128 | 5 * 128 * 2 = 1,280 |
 | 10 | 128 | 5 * 128 * 10 = 6,400 |

@@ -93,7 +98,7 @@ As illustrated in the diagram, the SAP HANA backup volume received the 128MiB/s
 | 1,024 | 128 | 5 * 128 * 1,024 = 655,360 |

 >[!NOTE]
->A baseline throughput of 128 MiB/s is provided for every pool at no additional cost.
+>Azure NetApp Files ensures a consistent baseline throughput regardless of capacity pool size. For example, both a 1-TiB capacity pool and a 10-TiB capacity pool receive the same complimentary baseline throughput of 128 MiB/s.

 ## Next steps

articles/azure-netapp-files/azure-netapp-files-set-up-capacity-pool.md

Lines changed: 4 additions & 5 deletions

@@ -5,7 +5,7 @@ services: azure-netapp-files
 author: b-hchen
 ms.service: azure-netapp-files
 ms.topic: how-to
-ms.date: 05/14/2025
+ms.date: 08/14/2025
 ms.author: anfdocs
 ms.custom:
 - build-2025

@@ -25,7 +25,7 @@ Creating a capacity pool enables you to create volumes within it.
 >[!IMPORTANT]
 >To create a 1-TiB capacity pool with a tag, you must use API versions `2023-07-01_preview` to `2024-01-01_preview` or stable releases from `2024-01-01`.
 * The Standard, Premium, and Ultra service levels are generally available (GA). No registration is required.
-* The **Flexible** service level is currently in preview and supported in all Azure NetApp Files regions. You must register the feature before using it for the first time:
+* <a name="flexible"></a> The **Flexible** service level is currently in preview and supported in all Azure NetApp Files regions. You must register the feature before using it for the first time:

 1. Register the feature:

@@ -49,7 +49,6 @@ Creating a capacity pool enables you to create volumes within it.
 * Flexible service level capacity pools can't be converted to the Standard, Premium, or Ultra service level.
 * The minimum throughput for Flexible service level capacity pools is 128 MiB/second. Maximum throughput is calculated based on the size of the capacity pool using the formula 5 x 128 MiB/second/TiB x capacity pool size in TiB. If your capacity pool is 1 TiB, the maximum is 640 MiB/second (5 x 128 x 1). For more examples, see [Service levels for Azure NetApp Files](azure-netapp-files-service-levels.md#flexible-examples).
 * You can increase the throughput of a Flexible service level pool at any time. Decreases to throughput on Flexible service level capacity pools can only occur following a 24-hour cool-down period. The 24-hour cool-down period initiates after any change to the throughput of the Flexible service level capacity pool.
-* Cool access isn't currently supported with the Flexible service level.
 * Only single encryption is currently supported for Flexible service level capacity pools.
 * Volumes in Flexible service level capacity pools can't be moved to capacity pools of a different service level. Similarly, you can't move volumes from capacity pools with different service levels into a Flexible service level capacity pool.

@@ -85,7 +84,7 @@ Creating a capacity pool enables you to create volumes within it.
 This option is only available for Flexible service level capacity pools. The minimum value is 128 MiB/second. Maximum throughput depends on the size of the capacity pool. For calculation details, see [Considerations](#considerations).

 * **Enable cool access**
-This option specifies whether volumes in the capacity pool support cool access. For details about using this option, see [Manage Azure NetApp Files storage with cool access](manage-cool-access.md). Cool access isn't currently supported on Flexible service level.
+This option specifies whether volumes in the capacity pool support cool access. For details about using this option, see [Manage Azure NetApp Files storage with cool access](manage-cool-access.md).

 * **QoS**
 Specify whether the capacity pool should use the **Manual** or **Auto** QoS type. See [Storage Hierarchy](azure-netapp-files-understand-storage-hierarchy.md) and [Performance Considerations](azure-netapp-files-performance-considerations.md) to understand the QoS types.

@@ -99,7 +98,7 @@ Creating a capacity pool enables you to create volumes within it.
 > [!IMPORTANT]
 > Azure NetApp Files double encryption at rest supports [Standard network features](azure-netapp-files-network-topologies.md#configurable-network-features), but not Basic network features. See [considerations](double-encryption-at-rest.md#considerations) for using Azure NetApp Files double encryption at rest.
 >
-> After the capacity pool is created, you can’t modify the setting (switching between `single` or `double`) for the encryption type.
+> After the capacity pool is created, you can’t modify the encryption type.

 :::image type="content" source="./media/azure-netapp-files-set-up-capacity-pool/flexible-service.png" alt-text="Screenshot showing the New Capacity Pool window.":::
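
The diff above points to registering the Flexible service level feature before first use, but the registration commands themselves sit outside these hunks. As a hedged sketch of the usual preview-feature registration pattern for the Microsoft.NetApp provider (the feature name below is a placeholder, not confirmed by this commit):

```azurepowershell
# Placeholder feature name; see the linked article for the actual value.
$featureName = "<FlexibleServiceLevelFeatureName>"

# Register the preview feature with the Microsoft.NetApp resource provider.
Register-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName $featureName

# Registration is asynchronous; check until RegistrationState shows Registered.
Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName $featureName
```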
