articles/application-gateway/redirect-external-site-powershell.md (3 additions, 3 deletions)
@@ -6,7 +6,7 @@ services: application-gateway
 author: mbender-ms
 ms.service: azure-application-gateway
 ms.topic: how-to
-ms.date: 09/24/2020
+ms.date: 07/11/2025
 ms.author: mbender
 ms.custom: devx-track-azurepowershell
 # Customer intent: As a cloud architect, I want to configure an application gateway for external web traffic redirection using PowerShell, so that I can efficiently manage and route traffic to external sites from my Azure environment.
@@ -28,7 +28,7 @@ If you don't have an Azure subscription, create a [free account](https://azure.m
-If you choose to install and use the PowerShell locally, this tutorial requires the Azure PowerShell module version 1.0.0 or later. To find the version, run `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
+If you choose to install and use the PowerShell locally, this tutorial requires the Azure PowerShell module version Az 5.0.0 or later. To find the version, run `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.

 ## Create a resource group

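For readers following the prerequisite above locally, a minimal PowerShell sketch of the version check and sign-in is shown below; the comparison against 5.0.0 and the warning message are illustrative additions, not part of the article being changed.

```azurepowershell-interactive
# List installed versions of the Az module and take the newest one.
$az = Get-Module -ListAvailable Az | Sort-Object Version -Descending | Select-Object -First 1

# Warn if the module is missing or older than the Az 5.0.0 minimum stated in the prerequisite.
if (-not $az -or $az.Version -lt [Version]'5.0.0') {
    Write-Warning "Az 5.0.0 or later is required. See /powershell/azure/install-azure-powershell."
}

# Create a connection with Azure, as referenced in the prerequisite.
Connect-AzAccount
```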
@@ -61,7 +61,7 @@ $pip = New-AzPublicIpAddress `

 ## Create an application gateway

-### Create the IP configurations and frontend port
+### Create the frontend IP configurations and frontend port

 Associate *myAGSubnet* that you previously created to the application gateway using [New-AzApplicationGatewayIPConfiguration](/powershell/module/az.network/new-azapplicationgatewayipconfiguration). Assign the public IP address to the application gateway using [New-AzApplicationGatewayFrontendIPConfig](/powershell/module/az.network/new-azapplicationgatewayfrontendipconfig). And then you can create the HTTP port using [New-AzApplicationGatewayFrontendPort](/powershell/module/az.network/new-azapplicationgatewayfrontendport).
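The paragraph above names the three cmdlets used in this step. The following sketch shows one way they are typically chained together, assuming a resource group named myResourceGroupAG and a virtual network named myVNet (both illustrative); `$pip` is the public IP address created earlier in the article.

```azurepowershell-interactive
# Look up the virtual network and the myAGSubnet subnet (resource names assumed for illustration).
$vnet   = Get-AzVirtualNetwork -ResourceGroupName myResourceGroupAG -Name myVNet
$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name myAGSubnet

# Gateway IP configuration: associates myAGSubnet with the application gateway.
$gipconfig = New-AzApplicationGatewayIPConfiguration -Name myAGIPConfig -Subnet $subnet

# Frontend IP configuration: assigns the public IP address ($pip) created earlier.
$fipconfig = New-AzApplicationGatewayFrontendIPConfig -Name myAGFrontendIPConfig -PublicIPAddress $pip

# Frontend port for HTTP traffic.
$frontendport = New-AzApplicationGatewayFrontendPort -Name myFrontendPort -Port 80
```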

articles/application-gateway/tutorial-ingress-controller-add-on-new.md (0 additions, 2 deletions)
@@ -60,8 +60,6 @@ az aks create -n myCluster -g myResourceGroup --network-plugin azure --enable-ma
 > [!NOTE]
 > Please ensure the identity used by AGIC has the proper permissions. A list of permissions needed by the identity can be found here: [Configure Infrastructure - Permissions](configuration-infrastructure.md#permissions). If a custom role is not defined with the required permissions, you may use the _Network Contributor_ role.

->[!NOTE]
->If you are planning on using AGIC with an AKS cluster using CNI Overlay, specify the parameter `--aks-custom-headers AKSHTTPCustomFeatures=Microsoft.ContainerService/AppGatewayWithOverlayPreview` to configure AGIC to handle connectivity to the CNI Overlay enabled cluster.

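Where the first note suggests falling back to the _Network Contributor_ role, a hedged role-assignment sketch looks like the following; the principal ID and scope values are placeholders rather than values taken from this tutorial.

```azurepowershell-interactive
# Placeholder values: substitute the AGIC identity's principal ID and the
# resource group (or virtual network) scope used by the application gateway.
$agicPrincipalId = "<agic-identity-object-id>"
$scope           = "/subscriptions/<subscription-id>/resourceGroups/<appgw-resource-group>"

# Grant the Network Contributor role to the AGIC identity at that scope.
New-AzRoleAssignment -ObjectId $agicPrincipalId -RoleDefinitionName "Network Contributor" -Scope $scope
```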
 ```azurecli-interactive
 # Get application gateway id from AKS addon profile

 JSON Content Type Handling | [GA](./reference-dotnet-provider.md#json-content-type-handling) | GA | GA | GA | [GA](./reference-javascript-provider.md#json-content-type-handling) | GA
 Key Vault References | [GA](./reference-dotnet-provider.md#key-vault-reference) | GA | GA | GA | [GA](./reference-javascript-provider.md#key-vault-reference) | GA
-Key Vault Secret Refresh | [GA](./reference-dotnet-provider.md#key-vault-secret-refresh) | WIP | GA | WIP | WIP | GA
+Key Vault Secret Refresh | [GA](./reference-dotnet-provider.md#key-vault-secret-refresh) | N/A | GA | WIP | WIP | GA
 Custom Key Vault Secret Resolution | [GA](./reference-dotnet-provider.md#key-vault-reference) | GA | GA | GA | [GA](./reference-javascript-provider.md#key-vault-reference) | GA
 Feature Flags | [GA](./reference-dotnet-provider.md#feature-flag) | GA | GA | GA | [GA](./reference-javascript-provider.md#feature-flag) | GA
-Variant Feature Flags | [GA](./reference-dotnet-provider.md#feature-flag) | GA | GA | GA | [GA](./reference-javascript-provider.md#feature-flag) | GA
-Feature Flag Telemetry | GA | GA | WIP | GA | GA | WIP
+Variant Feature Flags | [GA](./reference-dotnet-provider.md#feature-flag) | Preview | GA | GA | [GA](./reference-javascript-provider.md#feature-flag) | GA
+Feature Flag Telemetry | GA | Preview | WIP | GA | GA | WIP
 Key Prefix Trim | [GA](./reference-dotnet-provider.md#trim-prefix-from-keys) | GA | GA | GA | [GA](./reference-javascript-provider.md#trim-prefix-from-keys) | GA
-Replica Auto Discovery | [GA](./reference-dotnet-provider.md#geo-replication) | GA | GA | WIP | [GA](./reference-javascript-provider.md#geo-replication) | WIP
-Replica Failover | [GA](./reference-dotnet-provider.md#geo-replication) | GA | GA | WIP | [GA](./reference-javascript-provider.md#geo-replication) | WIP
+Replica Auto Discovery | [GA](./reference-dotnet-provider.md#geo-replication) | GA | GA | GA | [GA](./reference-javascript-provider.md#geo-replication) | WIP
+Replica Failover | [GA](./reference-dotnet-provider.md#geo-replication) | GA | GA | GA | [GA](./reference-javascript-provider.md#geo-replication) | WIP
+Replica Load Balancing | [GA](./reference-dotnet-provider.md#geo-replication) | WIP | GA | GA | [GA](./reference-javascript-provider.md#geo-replication) | WIP
 Snapshots | [GA](./reference-dotnet-provider.md#snapshot) | GA | GA | WIP | [GA](./reference-javascript-provider.md#snapshot) | WIP

articles/azure-netapp-files/azure-netapp-files-service-levels.md (14 additions, 9 deletions)
@@ -5,13 +5,13 @@ services: azure-netapp-files
 author: b-hchen
 ms.service: azure-netapp-files
 ms.topic: concept-article
-ms.date: 03/25/2025
+ms.date: 08/14/2025
 ms.author: anfdocs
 # Customer intent: "As a cloud storage administrator, I want to understand the throughput capabilities of different service levels in Azure NetApp Files, so that I can choose the right configuration to meet my application's performance requirements."
 ---
 # Service levels for Azure NetApp Files

-Service levels are an attribute of a capacity pool. Service levels are defined and differentiated by the allowed maximum throughput for a volume in the capacity pool based on the quota that is assigned to the volume. Throughput is a combination of read and write speed.
+Service levels are an attribute of a capacity pool. Service levels are defined and differentiated by the allowed maximum throughput for a volume in the capacity pool based on the quota assigned to the volume. Throughput is a combination of read and write speed.

 ## Supported service levels

@@ -28,13 +28,18 @@ Azure NetApp Files supports four service levels: *Standard*, *Premium*, *Ultra*,
-The Flexible service level enables you to adjust throughput and size limits independently. This service level is designed for demanding applications such as Oracle or SAP HANA. You can also use the Flexible service level to create high-capacity volumes with (relatively) low throughput requirements or the reverse: low-capacity volumes with high throughput requirements. The minimum throughput to be assigned to a Flexible capacity pool is 128 MiB/second regardless of the pool quota. The first 128 MiB/s of throughput, known as the baseline, is included in the Flexible service level. The maximum throughput is 5 x 128 MiB/second/TiB x the size of the capacity pool in TiB. For more information see [Flexible service level throughput examples](#flexible-examples). You can assign throughput and capacity to volumes that are part of a Flexible capacity pool in the same way you do volumes that are part of a manual QoS capacity pool of any service level. Cool access isn't currently supported with the Flexible service level.
+The Flexible service level enables you to adjust throughput and size limits independently. You can use the Flexible service level to create high-capacity volumes with low throughput requirements or the reverse: low-capacity volumes with high throughput requirements. The Flexible service level is designed for demanding applications such as Oracle or SAP HANA.
+
+The minimum throughput you can assign a Flexible capacity pool is 128 MiB/second regardless of the pool quota. The maximum throughput is 5 x 128 MiB/second/TiB x the size of the capacity pool in TiB. For more information, see [Flexible service level throughput examples](#flexible-examples) and [considerations for the Flexible service level](azure-netapp-files-set-up-capacity-pool.md#considerations).

 >[!IMPORTANT]
 >The Flexible service level is only supported for new _manual QoS_ capacity pools.

 * Storage with cool access:
-Cool access storage is available with the Standard, Premium, and Ultra service levels. The throughput experience for any of these service levels with cool access is the same for cool access as it is for data in the hot tier. It may differ when data that resides in the cool tier is accessed. For more information, see [Azure NetApp Files storage with cool access](cool-access-introduction.md) and [Performance considerations for storage with cool access](performance-considerations-cool-access.md).
+[Cool access storage](manage-cool-access.md#register-the-feature) is available with the Standard, Premium, Ultra, and Flexible service levels. The throughput experience for any of these service levels with cool access is the same for cool access as it is for data in the hot tier. Throughput experiences differ when data that resides in the cool tier is accessed. For more information, see [Azure NetApp Files storage with cool access](cool-access-introduction.md) and [Performance considerations for storage with cool access](performance-considerations-cool-access.md).
+
+>[!NOTE]
+>Cool access pricing is calculated in the same manner for all service levels (Standard, Premium, Ultra, and Flexible).

 ## Throughput limits

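The Flexible minimum and maximum described above reduce to a small calculation. The sketch below is illustrative only; the function name and output shape are not part of the article.

```azurepowershell-interactive
# Flexible service level limits as described above:
#   minimum = 128 MiB/s regardless of pool size (the included baseline)
#   maximum = 5 x 128 MiB/s/TiB x pool size in TiB
function Get-FlexiblePoolThroughputLimits {
    param([int]$PoolSizeTiB)

    [pscustomobject]@{
        PoolSizeTiB = $PoolSizeTiB
        MinMiBps    = 128
        MaxMiBps    = 5 * 128 * $PoolSizeTiB
    }
}

# Example: a 10-TiB Flexible pool allows 128 to 6,400 MiB/s.
Get-FlexiblePoolThroughputLimits -PoolSizeTiB 10
```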
@@ -51,11 +56,11 @@ The following diagram shows throughput limit examples of volumes in an auto QoS

 * In Example 1, a volume from an auto QoS capacity pool with the Premium storage tier that is assigned 2 TiB of quota will be assigned a throughput limit of 128 MiB/s (2 TiB * 64 MiB/s). This scenario applies regardless of the capacity pool size or the actual volume consumption.

-* In Example 2, a volume from an auto QoS capacity pool with the Premium storage tier that is assigned 100 GiB of quota is assigned a throughput limit of 6.25 MiB/s (0.09765625 TiB * 64 MiB/s). This scenario applies regardless of the capacity pool size or the actual volume consumption.
+* In Example 2, a volume from an auto QoS capacity pool at the Premium service level with 100 GiB of quota has an assigned throughput limit of 6.25 MiB/s (0.09765625 TiB * 64 MiB/s). This scenario applies regardless of the capacity pool size or the actual volume consumption.

 ### Throughput limit examples of volumes in a manual QoS capacity pool

-If you use a manual QoS capacity pool, you can assign the capacity and throughput for a volume independently. When you create a volume in a manual QoS capacity pool, you can specify the throughput (MiB/S) value. The total throughput assigned to volumes in a manual QoS capacity pool depends on the size of the pool and the service level. Throughput limits for the Standard, Premium, and Ultra service levels are capped by a formula: capacity pool size in TiB x service level throughput/TiB. For instance, a 10-TiB capacity pool with the Ultra service level has a total throughput capacity of 1,280 MiB/s (10 TiB x 128 MiB/s/TiB) available for the volumes. For the Flexible service level, the formula is 5 x capacity pool size in TiB x minimum service level throughput (128 MiB/s/TiB). For examples, see [Flexible service level throughput examples](#flexible-examples).
+If you use a manual QoS capacity pool, you can assign the capacity and throughput for a volume independently. When you create a volume in a manual QoS capacity pool, you can specify the throughput (MiB/S) value. The total throughput assigned to volumes in a manual QoS capacity pool depends on the size of the pool and the service level. Throughput limits for the Standard, Premium, and Ultra service levels are capped by a formula: capacity pool size in TiB x service level throughput/TiB. For instance, a 10-TiB capacity pool with the Ultra service level has a total throughput capacity of 1,280 MiB/s (10 TiB x 128 MiB/s/TiB) available for the volumes. For the Flexible service level, the formula is 5 x capacity pool size in TiB x minimum service level throughput (128 MiB/s/TiB). For example, see [Flexible service level throughput examples](#flexible-examples).

 For example, for an SAP HANA system, this capacity pool can be used to create the following volumes. Each volume provides the individual size and throughput to meet your application requirements:

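Example 1 and Example 2 follow the same auto QoS arithmetic (volume quota in TiB multiplied by the service level throughput per TiB). The following sketch reproduces that arithmetic using the 64 MiB/s/TiB Premium figure from the examples; the helper function itself is illustrative only.

```azurepowershell-interactive
# Auto QoS: throughput limit = volume quota (TiB) x service level throughput per TiB.
function Get-AutoQosVolumeLimit {
    param(
        [double]$QuotaGiB,
        [double]$ThroughputPerTiB   # e.g. 64 MiB/s per TiB for the Premium service level
    )
    [math]::Round(($QuotaGiB / 1024) * $ThroughputPerTiB, 2)
}

Get-AutoQosVolumeLimit -QuotaGiB 2048 -ThroughputPerTiB 64   # Example 1: 128 MiB/s
Get-AutoQosVolumeLimit -QuotaGiB 100  -ThroughputPerTiB 64   # Example 2: 6.25 MiB/s
```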
@@ -79,12 +84,12 @@ The example extends to the Flexible service level as well. A Flexible service le
 - SAP HANA shared volume: Size 1 TiB with up to 64 MiB/s
 - SAP HANA backup volume: Size 4.5 TiB with up to 384 MiB/s

-As illustrated in the diagram, the SAP HANA backup volume received the 128MiB/s additional baseline throughput.
+As illustrated in the diagram, the SAP HANA backup volume receives baseline throughput of 128 MiB/s.

 #### <a name="flexible-examples">Flexible service level throughput examples:</a>

 | Flexible pool size (TiB) | Allowable throughput minimum (MiB/s) | Allowable throughput maximum (MiB/s) |
-| - | - | -- |
+|-|-|--|
 | 1 | 128 | 5 * 128 * 1 = 640 |
 | 2 | 128 | 5 * 128 * 2 = 1,280 |
 | 10 | 128 | 5 * 128 * 10 = 6,400 |
@@ -93,7 +98,7 @@ As illustrated in the diagram, the SAP HANA backup volume received the 128MiB/s
 | 1,024 | 128 | 5 * 128 * 1,024 = 655,360 |

 >[!NOTE]
->A baseline throughput of 128 MiB/s is provided for every pool at no additional cost.
+>Azure NetApp Files ensures a consistent baseline throughput regardless of capacity pool size. For example, both a 1-TiB capacity pool and a 10-TiB capacity pool receive the same complimentary baseline throughput of 128 MiB/s.

articles/azure-netapp-files/azure-netapp-files-set-up-capacity-pool.md (4 additions, 5 deletions)
@@ -5,7 +5,7 @@ services: azure-netapp-files
 author: b-hchen
 ms.service: azure-netapp-files
 ms.topic: how-to
-ms.date: 05/14/2025
+ms.date: 08/14/2025
 ms.author: anfdocs
 ms.custom:
 - build-2025
@@ -25,7 +25,7 @@ Creating a capacity pool enables you to create volumes within it.
 >[!IMPORTANT]
 >To create a 1-TiB capacity pool with a tag, you must use API versions `2023-07-01_preview` to `2024-01-01_preview` or stable releases from `2024-01-01`.
 * The Standard, Premium, and Ultra service levels are generally available (GA). No registration is required.
-* The **Flexible** service level is currently in preview and supported in all Azure NetApp Files regions. You must register the feature before using it for the first time:
+* <a name="flexible"></a> The **Flexible** service level is currently in preview and supported in all Azure NetApp Files regions. You must register the feature before using it for the first time:

 1. Register the feature:

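Feature registration for preview capabilities is normally done with the provider-feature cmdlets, as in the hedged sketch below; the feature name is a placeholder because the exact name isn't shown in this excerpt.

```azurepowershell-interactive
# Placeholder feature name: replace with the Flexible service level feature name
# given in the registration step of the article.
$featureName = "<flexible-service-level-feature-name>"

Register-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName $featureName

# Registration can take time; check the state until it reports Registered.
Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName $featureName
```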
@@ -49,7 +49,6 @@ Creating a capacity pool enables you to create volumes within it.
 * Flexible service level capacity pools can't be converted to the Standard, Premium, or Ultra service level.
 * The minimum throughput for Flexible service level capacity pools is 128 MiB/second. Maximum throughput is calculated based on the size of the capacity pool using the formula 5 x 128 MiB/second/TiB x capacity pool size in TiB. If your capacity pool is 1 TiB, the maximum is 640 MiB/second (5 x 128 x 1). For more examples, see [Service levels for Azure NetApp Files](azure-netapp-files-service-levels.md#flexible-examples).
 * You can increase the throughput of a Flexible service level pool at any time. Decreases to throughput on Flexible service level capacity pools can only occur following a 24-hour cool-down period. The 24-hour cool-down period initiates after any change to the throughput of the Flexible service level capacity pool.
-* Cool access isn't currently supported with the Flexible service level.
 * Only single encryption is currently supported for Flexible service level capacity pools.
 * Volumes in Flexible service level capacity pools can't be moved to capacity pools of a different service level. Similarly, you can't move volumes from capacity pools with different service levels into a Flexible service level capacity pool.

@@ -85,7 +84,7 @@ Creating a capacity pool enables you to create volumes within it.
 This option is only available for Flexible service level capacity pools. The minimum value is 128 MiB/second. Maximum throughput depends on the size of the capacity pool. For calculation details, see [Considerations](#considerations).

 * **Enable cool access**
-This option specifies whether volumes in the capacity pool support cool access. For details about using this option, see [Manage Azure NetApp Files storage with cool access](manage-cool-access.md). Cool access isn't currently supported on Flexible service level.
+This option specifies whether volumes in the capacity pool support cool access. For details about using this option, see [Manage Azure NetApp Files storage with cool access](manage-cool-access.md).

 * **QoS**
 Specify whether the capacity pool should use the **Manual** or **Auto** QoS type. See [Storage Hierarchy](azure-netapp-files-understand-storage-hierarchy.md) and [Performance Considerations](azure-netapp-files-performance-considerations.md) to understand the QoS types.
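For readers who script pool creation instead of using the portal options described above, a sketch with the Az.NetAppFiles module follows. The account, pool name, size, region, and QoS values are illustrative assumptions, and Flexible-specific parameters are omitted because they aren't covered in this excerpt.

```azurepowershell-interactive
# Create a 4-TiB manual QoS capacity pool at the Ultra service level.
# All names and sizes are illustrative; PoolSize is specified in bytes (4 TiB = 4398046511104).
New-AzNetAppFilesPool `
    -ResourceGroupName myResourceGroup `
    -AccountName myNetAppAccount `
    -Location eastus `
    -Name myCapacityPool `
    -PoolSize 4398046511104 `
    -ServiceLevel Ultra `
    -QosType Manual
```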
@@ -99,7 +98,7 @@ Creating a capacity pool enables you to create volumes within it.
 > [!IMPORTANT]
 > Azure NetApp Files double encryption at rest supports [Standard network features](azure-netapp-files-network-topologies.md#configurable-network-features), but not Basic network features. See [considerations](double-encryption-at-rest.md#considerations) for using Azure NetApp Files double encryption at rest.
 >
-> After the capacity pool is created, you can’t modify the setting (switching between `single` or `double`) for the encryption type.
+> After the capacity pool is created, you can’t modify the encryption type.

 :::image type="content" source="./media/azure-netapp-files-set-up-capacity-pool/flexible-service.png" alt-text="Screenshot showing the New Capacity Pool window.":::