
Commit 63ae32c

Merge pull request #299392 from b-ahibbard/5-6
5 6
2 parents fea05c2 + c40d2dd commit 63ae32c

7 files changed with 13 additions and 14 deletions

articles/azure-netapp-files/application-volume-group-disaster-recovery.md

Lines changed: 3 additions & 3 deletions
@@ -1,11 +1,11 @@
 ---
-title: Add volumes for an SAP HANA system as a DR system using Azure NetApp Files cross-region replication | Microsoft Docs
+title: Add volumes for an SAP HANA system as a DR system using Azure NetApp Files cross-region replication
 description: Describes using an application volume group to add volumes for an SAP HANA system as a disaster recovery (DR) system.
 services: azure-netapp-files
 author: b-hchen
 ms.service: azure-netapp-files
 ms.topic: how-to
-ms.date: 04/22/2025
+ms.date: 05/06/2025
 ms.author: anfdocs
 ---
 # Add volumes for an SAP HANA system as a DR system using cross-region replication
@@ -26,7 +26,7 @@ The following diagram illustrates cross-region replication between the source an
 > When you use an HA deployment with HSR at the primary side, you can choose to replicate not only the primary HANA system as described in this section, but also the HANA secondary system using cross-region replication. To automatically adapt the naming convention, you select both the **HSR secondary** and **Disaster recovery destination** options in the Create a Volume Group screen. The prefix then changes to `DR2-`.
 
 > [!IMPORTANT]
-> * Recovering the HANA database at the destination region requires that you use application-consistent storage snapshots for your HANA backup. You can create such snapshots by using data-protection solutions such as SnapCenter and the [Azure Application Consistent Snapshot tool](azacsnap-introduction.md) (AzAcSnap).
+> * Recovering the HANA database at the destination region requires that you use application-consistent storage snapshots for your HANA backup. You can create such snapshots by using data-protection solutions including [SnapCenter](https://docs.netapp.com/us-en/snapcenter/protect-azure/protect-applications-azure-netapp-files.html), [Azure Application Consistent Snapshot tool](azacsnap-introduction.md) (AzAcSnap), or other [validated partner solutions](../storage/solution-integration/validated-partners/backup-archive-disaster-recovery/partner-overview.md).
 > * You need to replicate at least the data volume and the log-backup volume.
 > * You can optionally replicate the data-backup volume and the shared volume.
 > * You should *never* replicate the log volume. The application volume group will create the log volume as a standard volume.

articles/azure-netapp-files/application-volume-group-manage-volumes-oracle.md

Lines changed: 3 additions & 4 deletions
@@ -1,13 +1,12 @@
 ---
-title: Manage volumes in Azure NetApp Files application volume group for Oracle | Microsoft Docs
+title: Manage volumes in Azure NetApp Files application volume group for Oracle
 description: Describes how to manage a volume from its application volume group for Oracle, including resizing, deleting, or changing throughput for the volume.
 services: azure-netapp-files
-documentationcenter: ''
 author: b-hchen
 ms.service: azure-netapp-files
 ms.workload: storage
 ms.topic: how-to
-ms.date: 04/17/2025
+ms.date: 05/06/2025
 ms.author: anfdocs
 ---
 # Manage volumes in an application volume group for Oracle
@@ -34,7 +33,7 @@ You can manage a volume from its volume group. You can resize, delete, or change
 > Changing the protocol type involves reconfiguration at the Linux host. When using dNFS, it's not recommended to mix volumes using NFSv3 and NFSv4.1.
 
 > [!NOTE]
-> Using Azure NetApp Files built-in automated snapshots doesn't create database consistent backups. Instead, use data protection software such as [SnapCenter](https://docs.netapp.com/us-en/snapcenter/protect-azure/protect-applications-azure-netapp-files.html) and [AzAcSnap](azacsnap-introduction.md) that supports snapshot-based data protection for Oracle.
+> Using Azure NetApp Files built-in automated snapshots doesn't create database consistent backups. Instead, use data protection software such as [SnapCenter](https://docs.netapp.com/us-en/snapcenter/protect-azure/protect-applications-azure-netapp-files.html), [AzAcSnap](azacsnap-introduction.md), or other [validated partner solutions](../storage/solution-integration/validated-partners/backup-archive-disaster-recovery/partner-overview.md) that support snapshot-based data protection for Oracle.
 
 * **Change Throughput**
 You can adapt the throughput of the volume.

articles/azure-netapp-files/application-volume-group-manage-volumes.md

Lines changed: 1 addition & 1 deletion
@@ -1,5 +1,5 @@
 ---
-title: Manage volumes in Azure NetApp Files application volume group | Microsoft Docs
+title: Manage volumes in Azure NetApp Files application volume group
 description: Describes how to manage a volume from its application volume group, including resizing, deleting, or changing throughput for the volume.
 services: azure-netapp-files
 author: b-hchen

articles/azure-netapp-files/azure-netapp-files-resource-limits.md

Lines changed: 2 additions & 2 deletions
@@ -1,5 +1,5 @@
 ---
-title: Resource limits for Azure NetApp Files | Microsoft Docs
+title: Resource limits for Azure NetApp Files
 description: Describes limits for Azure NetApp Files resources and how to request resource limit increase.
 services: azure-netapp-files
 author: b-hchen
@@ -28,7 +28,7 @@ The following table describes resource limits for Azure NetApp Files:
 | Minimum size of a single capacity pool | 1 TiB* | No |
 | Maximum size of a single capacity pool | 2,048 TiB | No |
 | Minimum throughput of a Flexible service level capacity pool | 128 MiB/second | No |
-| Maximum throughput of a Flexible service level capacity pool | [5 x 128 x Size of capacity pool in TiB](azure-netapp-files-set-up-capacity-pool.md#considerations) | No |
+| Maximum throughput of a Flexible service level capacity pool | [5 x 128 MiB/second/TiB x Size of capacity pool in TiB](azure-netapp-files-set-up-capacity-pool.md#considerations) | No |
 | Minimum size of a single regular volume | 50 GiB | No |
 | Maximum size of a single regular volume | 100 TiB | No |
 | Minimum size of a single [large volume](large-volumes-requirements-considerations.md) | 50 TiB | No |

articles/azure-netapp-files/azure-netapp-files-service-levels.md

Lines changed: 2 additions & 2 deletions
@@ -1,5 +1,5 @@
 ---
-title: Service levels for Azure NetApp Files | Microsoft Docs
+title: Service levels for Azure NetApp Files
 description: Describes throughput performance for the service levels of Azure NetApp Files.
 services: azure-netapp-files
 author: b-hchen
@@ -27,7 +27,7 @@ Azure NetApp Files supports four service levels: *Standard*, *Premium*, *Ultra*,
 
 * <a name="Flexible"></a>Flexible storage (preview):
 
-The Flexible service level enables you to adjust throughput and size limits independently. This service level is designed for demanding applications such as Oracle or SAP HANA. You can also use the Flexible service level to create high-capacity volumes with (relatively) low throughput requirements or the reverse: low-capacity volumes with high throughput requirements. The minimum throughput to be assigned to a Flexible capacity pool is 128 MiB/second regardless of the pool quota. The first 128 MiB/s of throughput, known as the baseline, is included in the Flexible service level. The maximum throughput is 5 x 128 x the size of the capacity pool in TiB. For more information see [Flexible service level throughput examples](#flexible-examples). You can assign throughput and capacity to volumes that are part of a Flexible capacity pool in the same way you do volumes that are part of a manual QoS capacity pool of any service level. Cool access isn't currently supported with the Flexible service level.
+The Flexible service level enables you to adjust throughput and size limits independently. This service level is designed for demanding applications such as Oracle or SAP HANA. You can also use the Flexible service level to create high-capacity volumes with (relatively) low throughput requirements or the reverse: low-capacity volumes with high throughput requirements. The minimum throughput to be assigned to a Flexible capacity pool is 128 MiB/second regardless of the pool quota. The first 128 MiB/s of throughput, known as the baseline, is included in the Flexible service level. The maximum throughput is 5 x 128 MiB/second/TiB x the size of the capacity pool in TiB. For more information, see [Flexible service level throughput examples](#flexible-examples). You can assign throughput and capacity to volumes that are part of a Flexible capacity pool in the same way you do volumes that are part of a manual QoS capacity pool of any service level. Cool access isn't currently supported with the Flexible service level.
 
 >[!IMPORTANT]
 >The Flexible service level is only supported for new _manual QoS_ capacity pools.
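To make the Flexible service level formula in the paragraph above concrete, here is a minimal illustrative sketch; the helper name and sample pool sizes are assumptions for demonstration, not part of the documentation change.

```python
# Sketch of the Flexible service level maximum-throughput formula quoted above:
# maximum throughput = 5 x 128 MiB/second per TiB of capacity pool size,
# with a 128 MiB/second baseline included regardless of pool quota.
def flexible_max_throughput_mib_s(pool_size_tib: float) -> float:
    """Return the illustrative maximum configurable throughput in MiB/second."""
    return 5 * 128 * pool_size_tib

for size_tib in (1, 2, 10):
    print(f"{size_tib} TiB pool -> up to {flexible_max_throughput_mib_s(size_tib):,.0f} MiB/second")
# 1 TiB -> 640 MiB/second; 2 TiB -> 1,280; 10 TiB -> 6,400
```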

articles/azure-netapp-files/azure-netapp-files-set-up-capacity-pool.md

Lines changed: 1 addition & 1 deletion
@@ -38,7 +38,7 @@ Creating a capacity pool enables you to create volumes within it.
 * The Flexible service level is only available for manual QoS capacity pools.
 * The Flexible service level is only available on newly created capacity pools. You can't convert an existing capacity pool to use the Flexible service level.
 * Flexible service level capacity pools can't be converted to the Standard, Premium, or Ultra service level.
-* The minimum throughput for Flexible service level capacity pools is 128 MiB/second. Maximum throughput is calculated based on the size of the capacity pool using the formula 5 x 128 x capacity pool size in TiB. If your capacity pool is 1 TiB, the maximum is 640 MiB/second (5 x 128 x 1). For more examples, see [Service levels for Azure NetApp Files](azure-netapp-files-service-levels.md#flexible-examples).
+* The minimum throughput for Flexible service level capacity pools is 128 MiB/second. Maximum throughput is calculated based on the size of the capacity pool using the formula 5 x 128 MiB/second/TiB x capacity pool size in TiB. If your capacity pool is 1 TiB, the maximum is 640 MiB/second (5 x 128 x 1). For more examples, see [Service levels for Azure NetApp Files](azure-netapp-files-service-levels.md#flexible-examples).
 * You can increase the throughput of a Flexible service level pool at any time. Decreases to throughput on Flexible service level capacity pools can only occur following a 24-hour cool-down period. The 24-hour cool-down period initiates after any change to the throughput of the Flexible service level capacity pool.
 * Cool access isn't currently supported with the Flexible service level.
 * Only single encryption is currently supported for Flexible service level capacity pools.
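The throughput rules in the list above (128 MiB/second minimum, the 5 x 128 MiB/second/TiB maximum, and the 24-hour cool-down for decreases) can be sketched as a simple check; the function and parameter names below are illustrative assumptions only.

```python
# Illustrative check of a requested Flexible pool throughput change, based on
# the considerations listed above: 128 MiB/s minimum, 5 x 128 MiB/s per TiB
# maximum, and decreases only after a 24-hour cool-down since the last change.
from datetime import datetime, timedelta

BASELINE_MIB_S = 128            # minimum (and included baseline) throughput
MAX_MIB_S_PER_TIB = 5 * 128     # maximum throughput per TiB of pool size

def check_throughput_change(pool_size_tib: float, current_mib_s: float,
                            requested_mib_s: float, last_change: datetime) -> None:
    max_mib_s = MAX_MIB_S_PER_TIB * pool_size_tib
    if requested_mib_s < BASELINE_MIB_S:
        raise ValueError(f"Throughput can't be set below {BASELINE_MIB_S} MiB/s.")
    if requested_mib_s > max_mib_s:
        raise ValueError(f"A {pool_size_tib} TiB pool is limited to {max_mib_s:.0f} MiB/s.")
    if requested_mib_s < current_mib_s and datetime.now() - last_change < timedelta(hours=24):
        raise ValueError("Decreases are only allowed after the 24-hour cool-down period.")

# Example: a 1 TiB pool allows up to 640 MiB/s (5 x 128 x 1).
check_throughput_change(1, 256, 640, datetime.now() - timedelta(days=2))
```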

articles/storage/files/storage-files-netapp-comparison.md

Lines changed: 1 addition & 1 deletion
@@ -40,7 +40,7 @@ Most workloads that require cloud file storage work well on either Azure Files o
 | Minimum Share/Volume Size | Premium<br><ul><li>100 GiB</li></ul><br>Standard<br><ul><li>No minimum (SMB only - NFS requires Premium shares).</li></ul> | All tiers<br><ul><li>50 GiB (Minimum capacity pool size: 1 TiB)</li></ul> |
 | Maximum Share/Volume Size | 100 TiB | All tiers<br><ul><li>100 TiB (regular volume)</li><li>2 PiB (large volume)</li><li>2,048 TiB capacity pool size limit</li></ul><br>Up to 12.5 PiB per Azure NetApp account |
 | Maximum Share/Volume IOPS | Premium<br><ul><li>Up to 100k</li></ul><br>Standard<br><ul><li>Up to 20k</li></ul> | Ultra, Premium, and Flexible<br><ul><li>Up to 450k </li></ul><br>Standard<br><ul><li>Up to 320k</li></ul> |
-| Maximum Share/Volume Throughput | Premium<br><ul><li>Up to 10 GiB/s</li></ul><br>Standard<br><ul><li>Up to [storage account limits](./storage-files-scale-targets.md#storage-account-scale-targets).</li></ul> | Ultra<br><ul><li>4.5 GiB/s (regular volume)</li><li>12.5 GiB/s (large volume)</li></ul><br>Premium<br><ul><li>Up to 4.5 GiB/s (regular volume)</li><li>12.5 GiB/s (large volume)</li></ul><br>Standard<br><ul><li>Up to 1.6 GiB/s (regular volume)</li>12.5 GiB/s (large volume)<li></li></ul><br>Flexible<ul><li>[5 x 128 x Size of capacity pool in TiB](../../azure-netapp-files/azure-netapp-files-service-levels.md#flexible-service-level-throughput-examples)</li></ul> |
+| Maximum Share/Volume Throughput | Premium<br><ul><li>Up to 10 GiB/s</li></ul><br>Standard<br><ul><li>Up to [storage account limits](./storage-files-scale-targets.md#storage-account-scale-targets).</li></ul> | Ultra<br><ul><li>4.5 GiB/s (regular volume)</li><li>12.5 GiB/s (large volume)</li></ul><br>Premium<br><ul><li>Up to 4.5 GiB/s (regular volume)</li><li>12.5 GiB/s (large volume)</li></ul><br>Standard<br><ul><li>Up to 1.6 GiB/s (regular volume)</li><li>12.5 GiB/s (large volume)</li></ul><br>Flexible<br><ul><li>[5 x 128 MiB/second/TiB x Size of capacity pool in TiB](../../azure-netapp-files/azure-netapp-files-service-levels.md#flexible-service-level-throughput-examples)</li></ul> |
 | Maximum File Size | 4 TiB | 16 TiB |
 | Maximum IOPS Per File | Premium<br><ul><li>Up to 8,000</li></ul><br>Standard<br><ul><li>1,000</li></ul> | All tiers<br><ul><li>Up to volume limit</li></ul> |
 | Maximum Throughput Per File | Premium<br><ul><li>300 MiB/s (Up to 1 GiB/s with SMB multichannel)</li></ul><br>Standard<br><ul><li>60 MiB/s</li></ul> | All tiers<br><ul><li>Up to volume limit</li></ul> |
