Commit 880c5c5

Merge pull request #281628 from b-ahibbard/50gib
50 gib minimum update + afec
2 parents a4ae97f + 968b81f commit 880c5c5

20 files changed: +85 -47 lines

articles/azure-netapp-files/azure-netapp-files-cost-model.md

Lines changed: 1 addition & 1 deletion
@@ -18,7 +18,7 @@ For cost model specific to cross-region replication, see [Cost model for cross-r
 
 Azure NetApp Files is billed on provisioned storage capacity, which is allocated by creating capacity pools. Capacity pools are billed monthly based on a set cost per allocated GiB per hour. Capacity pool allocation is measured hourly.
 
-Capacity pools must be at least 1 TiB and can be increased or decreased in 1-TiB intervals. Capacity pools contain volumes that range in size from a minimum of 100 GiB to a maximum of 100 TiB for regular volumes and up to 1 PiB for [large volumes](azure-netapp-files-understand-storage-hierarchy.md#large-volumes). Volumes are assigned quotas that are subtracted from the capacity pool’s provisioned size. For an active volume, capacity consumption against the quota is based on logical (effective) capacity, being active filesystem data or snapshot data. See [How Azure NetApp Files snapshots work](snapshots-introduction.md) for details.
+Capacity pools must be at least 1 TiB and can be increased or decreased in 1-TiB intervals. Capacity pools contain volumes that range in size from a minimum of 50 GiB to a maximum of 100 TiB for regular volumes and up to 1 PiB for [large volumes](azure-netapp-files-understand-storage-hierarchy.md#large-volumes). Volumes are assigned quotas that are subtracted from the capacity pool’s provisioned size. For an active volume, capacity consumption against the quota is based on logical (effective) capacity, being active filesystem data or snapshot data. See [How Azure NetApp Files snapshots work](snapshots-introduction.md) for details.
 
 ### Pricing examples
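The per-GiB-per-hour billing model described in this file can be illustrated with a small shell sketch. The rate used below is a hypothetical placeholder, not a published Azure NetApp Files price:

```shell
# Capacity pools bill on allocated capacity per GiB per hour.
# pool_cost GIB HOURS RATE -> total cost for the period.
pool_cost() {
    awk -v g="$1" -v h="$2" -v r="$3" 'BEGIN { printf "%.2f\n", g * h * r }'
}

# A minimum 1-TiB (1024 GiB) pool allocated for a 730-hour month
# at a hypothetical $0.0002 per GiB-hour:
pool_cost 1024 730 0.0002   # 149.50
```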

articles/azure-netapp-files/azure-netapp-files-create-volumes-smb.md

Lines changed: 1 addition & 0 deletions
@@ -20,6 +20,7 @@ This article shows you how to create an SMB3 volume. For NFS volumes, see [Creat
 
 * You must have already set up a capacity pool. See [Create a capacity pool](azure-netapp-files-set-up-capacity-pool.md).
 * A subnet must be delegated to Azure NetApp Files. See [Delegate a subnet to Azure NetApp Files](azure-netapp-files-delegate-subnet.md).
+* [!INCLUDE [50 GiB volume preview](./includes/50-gib-volume.md)]
 * The [non-browsable shares](#non-browsable-share) and [access-based enumeration](#access-based-enumeration) features are currently in preview. You must register each feature before you can use it:
 
 1. Register the feature:

articles/azure-netapp-files/azure-netapp-files-create-volumes.md

Lines changed: 2 additions & 0 deletions
@@ -23,6 +23,8 @@ This article shows you how to create an NFS volume. For SMB volumes, see [Create
 * A subnet must be delegated to Azure NetApp Files.
 See [Delegate a subnet to Azure NetApp Files](azure-netapp-files-delegate-subnet.md).
 
+* [!INCLUDE [50 GiB volume preview](./includes/50-gib-volume.md)]
+
 ## Considerations
 
 * Deciding which NFS version to use

articles/azure-netapp-files/azure-netapp-files-introduction.md

Lines changed: 1 addition & 1 deletion
@@ -37,7 +37,7 @@ Azure NetApp Files is designed to provide high-performance file storage for ente
 | In-Azure bare-metal flash performance | Fast and reliable all-flash performance with submillisecond latency. | Run performance-intensive workloads in the cloud with on-premises infrastructure-level performance.
 | Multi-protocol support | Supports multiple protocols, including NFSv3, NFSv4.1, SMB 3.0, SMB 3.1.1, and simultaneous dual-protocol. | Seamlessly integrate with existing infrastructure and workflows without compatibility issues or complex configurations. |
 | Three flexible performance tiers (Standard, Premium, Ultra) | Three performance tiers with dynamic service-level change capability based on workload needs, including cool access for cold data. | Choose the right performance level for workloads and dynamically adjust performance without overspending on resources.
-| Small-to-large volumes | Easily resize file volumes from 100 GiB up to 100 TiB without downtime. | Scale storage as business needs grow without over-provisioning, avoiding upfront cost.
+| Small-to-large volumes | Easily resize file volumes from 50 GiB up to 100 TiB without downtime. | Scale storage as business needs grow without over-provisioning, avoiding upfront cost.
 | 1-TiB minimum capacity pool size | 1-TiB capacity pool is a reduced-size storage pool compared to the initial 4-TiB minimum. | Save money by starting with a smaller storage footprint and lower entry point, without sacrificing performance or availability. Scale storage based on growth without high upfront costs.
 | 2,048-TiB maximum capacity pool | 2048-TiB capacity pool is an increased storage pool compared to the initial 500-TiB maximum. | Reduce waste by creating larger, pooled capacity and performance budget, and share and distribute across volumes.
 | 50-1,024 TiB large volumes | Store large volumes of data up to 1,024 TiB in a single volume. | Manage large datasets and high-performance workloads with ease.

articles/azure-netapp-files/azure-netapp-files-resource-limits.md

Lines changed: 1 addition & 1 deletion
@@ -27,7 +27,7 @@ The following table describes resource limits for Azure NetApp Files:
 | Number of IPs in a virtual network (including immediately peered VNets) accessing volumes in an Azure NetApp Files hosting VNet | <ul><li>**Basic**: 1000</li><li>**Standard**: [Same standard limits as VMs](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-resource-manager-virtual-networking-limits)</li></ul> | No |
 | Minimum size of a single capacity pool | 1 TiB* | No |
 | Maximum size of a single capacity pool | 2,048 TiB | No |
-| Minimum size of a single regular volume | 100 GiB | No |
+| Minimum size of a single regular volume | 50 GiB | No |
 | Maximum size of a single regular volume | 100 TiB | No |
 | Minimum size of a single [large volume](large-volumes-requirements-considerations.md) | 50 TiB | No |
 | Large volume size increase | 30% of lowest provisioned size | Yes |

articles/azure-netapp-files/azure-netapp-files-service-levels.md

Lines changed: 1 addition & 1 deletion
@@ -42,7 +42,7 @@ The following diagram shows throughput limit examples of volumes in an auto QoS
 
 * In Example 1, a volume from an auto QoS capacity pool with the Premium storage tier that is assigned 2 TiB of quota will be assigned a throughput limit of 128 MiB/s (2 TiB * 64 MiB/s). This scenario applies regardless of the capacity pool size or the actual volume consumption.
 
-* In Example 2, a volume from an auto QoS capacity pool with the Premium storage tier that is assigned 100 GiB of quota will be assigned a throughput limit of 6.25 MiB/s (0.09765625 TiB * 64 MiB/s). This scenario applies regardless of the capacity pool size or the actual volume consumption.
+* In Example 2, a volume from an auto QoS capacity pool with the Premium storage tier that is assigned 100 GiB of quota is assigned a throughput limit of 6.25 MiB/s (0.09765625 TiB * 64 MiB/s). This scenario applies regardless of the capacity pool size or the actual volume consumption.
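The quota-to-throughput arithmetic in these two examples can be sketched in shell; the 64 MiB/s-per-TiB factor is the Premium-tier figure the examples use, passed in as a parameter:

```shell
# Auto QoS throughput limit: quota (converted GiB -> TiB) * service-level factor.
auto_qos_limit() {
    # $1 = quota in GiB, $2 = MiB/s per TiB for the service level
    awk -v gib="$1" -v f="$2" 'BEGIN { printf "%g\n", gib / 1024 * f }'
}

auto_qos_limit 2048 64   # Example 1: 2 TiB of quota -> 128 MiB/s
auto_qos_limit 100 64    # Example 2: 100 GiB of quota -> 6.25 MiB/s
```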
 
 ### Throughput limit examples of volumes in a manual QoS capacity pool

articles/azure-netapp-files/azure-netapp-files-understand-storage-hierarchy.md

Lines changed: 4 additions & 4 deletions
@@ -10,13 +10,13 @@ ms.author: anfdocs
 ---
 # Storage hierarchy of Azure NetApp Files
 
-Before creating a volume in Azure NetApp Files, you must purchase and set up a pool for provisioned capacity. To set up a capacity pool, you must have a NetApp account. Understanding the storage hierarchy helps you set up and manage your Azure NetApp Files resources.
+Before creating a volume in Azure NetApp Files, you must purchase and set up a pool for provisioned capacity. To set up a capacity pool, you must have a NetApp account. Understanding the storage hierarchy helps you set up and manage your Azure NetApp Files resources.
 
 > [!IMPORTANT]
 > Azure NetApp Files currently doesn't support resource migration between subscriptions.
 
 ## <a name="conceptual_diagram_of_storage_hierarchy"></a>Conceptual diagram of storage hierarchy
-The following example shows the relationships of the Azure subscription, NetApp accounts, capacity pools, and volumes.
+The following example shows the relationships of the Azure subscription, NetApp accounts, capacity pools, and volumes.
 
 :::image type="content" source="./media/azure-netapp-files-understand-storage-hierarchy/azure-netapp-files-storage-hierarchy.png" alt-text="Conceptual diagram of storage hierarchy." lightbox="./media/azure-netapp-files-understand-storage-hierarchy/azure-netapp-files-storage-hierarchy.png":::

@@ -73,11 +73,11 @@ When you use a manual QoS capacity pool with, for example, an SAP HANA system, a
 - A volume's capacity consumption counts against its pool's provisioned capacity.
 - A volume’s throughput consumption counts against its pool’s available throughput. See [Manual QoS type](#manual-qos-type).
 - Each volume belongs to only one pool, but a pool can contain multiple volumes.
-- Volumes contain a capacity of between 100 GiB and 100 TiB. You can create a [large volume](#large-volumes) with a size of between 50 TiB and 1 PiB.
+- Volumes contain a capacity of between 50 GiB and 100 TiB. You can create a [large volume](#large-volumes) with a size of between 50 TiB and 1 PiB.
 
 ## Large volumes
 
-Azure NetApp Files allows you to create large volumes up to 1 PiB in size. Large volumes begin at a capacity of 50 TiB and scale up to 1 PiB. Regular Azure NetApp Files volumes are offered between 100 GiB and 102,400 GiB.
+Azure NetApp Files allows you to create large volumes up to 1 PiB in size. Large volumes begin at a capacity of 50 TiB and scale up to 1 PiB. Regular Azure NetApp Files volumes are offered between 50 GiB and 102,400 GiB.
 
 For more information, see [Requirements and considerations for large volumes](large-volumes-requirements-considerations.md).

articles/azure-netapp-files/backup-restore-new-volume.md

Lines changed: 2 additions & 2 deletions
@@ -36,7 +36,7 @@ Restoring a backup creates a new volume with the same protocol type. This articl
 > [!IMPORTANT]
 > Running multiple concurrent volume restores using Azure NetApp Files backup may increase the time it takes for each individual, in-progress restore to complete. If time is a factor, prioritize and sequence the most important volume restores, waiting until each restore completes before starting another, lower-priority volume restore.
 
-See [Requirements and considerations for Azure NetApp Files backup](backup-requirements-considerations.md) for more considerations about using Azure NetApp Files backup.
+See [Requirements and considerations for Azure NetApp Files backup](backup-requirements-considerations.md) for more considerations about using Azure NetApp Files backup. See [Resource limits for Azure NetApp Files](azure-netapp-files-resource-limits.md) for information about minimums and maximums.
 
 ## Steps

@@ -58,7 +58,7 @@ See [Requirements and considerations for Azure NetApp Files backup](backup-requi
 However, if you restore a volume from the backup list at the NetApp account level, you need to specify the Protocol field. The Protocol field must match the protocol of the original volume. Otherwise, the restore operation fails with the following error:
 `Protocol Type value mismatch between input and source volume of backupId <backup-id of the selected backup>. Supported protocol type : <Protocol Type of the source volume>`
 
-* The **Quota** value must be **at least 20% greater** than the size of the backup from which the restore is triggered (minimum 100 GiB). Once the restore is complete, the volume can be resized depending on the size used.
+* The **Quota** value must be **at least 20% greater** than the size of the backup from which the restore is triggered. Once the restore is complete, the volume can be resized depending on the size used.
 
 * The **Capacity pool** that the backup is restored into must have sufficient unused capacity to host the new restored volume. Otherwise, the restore operation fails.
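The 20% headroom rule in the quota bullet above can be sketched as a quick shell check. Sizes are in GiB, and rounding up to a whole GiB is an assumption for illustration:

```shell
# Minimum restore quota: at least 20% larger than the backup size.
min_restore_quota() {
    # $1 = backup size in GiB; round up to a whole GiB.
    awk -v b="$1" 'BEGIN { q = b * 1.2; printf "%d\n", (q == int(q)) ? q : int(q) + 1 }'
}

min_restore_quota 500   # 600 GiB minimum quota
min_restore_quota 101   # 122 GiB (121.2 rounded up)
```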

articles/azure-netapp-files/configure-application-volume-group-sap-hana-api.md

Lines changed: 5 additions & 5 deletions
@@ -5,7 +5,7 @@ services: azure-netapp-files
 author: b-ahibbard
 ms.service: azure-netapp-files
 ms.topic: conceptual
-ms.date: 04/09/2023
+ms.date: 08/08/2024
 ms.author: b-ahibbard
 ---
 # Configure application volume groups for SAP HANA using REST API
@@ -87,7 +87,7 @@ In a create request, use the following URI format:
 
 The request body consists of the _outer_ parameters, the group properties, and an array of volumes to be created, each with their individual outer parameters and volume properties.
 
-The following table describes the request body parameters and group level properties required to create a SAP HANA application volume group.
+The following table describes the request body parameters and group level properties required to create an SAP HANA application volume group.
 
 | URI parameter | Description | Restrictions for SAP HANA |
 | ---- | ----- | ----- |
@@ -98,7 +98,7 @@ The following table describes the request body parameters and group level proper
 | `applicationIdentifier` | Application specific identifier string, following application naming rules | The SAP System ID, which should follow aforementioned naming rules, for example `SH9` |
 | `volumes` | Array of volumes to be created (see the next table for volume-granular details) | Volume count depends upon host configuration: <ul><li>Single-host (3-5 volumes) <br /> **Required**: _data_, _log_ and _shared_ <br /> **Optional**: _data-backup_, _log-backup_ </li><li> Multiple-host (two volumes) <br /> **Required**: _data_ and _log_ </li></ul> |
 
-This table describes the request body parameters and volume properties for creating a volume in a SAP HANA application volume group.
+This table describes the request body parameters and volume properties for creating a volume in an SAP HANA application volume group.
 
 | Volume-level request parameter | Description | Restrictions for SAP HANA |
 | ---- | ----- | ----- |
@@ -107,7 +107,7 @@ This table describes the request body parameters and volume properties for creat
 | **Volume properties** | **Description** | **SAP HANA Value Restrictions** |
 | `creationToken` | Export path name, typically same as the volume name. | None. Example: `SH9-data-mnt00001` |
 | `throughputMibps` | QoS throughput | This must be between 1 MiBps and 4500 MiBps. You should set throughput based on volume type. |
-| `usageThreshhold` | Size of the volume in bytes. This must be in the 100 GiB to 100-TiB range. For instance, 100 GiB = 107374182400 bytes. | None. You should set volume size depending on the volume type. |
+| `usageThreshold` | Size of the volume in bytes. This must be in the 50 GiB to 100-TiB range. For instance, 100 GiB = 107374182400 bytes. | None. You should set volume size depending on the volume type. |
 | `exportPolicyRule` | Volume export policy rule | At least one export policy rule must be specified for SAP HANA. Only the following rule values can be modified for SAP HANA; the rest _must_ have their default values: <ul><li>`unixReadOnly`: should be false</li><li>`unixReadWrite`: should be true</li><li>`allowedClients`: specify allowed clients. Use `0.0.0.0/0` for no restrictions.</li><li>`hasRootAccess`: must be true to install SAP.</li><li>`chownMode`: Specify `chown` mode.</li><li>`nfsv41`: true for data, log, and shared volumes, optionally true for data backup and log backup volumes</li><li>`nfsv3`: optionally true for data backup and log backup volumes</li></ul> All other rule values _must_ be left defaulted. |
 | `volumeSpecName` | Specifies the type of volume for the application volume group being created | SAP HANA volumes must have a value that is one of the following: <ul><li>"data"</li><li>"log"</li><li>"shared"</li><li>"data-backup"</li><li>"log-backup"</li></ul> |
 | `proximityPlacementGroup` | Resource ID of the Proximity Placement Group (PPG) for proper placement of the volume. | <ul><li>The “data”, “log” and “shared” volumes must each have a PPG specified, preferably a common PPG.</li><li>A PPG must be specified for the “data-backup” and “log-backup” volumes, but it will be ignored during placement.</li></ul> |
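The byte figures used for `usageThreshold` in the table above follow from GiB * 1024^3; a quick sketch of the conversion:

```shell
# usageThreshold is expressed in bytes: 1 GiB = 1024^3 bytes.
gib_to_bytes() {
    echo $(( $1 * 1024 * 1024 * 1024 ))
}

gib_to_bytes 100   # 107374182400, the example in the table
gib_to_bytes 50    # 53687091200, the new 50-GiB minimum
```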
@@ -145,7 +145,7 @@ In the following examples, selected placeholders are specified. You should repla
 
 SAP HANA volume groups for the following examples can be created using a sample shell script that calls the API using curl:
 
-1. Extract the subscription ID. This automates the extraction of the subscription ID and generate the authorization token:
+1. Extract the subscription ID. This automates the extraction of the subscription ID and generates the authorization token:
    ```bash
    subId=$(az account list | jq ".[] | select (.name == \"Pay-As-You-Go\") | .id" -r)
    echo "Subscription ID: $subId"
    ```

articles/azure-netapp-files/configure-application-volume-oracle-api.md

Lines changed: 2 additions & 2 deletions
@@ -12,7 +12,7 @@ ms.service: azure-netapp-files
 ms.workload: storage
 ms.tgt_pltfrm: na
 ms.topic: conceptual
-ms.date: 10/20/2023
+ms.date: 08/08/2024
 ms.author: anfdocs
 ---
 # Configure application volume group for Oracle using REST API
@@ -61,7 +61,7 @@ The following tables describe the request body parameters and volume properties
 |---------|---------|---------|
 | `creationToken` | Export path name, typically same as the volume name. | `<sid>-ora-data1` |
 | `throughputMibps` | QoS throughput | You should set throughput based on volume type between 1 MiBps and 4500 MiBps. |
-| `usageThreshhold` | Size of the volume in bytes. This value must be in the 100 GiB to 100-TiB range. For instance, 100 GiB = 107374182400 bytes. | You should set volume size in bytes. |
+| `usageThreshold` | Size of the volume in bytes. This value must be in the 50 GiB to 100-TiB range. For instance, 100 GiB = 107374182400 bytes. | You should set volume size in bytes. |
 | `exportPolicyRule` | Volume export policy rule | At least one export policy rule must be specified for Oracle. Only the following rule values can be modified for Oracle. The rest *must* have their default values: <br><br> - `unixReadOnly`: should be false. <br><br> - `unixReadWrite`: should be true. <br><br> - `allowedClients`: specify allowed clients. Use `0.0.0.0/0` for no restrictions. <br><br> - `hasRootAccess`: must be true to use root user for installation. <br><br> - `chownMode`: Specify `chown` mode. <br><br> - `nfsv41` or `nfsv3`: set one to true. It's recommended to use the same protocol version for all volumes. <br><br> All other rule values _must_ be left defaulted. |
 | `volumeSpecName` | Specifies the type of volume for the application volume group being created | Oracle volumes must have a value that is one of the following: <br><br> - `ora-data1` <br> - `ora-data2` <br> - `ora-data3` <br> - `ora-data4` <br> - `ora-data5` <br> - `ora-data6` <br> - `ora-data7` <br> - `ora-data8` <br> - `ora-log` <br> - `ora-log-mirror` <br> - `ora-binary` <br> - `ora-backup` <br> |
 | `proximityPlacementGroup` | Resource ID of the Proximity Placement Group (PPG) for proper placement of the volume. This parameter is optional. If the region has zones available, then use of zones is always priority. | The `data`, `log` and `mirror-log`, `ora-binary` and `backup` volumes must each have a PPG specified, preferably a common PPG. |
