articles/azure-netapp-files/azure-netapp-files-cost-model.md (1 addition, 1 deletion)

@@ -18,7 +18,7 @@ For cost model specific to cross-region replication, see [Cost model for cross-r
 
 Azure NetApp Files is billed on provisioned storage capacity, which is allocated by creating capacity pools. Capacity pools are billed monthly based on a set cost per allocated GiB per hour. Capacity pool allocation is measured hourly.
 
-Capacity pools must be at least 1 TiB and can be increased or decreased in 1-TiB intervals. Capacity pools contain volumes that range in size from a minimum of 100 GiB to a maximum of 100 TiB for regular volumes and up to 1 PiB for [large volumes](azure-netapp-files-understand-storage-hierarchy.md#large-volumes). Volumes are assigned quotas that are subtracted from the capacity pool’s provisioned size. For an active volume, capacity consumption against the quota is based on logical (effective) capacity, being active filesystem data or snapshot data. See [How Azure NetApp Files snapshots work](snapshots-introduction.md) for details.
+Capacity pools must be at least 1 TiB and can be increased or decreased in 1-TiB intervals. Capacity pools contain volumes that range in size from a minimum of 50 GiB to a maximum of 100 TiB for regular volumes and up to 1 PiB for [large volumes](azure-netapp-files-understand-storage-hierarchy.md#large-volumes). Volumes are assigned quotas that are subtracted from the capacity pool’s provisioned size. For an active volume, capacity consumption against the quota is based on logical (effective) capacity, being active filesystem data or snapshot data. See [How Azure NetApp Files snapshots work](snapshots-introduction.md) for details.
 * The [non-browsable shares](#non-browsable-share) and [access-based enumeration](#access-based-enumeration) features are currently in preview. You must register each feature before you can use it:
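Since pools are billed per allocated GiB per hour, the monthly charge is easy to estimate in a one-liner. A minimal sketch, assuming a hypothetical rate; check the Azure NetApp Files pricing page for the actual per-GiB rate in your region and service level:

```bash
# Hedged sketch: estimate the monthly cost of a capacity pool.
# RATE is a hypothetical placeholder, not a published price.
POOL_TIB=4            # provisioned pool size
RATE=0.000202         # assumed $/GiB/hour
HOURS=730             # average hours in a month
echo "$POOL_TIB * 1024 * $RATE * $HOURS" | bc -l    # ~604 USD for this example
```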
articles/azure-netapp-files/azure-netapp-files-introduction.md (1 addition, 1 deletion)

@@ -37,7 +37,7 @@ Azure NetApp Files is designed to provide high-performance file storage for ente
 | In-Azure bare-metal flash performance | Fast and reliable all-flash performance with submillisecond latency. | Run performance-intensive workloads in the cloud with on-premises infrastructure-level performance.
 | Multi-protocol support | Supports multiple protocols, including NFSv3, NFSv4.1, SMB 3.0, SMB 3.1.1, and simultaneous dual-protocol. | Seamlessly integrate with existing infrastructure and workflows without compatibility issues or complex configurations. |
 | Three flexible performance tiers (Standard, Premium, Ultra) | Three performance tiers with dynamic service-level change capability based on workload needs, including cool access for cold data. | Choose the right performance level for workloads and dynamically adjust performance without overspending on resources.
-| Small-to-large volumes | Easily resize file volumes from 100 GiB up to 100 TiB without downtime. | Scale storage as business needs grow without over-provisioning, avoiding upfront cost.
+| Small-to-large volumes | Easily resize file volumes from 50 GiB up to 100 TiB without downtime. | Scale storage as business needs grow without over-provisioning, avoiding upfront cost.
 | 1-TiB minimum capacity pool size | 1-TiB capacity pool is a reduced-size storage pool compared to the initial 4-TiB minimum. | Save money by starting with a smaller storage footprint and lower entry point, without sacrificing performance or availability. Scale storage based on growth without high upfront costs.
 | 2,048-TiB maximum capacity pool | 2048-TiB capacity pool is an increased storage pool compared to the initial 500-TiB maximum. | Reduce waste by creating larger, pooled capacity and performance budget, and share and distribute across volumes.
 | 50-1,024 TiB large volumes | Store large volumes of data up to 1,024 TiB in a single volume. | Manage large datasets and high-performance workloads with ease.
articles/azure-netapp-files/azure-netapp-files-resource-limits.md (1 addition, 1 deletion)

@@ -27,7 +27,7 @@ The following table describes resource limits for Azure NetApp Files:
 | Number of IPs in a virtual network (including immediately peered VNets) accessing volumes in an Azure NetApp Files hosting VNet | <ul><li>**Basic**: 1000</li><li>**Standard**: [Same standard limits as VMs](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-resource-manager-virtual-networking-limits)</li></ul> | No |
 | Minimum size of a single capacity pool | 1 TiB*| No |
 | Maximum size of a single capacity pool | 2,048 TiB | No |
-| Minimum size of a single regular volume |100 GiB | No |
+| Minimum size of a single regular volume |50 GiB | No |
 | Maximum size of a single regular volume | 100 TiB | No |
 | Minimum size of a single [large volume](large-volumes-requirements-considerations.md)| 50 TiB | No |
 | Large volume size increase | 30% of lowest provisioned size | Yes |
articles/azure-netapp-files/azure-netapp-files-service-levels.md (1 addition, 1 deletion)

@@ -42,7 +42,7 @@ The following diagram shows throughput limit examples of volumes in an auto QoS
 
 * In Example 1, a volume from an auto QoS capacity pool with the Premium storage tier that is assigned 2 TiB of quota will be assigned a throughput limit of 128 MiB/s (2 TiB * 64 MiB/s). This scenario applies regardless of the capacity pool size or the actual volume consumption.
 
-* In Example 2, a volume from an auto QoS capacity pool with the Premium storage tier that is assigned 100 GiB of quota will be assigned a throughput limit of 6.25 MiB/s (0.09765625 TiB * 64 MiB/s). This scenario applies regardless of the capacity pool size or the actual volume consumption.
+* In Example 2, a volume from an auto QoS capacity pool with the Premium storage tier that is assigned 100 GiB of quota is assigned a throughput limit of 6.25 MiB/s (0.09765625 TiB * 64 MiB/s). This scenario applies regardless of the capacity pool size or the actual volume consumption.
 
 ### Throughput limit examples of volumes in a manual QoS capacity pool
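The arithmetic behind both examples is simply quota multiplied by the tier multiplier (64 MiB/s per provisioned TiB for the Premium tier quoted above). A quick check of both figures:

```bash
# Auto QoS throughput limit = quota (TiB) * 64 MiB/s (Premium-tier multiplier
# from the examples above).
awk 'BEGIN { printf "Example 1: %.2f MiB/s\n", 2 * 64 }'            # 2 TiB quota -> 128 MiB/s
awk 'BEGIN { printf "Example 2: %.2f MiB/s\n", 100 / 1024 * 64 }'   # 100 GiB quota -> 6.25 MiB/s
```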
articles/azure-netapp-files/azure-netapp-files-understand-storage-hierarchy.md (4 additions, 4 deletions)

@@ -10,13 +10,13 @@ ms.author: anfdocs
 ---
 # Storage hierarchy of Azure NetApp Files
 
-Before creating a volume in Azure NetApp Files, you must purchase and set up a pool for provisioned capacity. To set up a capacity pool, you must have a NetApp account. Understanding the storage hierarchy helps you set up and manage your Azure NetApp Files resources.
+Before creating a volume in Azure NetApp Files, you must purchase and set up a pool for provisioned capacity. To set up a capacity pool, you must have a NetApp account. Understanding the storage hierarchy helps you set up and manage your Azure NetApp Files resources.
 
 > [!IMPORTANT]
 > Azure NetApp Files currently doesn't support resource migration between subscriptions.
 
 ## <a name="conceptual_diagram_of_storage_hierarchy"></a>Conceptual diagram of storage hierarchy
-The following example shows the relationships of the Azure subscription, NetApp accounts, capacity pools, and volumes.
+The following example shows the relationships of the Azure subscription, NetApp accounts, capacity pools, and volumes.
 
 :::image type="content" source="./media/azure-netapp-files-understand-storage-hierarchy/azure-netapp-files-storage-hierarchy.png" alt-text="Conceptual diagram of storage hierarchy." lightbox="./media/azure-netapp-files-understand-storage-hierarchy/azure-netapp-files-storage-hierarchy.png":::

@@ -73,11 +73,11 @@ When you use a manual QoS capacity pool with, for example, an SAP HANA system, a
 - A volume's capacity consumption counts against its pool's provisioned capacity.
 - A volume’s throughput consumption counts against its pool’s available throughput. See [Manual QoS type](#manual-qos-type).
 - Each volume belongs to only one pool, but a pool can contain multiple volumes.
-- Volumes contain a capacity of between 100 GiB and 100 TiB. You can create a [large volume](#large-volumes) with a size of between 50 TiB and 1 PiB.
+- Volumes contain a capacity of between 50 GiB and 100 TiB. You can create a [large volume](#large-volumes) with a size of between 50 TiB and 1 PiB.
 
 ## Large volumes
 
-Azure NetApp Files allows you to create large volumes up to 1 PiB in size. Large volumes begin at a capacity of 50 TiB and scale up to 1 PiB. Regular Azure NetApp Files volumes are offered between 100 GiB and 102,400 GiB.
+Azure NetApp Files allows you to create large volumes up to 1 PiB in size. Large volumes begin at a capacity of 50 TiB and scale up to 1 PiB. Regular Azure NetApp Files volumes are offered between 50 GiB and 102,400 GiB.
 
 For more information, see [Requirements and considerations for large volumes](large-volumes-requirements-considerations.md).
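The subscription, NetApp account, capacity pool, and volume hierarchy maps directly onto the Azure CLI. A minimal sketch; the resource names, region, and network values below are placeholder assumptions:

```bash
# Hedged sketch of the hierarchy: NetApp account -> capacity pool -> volume.
az netappfiles account create --resource-group myRG --name myAccount --location westus2

az netappfiles pool create --resource-group myRG --account-name myAccount \
  --name myPool --location westus2 --size 4 --service-level Premium    # pool size in TiB

az netappfiles volume create --resource-group myRG --account-name myAccount \
  --pool-name myPool --name myVolume --location westus2 \
  --service-level Premium --usage-threshold 100 \
  --file-path myvolume --vnet myVNet --subnet mySubnet \
  --protocol-types NFSv3    # volume quota (usage-threshold) in GiB
```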
articles/azure-netapp-files/backup-restore-new-volume.md (2 additions, 2 deletions)

@@ -36,7 +36,7 @@ Restoring a backup creates a new volume with the same protocol type. This articl
 > [!IMPORTANT]
 > Running multiple concurrent volume restores using Azure NetApp Files backup may increase the time it takes for each individual, in-progress restore to complete. As such, if time is a factor to you, you should prioritize and sequentialize the most important volume restores and wait until the restores are complete before starting another, lower priority, volume restores.
 
-See [Requirements and considerations for Azure NetApp Files backup](backup-requirements-considerations.md) for more considerations about using Azure NetApp Files backup.
+See [Requirements and considerations for Azure NetApp Files backup](backup-requirements-considerations.md) for more considerations about using Azure NetApp Files backup. See [Resource limits for Azure NetApp Files](azure-netapp-files-resource-limits.md) for information about minimums and maximums.
 
 ## Steps

@@ -58,7 +58,7 @@ See [Requirements and considerations for Azure NetApp Files backup](backup-requi
 However, if you restore a volume from the backup list at the NetApp account level, you need to specify the Protocol field. The Protocol field must match the protocol of the original volume. Otherwise, the restore operation fails with the following error:
 `Protocol Type value mismatch between input and source volume of backupId <backup-id of the selected backup>. Supported protocol type : <Protocol Type of the source volume>`
 
-* The **Quota** value must be **at least 20% greater** than the size of the backup from which the restore is triggered (minimum 100 GiB). Once the restore is complete, the volume can be resized depending on the size used.
+* The **Quota** value must be **at least 20% greater** than the size of the backup from which the restore is triggered. Once the restore is complete, the volume can be resized depending on the size used.
 
 * The **Capacity pool** that the backup is restored into must have sufficient unused capacity to host the new restored volume. Otherwise, the restore operation fails.
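When scripting a restore, the 20% headroom rule above reduces to one line of arithmetic. A sketch, assuming a hypothetical backup size:

```bash
# Hedged sketch: the restore quota must be at least 1.2x the backup size.
backup_gib=500                                 # hypothetical backup size in GiB
quota_gib=$(( (backup_gib * 12 + 9) / 10 ))    # integer ceiling of backup_gib * 1.2
echo "Set the new volume's quota to at least ${quota_gib} GiB"    # 600 GiB
```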
articles/azure-netapp-files/configure-application-volume-group-sap-hana-api.md (5 additions, 5 deletions)

@@ -5,7 +5,7 @@ services: azure-netapp-files
 author: b-ahibbard
 ms.service: azure-netapp-files
 ms.topic: conceptual
-ms.date: 04/09/2023
+ms.date: 08/08/2024
 ms.author: b-ahibbard
 ---
 # Configure application volume groups for SAP HANA using REST API

@@ -87,7 +87,7 @@ In a create request, use the following URI format:
 
 The request body consists of the _outer_ parameters, the group properties, and an array of volumes to be created, each with their individual outer parameters and volume properties.
 
-The following table describes the request body parameters and group level properties required to create a SAP HANA application volume group.
+The following table describes the request body parameters and group level properties required to create an SAP HANA application volume group.
 
 | URI parameter | Description | Restrictions for SAP HANA |
 | ---- | ----- | ----- |

@@ -98,7 +98,7 @@ The following table describes the request body parameters and group level proper
 |`applicationIdentifier`| Application specific identifier string, following application naming rules | The SAP System ID, which should follow aforementioned naming rules, for example `SH9`|
 |`volumes`| Array of volumes to be created (see the next table for volume-granular details) | Volume count depends upon host configuration: <ul><li>Single-host (3-5 volumes) <br /> **Required**: _data_, _log_ and _shared_ <br /> **Optional**: _data-backup_, _log-backup_ </li><li> Multiple-host (two volumes) <br /> **Required**: _data_ and _log_ </li></ul> |
 
-This table describes the request body parameters and volume properties for creating a volume in a SAP HANA application volume group.
+This table describes the request body parameters and volume properties for creating a volume in an SAP HANA application volume group.
 
 | Volume-level request parameter | Description | Restrictions for SAP HANA |
 | ---- | ----- | ----- |

@@ -107,7 +107,7 @@ This table describes the request body parameters and volume properties for creat
 |**Volume properties**|**Description**|**SAP HANA Value Restrictions**|
 |`creationToken`| Export path name, typically same as the volume name. | None. Example: `SH9-data-mnt00001`|
 |`throughputMibps`| QoS throughput | This must be between 1 Mbps and 4500 Mbps. You should set throughput based on volume type. |
-|`usageThreshhold`| Size of the volume in bytes. This must be in the 100 GiB to 100-TiB range. For instance, 100 GiB = 107374182400 bytes. | None. You should set volume size depending on the volume type. |
+|`usageThreshold`| Size of the volume in bytes. This must be in the 50 GiB to 100-TiB range. For instance, 100 GiB = 107374182400 bytes. | None. You should set volume size depending on the volume type. |
 |`exportPolicyRule`| Volume export policy rule | At least one export policy rule must be specified for SAP HANA. Only the following rules values can be modified for SAP HANA, the rest _must_ have their default values: <ul><li>`unixReadOnly`: should be false</li><li>`unixReadWrite`: should be true</li><li>`allowedClients`: specify allowed clients. Use `0.0.0.0/0` for no restrictions.</li><li>`hasRootAccess`: must be true to install SAP.</li><li>`chownMode`: Specify `chown` mode.</li><li>`nfsv41`: true for data, log, and shared volumes, optionally true for data backup and log backup volumes</li><li>`nfsv3`: optionally true for data backup and log backup volumes</li><ul> All other rule values _must_ be left defaulted. |
 |`volumeSpecName`| Specifies the type of volume for the application volume group being created | SAP HANA volumes must have a value that is one of the following: <ul><li>"data"</li><li>"log"</li><li>"shared"</li><li>"data-backup"</li><li>"log-backup"</li></ul> |
 |`proximityPlacementGroup`| Resource ID of the Proximity Placement Group (PPG) for proper placement of the volume. | <ul><li>The “data”, “log” and “shared” volumes must each have a PPG specified, preferably a common PPG.</li><li>A PPG must be specified for the “data-backup” and “log-backup” volumes, but it will be ignored during placement.</li></ul> |

@@ -145,7 +145,7 @@ In the following examples, selected placeholders are specified. You should repla
 
 SAP HANA volume groups for the following examples can be created using a sample shell script that calls the API using curl:
 
-1. Extract the subscription ID. This automates the extraction of the subscription ID and generate the authorization token:
+1. Extract the subscription ID. This automates the extraction of the subscription ID and generates the authorization token:
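A minimal sketch of that first step with the Azure CLI and curl. The request-body file, resource names, and api-version below are illustrative assumptions, not values from the article; the article's own sample script may differ:

```bash
# Hedged sketch: extract the subscription ID and an authorization token,
# then PUT the volume-group create request (placeholders throughout).
subId=$(az account show --query id --output tsv)
token=$(az account get-access-token --query accessToken --output tsv)

curl -X PUT \
  -H "Authorization: Bearer ${token}" \
  -H "Content-Type: application/json" \
  -d @sap-hana-volume-group.json \
  "https://management.azure.com/subscriptions/${subId}/resourceGroups/myRG/providers/Microsoft.NetApp/netAppAccounts/myAccount/volumeGroups/SH9-group?api-version=2023-11-01"
```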
articles/azure-netapp-files/configure-application-volume-oracle-api.md (2 additions, 2 deletions)

@@ -12,7 +12,7 @@ ms.service: azure-netapp-files
 ms.workload: storage
 ms.tgt_pltfrm: na
 ms.topic: conceptual
-ms.date: 10/20/2023
+ms.date: 08/08/2024
 ms.author: anfdocs
 ---
 # Configure application volume group for Oracle using REST API

@@ -61,7 +61,7 @@ The following tables describe the request body parameters and volume properties
 |---------|---------|---------|
 |`creationToken`| Export path name, typically same as the volume name. |`<sid>-ora-data1`|
 |`throughputMibps`| QoS throughput | You should set throughput based on volume type between 1 MiBps and 4500 MiBps. |
-|`usageThreshhold`| Size of the volume in bytes. This value must be in the 100 GiB to 100-TiB range. For instance, 100 GiB = 107374182400 bytes. | You should set volume size in bytes. |
+|`usageThreshold`| Size of the volume in bytes. This value must be in the 50 GiB to 100-TiB range. For instance, 100 GiB = 107374182400 bytes. | You should set volume size in bytes. |
 |`exportPolicyRule`| Volume export policy rule | At least one export policy rule must be specified for Oracle. Only the following rules values can be modified for Oracle. The rest *must* have their default values: <br><br> - `unixReadOnly`: should be false. <br><br> - `unixReadWrite`: should be true. <br><br> - `allowedClients`: specify allowed clients. Use `0.0.0.0/0` for no restrictions. <br><br> - `hasRootAccess`: must be true to use root user for installation. <br><br> - `chownMode`: Specify `chown` mode. <br><br> - `Select nfsv41: or nfsv3:`: as true. It's recommended to use the same protocol version for all volumes. <br> <br> All other rule values _must_ be left defaulted. |
 |`volumeSpecName`| Specifies the type of volume for the application volume group being created | Oracle volumes must have a value that is one of the following: <br><br> - `ora-data1` <br> - `ora-data2` <br> - `ora-data3` <br> - `ora-data4` <br> - `ora-data5` <br> - `ora-data6` <br> - `ora-data7` <br> - `ora-data8` <br> - `ora-log` <br> - `ora-log-mirror` <br> - `ora-binary` <br> - `ora-backup` <br> |
 |`proximityPlacementGroup`| Resource ID of the Proximity Placement Group (PPG) for proper placement of the volume. This parameter is optional. If the region has zones available, then use of zones is always priority. | The `data`, `log` and `mirror-log`, `ora-binary` and `backup` volumes must each have a PPG specified, preferably a common PPG. |
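Because `usageThreshold` is expressed in bytes, the GiB-to-bytes conversion is worth spelling out. The table's 100 GiB example and the new 50 GiB minimum check out as:

```bash
# usageThreshold is in bytes: GiB * 1024^3.
echo $(( 50 * 1024 * 1024 * 1024 ))     # 53687091200  -> the new 50 GiB minimum
echo $(( 100 * 1024 * 1024 * 1024 ))    # 107374182400 -> the 100 GiB example above
```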