
Commit 968b81f

Commit message: acrolinx
1 parent 5ad9ecf commit 968b81f


5 files changed: +18 -18 lines changed


articles/azure-netapp-files/configure-application-volume-group-sap-hana-api.md

Lines changed: 5 additions & 5 deletions
@@ -5,7 +5,7 @@ services: azure-netapp-files
author: b-ahibbard
ms.service: azure-netapp-files
ms.topic: conceptual
- ms.date: 04/09/2023
+ ms.date: 08/08/2024
ms.author: b-ahibbard
---
# Configure application volume groups for SAP HANA using REST API
@@ -87,7 +87,7 @@ In a create request, use the following URI format:

The request body consists of the _outer_ parameters, the group properties, and an array of volumes to be created, each with their individual outer parameters and volume properties.

- The following table describes the request body parameters and group level properties required to create a SAP HANA application volume group.
+ The following table describes the request body parameters and group level properties required to create an SAP HANA application volume group.

| URI parameter | Description | Restrictions for SAP HANA |
| ---- | ----- | ----- |
@@ -98,7 +98,7 @@ The following table describes the request body parameters and group level proper
| `applicationIdentifier` | Application specific identifier string, following application naming rules | The SAP System ID, which should follow aforementioned naming rules, for example `SH9` |
| `volumes` | Array of volumes to be created (see the next table for volume-granular details) | Volume count depends upon host configuration: <ul><li>Single-host (3-5 volumes) <br /> **Required**: _data_, _log_ and _shared_ <br /> **Optional**: _data-backup_, _log-backup_ </li><li> Multiple-host (two volumes) <br /> **Required**: _data_ and _log_ </li></ul> |

- This table describes the request body parameters and volume properties for creating a volume in a SAP HANA application volume group.
+ This table describes the request body parameters and volume properties for creating a volume in an SAP HANA application volume group.

| Volume-level request parameter | Description | Restrictions for SAP HANA |
| ---- | ----- | ----- |
@@ -107,7 +107,7 @@ This table describes the request body parameters and volume properties for creat
| **Volume properties** | **Description** | **SAP HANA Value Restrictions** |
| `creationToken` | Export path name, typically same as the volume name. | None. Example: `SH9-data-mnt00001` |
| `throughputMibps` | QoS throughput | This must be between 1 Mbps and 4500 Mbps. You should set throughput based on volume type. |
- | `usageThreshhold` | Size of the volume in bytes. This must be in the 50 GiB to 100-TiB range. For instance, 100 GiB = 107374182400 bytes. | None. You should set volume size depending on the volume type. |
+ | `usageThreshold` | Size of the volume in bytes. This must be in the 50 GiB to 100-TiB range. For instance, 100 GiB = 107374182400 bytes. | None. You should set volume size depending on the volume type. |
| `exportPolicyRule` | Volume export policy rule | At least one export policy rule must be specified for SAP HANA. Only the following rules values can be modified for SAP HANA, the rest _must_ have their default values: <ul><li>`unixReadOnly`: should be false</li><li>`unixReadWrite`: should be true</li><li>`allowedClients`: specify allowed clients. Use `0.0.0.0/0` for no restrictions.</li><li>`hasRootAccess`: must be true to install SAP.</li><li>`chownMode`: Specify `chown` mode.</li><li>`nfsv41`: true for data, log, and shared volumes, optionally true for data backup and log backup volumes</li><li>`nfsv3`: optionally true for data backup and log backup volumes</li><ul> All other rule values _must_ be left defaulted. |
| `volumeSpecName` | Specifies the type of volume for the application volume group being created | SAP HANA volumes must have a value that is one of the following: <ul><li>"data"</li><li>"log"</li><li>"shared"</li><li>"data-backup"</li><li>"log-backup"</li></ul> |
| `proximityPlacementGroup` | Resource ID of the Proximity Placement Group (PPG) for proper placement of the volume. | <ul><li>The “data”, “log” and “shared” volumes must each have a PPG specified, preferably a common PPG.</li><li>A PPG must be specified for the “data-backup” and “log-backup” volumes, but it will be ignored during placement.</li></ul> |
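Read together, the two tables map onto a request body such as the following. This is a hedged, minimal sketch only: the resource group, account, volume group name, PPG ID, region, API version, and the throughput/size values are hypothetical placeholders; only a single _data_ volume is shown, whereas a real single-host deployment adds _log_, _shared_, and optionally the backup volumes; and the `$subId` and `$authToken` variables are assumed to come from the sample script steps shown later in this article. Verify the exact property set against the full sample script.

```bash
# Hedged sketch of a create request for an SAP HANA application volume group.
# All names, IDs, and the api-version are placeholder assumptions, and the body
# is abbreviated to one data volume.
curl -s -X PUT \
  -H "Authorization: Bearer $authToken" \
  -H "Content-Type: application/json" \
  "https://management.azure.com/subscriptions/$subId/resourceGroups/<rg>/providers/Microsoft.NetApp/netAppAccounts/<account>/volumeGroups/SAP-HANA-SH9?api-version=<api-version>" \
  -d '{
    "location": "westeurope",
    "properties": {
      "groupMetaData": {
        "groupDescription": "Single-host SH9",
        "applicationType": "SAP-HANA",
        "applicationIdentifier": "SH9"
      },
      "volumes": [
        {
          "name": "SH9-data-mnt00001",
          "properties": {
            "volumeSpecName": "data",
            "creationToken": "SH9-data-mnt00001",
            "usageThreshold": 107374182400,
            "throughputMibps": 400,
            "proximityPlacementGroup": "<ppg-resource-id>",
            "exportPolicy": { "rules": [ {
              "ruleIndex": 1, "unixReadOnly": false, "unixReadWrite": true,
              "nfsv41": true, "allowedClients": "0.0.0.0/0", "hasRootAccess": true
            } ] }
          }
        }
      ]
    }
  }'
```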
@@ -145,7 +145,7 @@ In the following examples, selected placeholders are specified. You should repla

SAP HANA volume groups for the following examples can be created using a sample shell script that calls the API using curl:

- 1. Extract the subscription ID. This automates the extraction of the subscription ID and generate the authorization token:
+ 1. Extract the subscription ID. This automates the extraction of the subscription ID and generates the authorization token:
```bash
subId=$(az account list | jq ".[] | select (.name == \"Pay-As-You-Go\") | .id" -r)
echo "Subscription ID: $subId"

articles/azure-netapp-files/configure-application-volume-oracle-api.md

Lines changed: 2 additions & 2 deletions
@@ -12,7 +12,7 @@ ms.service: azure-netapp-files
ms.workload: storage
ms.tgt_pltfrm: na
ms.topic: conceptual
- ms.date: 10/20/2023
+ ms.date: 08/08/2024
ms.author: anfdocs
---
# Configure application volume group for Oracle using REST API
@@ -61,7 +61,7 @@ The following tables describe the request body parameters and volume properties
|---------|---------|---------|
| `creationToken` | Export path name, typically same as the volume name. | `<sid>-ora-data1` |
| `throughputMibps` | QoS throughput | You should set throughput based on volume type between 1 MiBps and 4500 MiBps. |
- | `usageThreshhold` | Size of the volume in bytes. This value must be in the 50 GiB to 100-TiB range. For instance, 100 GiB = 107374182400 bytes. | You should set volume size in bytes. |
+ | `usageThreshold` | Size of the volume in bytes. This value must be in the 50 GiB to 100-TiB range. For instance, 100 GiB = 107374182400 bytes. | You should set volume size in bytes. |
| `exportPolicyRule` | Volume export policy rule | At least one export policy rule must be specified for Oracle. Only the following rules values can be modified for Oracle. The rest *must* have their default values: <br><br> - `unixReadOnly`: should be false. <br><br> - `unixReadWrite`: should be true. <br><br> - `allowedClients`: specify allowed clients. Use `0.0.0.0/0` for no restrictions. <br><br> - `hasRootAccess`: must be true to use root user for installation. <br><br> - `chownMode`: Specify `chown` mode. <br><br> - `Select nfsv41: or nfsv3:`: as true. It's recommended to use the same protocol version for all volumes. <br> <br> All other rule values _must_ be left defaulted. |
| `volumeSpecName` | Specifies the type of volume for the application volume group being created | Oracle volumes must have a value that is one of the following: <br><br> - `ora-data1` <br> - `ora-data2` <br> - `ora-data3` <br> - `ora-data4` <br> - `ora-data5` <br> - `ora-data6` <br> - `ora-data7` <br> - `ora-data8` <br> - `ora-log` <br> - `ora-log-mirror` <br> - `ora-binary` <br> - `ora-backup` <br> |
| `proximityPlacementGroup` | Resource ID of the Proximity Placement Group (PPG) for proper placement of the volume. This parameter is optional. If the region has zones available, then use of zones is always priority. | The `data`, `log` and `mirror-log`, `ora-binary` and `backup` volumes must each have a PPG specified, preferably a common PPG. |

articles/azure-netapp-files/includes/50-gib-volume.md

Lines changed: 2 additions & 2 deletions
@@ -2,7 +2,7 @@
author: b-ahibbard
ms.service: azure-netapp-files
ms.topic: include
- ms.date: 07/23/2024
+ ms.date: 08/08/2024
ms.author: anfdocs
ms.custom: include file
@@ -14,7 +14,7 @@ ms.custom: include file

The ability to set a volume quota between 50 and 100 GiB is currently in preview. You must register for the feature before you can create a 50 GiB volume.

- 1. Register the feature
+ 1. Register the feature:

```azurepowershell-interactive
Register-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANF50GiBVolumeSize
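# Hedged follow-up (not part of this commit): confirm the feature shows a
# RegistrationState of "Registered" before attempting to create a 50 GiB volume.
Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANF50GiBVolumeSize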

articles/sap/workloads/hana-vm-operations-netapp.md

Lines changed: 6 additions & 6 deletions
@@ -24,8 +24,8 @@ Azure NetApp Files provides native NFS shares that can be used for **/hana/share

When considering Azure NetApp Files for the SAP Netweaver and SAP HANA, be aware of the following important considerations:

- - * For volume and capacity pool limits, see [Azure NetApp Files resource limits](../../azure-netapp-files/azure-netapp-files-resource-limits.md).
- - Azure NetAPp Files-based NFS shares and the virtual machines that mount those shares must be in the same Azure Virtual Network or in [peered virtual networks](../../virtual-network/virtual-network-peering-overview.md) in the same region
+ - For volume and capacity pool limits, see [Azure NetApp Files resource limits](../../azure-netapp-files/azure-netapp-files-resource-limits.md).
+ - Azure NetApp Files-based NFS shares and the virtual machines that mount those shares must be in the same Azure Virtual Network or in [peered virtual networks](../../virtual-network/virtual-network-peering-overview.md) in the same region.
- The selected virtual network must have a subnet, delegated to Azure NetApp Files. **For SAP workload, it is highly recommended to configure a /25 range for the subnet delegated to Azure NetApp Files.**
- It's important to have the virtual machines deployed sufficient proximity to the Azure NetApp storage for lower latency as, for example, demanded by SAP HANA for redo log writes.
- Azure NetApp Files meanwhile has functionality to deploy NFS volumes into specific Azure Availability Zones. Such a zonal proximity is going to be sufficient in the majority of cases to achieve a latency of less than 1 millisecond. The functionality is in public preview and described in the article [Manage availability zone volume placement for Azure NetApp Files](../../azure-netapp-files/manage-availability-zone-volume-placement.md). This functionality isn't requiring any interactive process with Microsoft to achieve proximity between your VM and the NFS volumes you allocate.
@@ -92,12 +92,12 @@ As you design the infrastructure for SAP in Azure you should be aware of some mi
| Data Volume Write | 250 MB/sec | 4 TB | 2 TB |
| Data Volume Read | 400 MB/sec | 6.3 TB | 3.2 TB |

- Since all three KPIs are demanded, the **/hana/data** volume needs to be sized toward the larger capacity to fulfill the minimum read requirements. if you're using manual QoS capacity pools, the size and throughput of the volumes can be defined independently. Since both capacity and throughput are taken from the same capacity pool, the pool‘s service level and size must be large enough to deliver the total performance (see example [here](../../azure-netapp-files/azure-netapp-files-understand-storage-hierarchy.md#manual-qos-type))
+ Since all three KPIs are demanded, the **/hana/data** volume needs to be sized toward the larger capacity to fulfill the minimum read requirements. If you're using manual QoS capacity pools, the size and throughput of the volumes can be defined independently. Since both capacity and throughput are taken from the same capacity pool, the pool‘s service level and size must be large enough to deliver the total performance (see example [here](../../azure-netapp-files/azure-netapp-files-understand-storage-hierarchy.md#manual-qos-type))

For HANA systems, which aren't requiring high bandwidth, the Azure NetApp Files volume throughput can be lowered by either a smaller volume size or, using manual QoS, by adjusting the throughput directly. And in case a HANA system requires more throughput the volume could be adapted by resizing the capacity online. No KPIs are defined for backup volumes. However the backup volume throughput is essential for a well performing environment. Log – and Data volume performance must be designed to the customer expectations.

> [!IMPORTANT]
- > Independent of the capacity you deploy on a single NFS volume, the throughput, is expected to plateau in the range of 1.2-1.4 GB/sec bandwidth utilized by a consumer in a single session. This has to do with the underlying architecture of the Azure NetApp Files offer and related Linux session limits around NFS. The performance and throughput numbers as documented in the article [Performance benchmark test results for Azure NetApp Files](../../azure-netapp-files/performance-benchmarks-linux.md) were conducted against one shared NFS volume with multiple client VMs and as a result with multiple sessions. That scenario is different to the scenario we measure in SAP where we measure throughput from a single VM against an NFS volume hosted on Azure NetApp Files.
+ > Independent of the capacity you deploy on a single NFS volume, the throughput is expected to plateau in the range of 1.2-1.4 GB/sec bandwidth utilized by a consumer in a single session. This has to do with the underlying architecture of the Azure NetApp Files offer and related Linux session limits around NFS. The performance and throughput numbers as documented in the article [Performance benchmark test results for Azure NetApp Files](../../azure-netapp-files/performance-benchmarks-linux.md) were conducted against one shared NFS volume with multiple client VMs and as a result with multiple sessions. That scenario is different to the scenario we measure in SAP where we measure throughput from a single VM against an NFS volume hosted on Azure NetApp Files.

To meet the SAP minimum throughput requirements for data and log, and according to the guidelines for **/hana/shared**, the recommended sizes would look like:

@@ -173,11 +173,11 @@ The order of deployment would look like:

The proximity placement group configuration to use AVGs in an optimal way would look like:

- ![Azure NetApp Files application volume group and ppg architecture](media/hana-vm-operations-netapp/avg-ppg-architecture.png)
+ ![Diagram of Azure NetApp Files application volume group and proximity placement group architecture.](media/hana-vm-operations-netapp/avg-ppg-architecture.png)

The diagram shows that you're going to use an Azure proximity placement group for the DBMS layer. So, that it can get used together with AVGs. It's best to just include only the VMs that run the HANA instances in the proximity placement group. The proximity placement group is necessary, even if only one VM with a single HANA instance is used, for the AVG to identify the closest proximity of the Azure NetApp Files hardware. And to allocate the NFS volume on Azure NetApp Files as close as possible to the VM(s) that are using the NFS volumes.

- This method generates the most optimal results as it relates to low latency. Not only by getting the NFS volumes and VMs as close together as possible. But considerations of placing the data and redo log volumes across different controllers on the NetApp backend are taken into account as well. Though, the disadvantage is that your VM deployment is pinned down to one datacenter. With that you're losing flexibilities in changing VM types and families. As a result, you should limit this method to the systems that absolutely require such low storage latency. For all other systems, you should attempt the deployment with a traditional zonal deployment of the VM and Azure NetApp Files. In most cases this is sufficient in terms of low latency. This also ensures a easy maintenance and administration of the VM and Azure NetApp Files.
+ This method generates the most optimal results as it relates to low latency. Not only by getting the NFS volumes and VMs as close together as possible. But considerations of placing the data and redo log volumes across different controllers on the NetApp backend are taken into account as well. Though, the disadvantage is that your VM deployment is pinned down to one datacenter. With that you're losing flexibilities in changing VM types and families. As a result, you should limit this method to the systems that absolutely require such low storage latency. For all other systems, you should attempt the deployment with a traditional zonal deployment of the VM and Azure NetApp Files. In most cases this is sufficient in terms of low latency. This also ensures easy maintenance and administration of the VM and Azure NetApp Files.
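As a rough, hypothetical sketch of the pinning approach described above (the resource group, names, region, VM size, and image are placeholders; the AVG deployment then references this PPG through its `proximityPlacementGroup` parameter):

```bash
# Hedged sketch: create a proximity placement group and place the HANA VM in it.
# All names, the region, VM size, and image are illustrative placeholders.
az ppg create --resource-group myRG --name hana-ppg --location westeurope
az vm create --resource-group myRG --name hana-vm1 \
  --size Standard_M128s --image <hana-certified-image> \
  --ppg hana-ppg --vnet-name hana-vnet --subnet hana-subnet
```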

## Availability
ANF system updates and upgrades are applied without impacting the customer environment. The defined [SLA is 99.99%](https://azure.microsoft.com/support/legal/sla/netapp/).

articles/sap/workloads/planning-guide-storage.md

Lines changed: 3 additions & 3 deletions
@@ -19,7 +19,7 @@ Remark about the units used throughout this article. The public cloud vendors mo

## Microsoft Azure Storage resiliency

- Microsoft Azure storage of Standard HDD, Standard SSD, Azure premium storage, Premium SSD v2, and Ultra disk keeps the base VHD (with OS) and VM attached data disks or VHDs (Virtual Hard Disk) in three copies on three different storage nodes. Failing over to another replica and seeding of a new replica if there's a storage node failure, is transparent. As a result of this redundancy, it's **NOT** required to use any kind of storage redundancy layer across multiple Azure disks. This fact is called Local Redundant Storage (LRS). LRS is default for these types of storage in Azure. [Azure NetApp Files](https://azure.microsoft.com/services/netapp/) provides sufficient redundancy to achieve the same SLAs (Serive Level Agreements) as other native Azure storage.
+ Microsoft Azure storage of Standard HDD, Standard SSD, Azure premium storage, Premium SSD v2, and Ultra disk keeps the base VHD (with OS) and VM attached data disks or VHDs (Virtual Hard Disk) in three copies on three different storage nodes. Failing over to another replica and seeding of a new replica if there's a storage node failure, is transparent. As a result of this redundancy, it's **NOT** required to use any kind of storage redundancy layer across multiple Azure disks. This fact is called Local Redundant Storage (LRS). LRS is default for these types of storage in Azure. [Azure NetApp Files](https://azure.microsoft.com/services/netapp/) provides sufficient redundancy to achieve the same SLAs (Service Level Agreements) as other native Azure storage.

There are several more redundancy methods, which are all described in the article [Azure Storage replication](../../storage/common/storage-redundancy.md?toc=%2fazure%2fstorage%2fqueues%2ftoc.json) that applies to some of the different storage types Azure has to offer.
@@ -129,7 +129,7 @@ The capability matrix for SAP workload looks like:
| Backup storage | Suitable | For short term storage of backups |
| Shares/shared disk | Not available | Needs Azure Premium Files or third party |
| Resiliency | LRS | No GRS or ZRS available for disks |
- | Latency | Low-to medium | - |
+ | Latency | Low to medium | - |
| IOPS SLA | Yes | - |
| IOPS linear to capacity | semi linear in brackets | [Managed Disk pricing](https://azure.microsoft.com/pricing/details/managed-disks/) |
| Maximum IOPS per disk | 20,000 [dependent on disk size](https://azure.microsoft.com/pricing/details/managed-disks/) | Also consider [VM limits](../../virtual-machines/sizes.md) |
@@ -269,7 +269,7 @@ For information about service levels, see [Service levels for Azure NetApp Files
For optimal results, use [Application volume group for SAP HANA](../../azure-netapp-files/application-volume-group-introduction.md) to deploy the volumes. Application volume group places volumes in optimal locations in the Azure infrastructure using affinity and anti-affinity rules to reduce contention and to allow for the best throughput and lowest latency.

> [!NOTE]
- > Capacity pools are a basic provisioning unit for Azure NetApp Files. Capacity pools are offered beginning at 1 TiB in size; you can expand a capacity pool in 1-TiB increments. Capacity pools are the parent unit for volumes. For sizing information, see [Azure NetApp Files resource limits](../../azure-netapp-files/azure-netapp-files-resource-limits.md). For pricing, see [Azure NetApp Files Pricing](https://azure.microsoft.com/pricing/details/netapp/)
+ > Capacity pools are a basic provisioning unit for Azure NetApp Files. Capacity pools are offered beginning at 1 TiB in size; you can expand a capacity pool in 1-TiB increments. Capacity pools are the parent unit for volumes. For sizing information, see [Azure NetApp Files resource limits](../../azure-netapp-files/azure-netapp-files-resource-limits.md). For pricing, see [Azure NetApp Files Pricing](https://azure.microsoft.com/pricing/details/netapp/).

Azure NetApp Files is supported for several SAP workload scenarios: