Commit 267efe2

Merge pull request #280150 from b-ahibbard/anf-sap
revise anf section
2 parents 627c16e + 65d1a42 commit 267efe2

File tree

1 file changed: +47 −43 lines

articles/sap/workloads/planning-guide-storage.md

Lines changed: 47 additions & 43 deletions
@@ -7,7 +7,7 @@ ms.assetid: d7c59cc1-b2d0-4d90-9126-628f9c7a5538
 ms.service: sap-on-azure
 ms.subservice: sap-vm-workloads
 ms.topic: article
-ms.date: 07/17/2024
+ms.date: 07/25/2024
 ms.author: juergent
 ms.custom: H1Hack27Feb2017
 ---
@@ -252,34 +252,43 @@ The capability matrix for SAP workload looks like:
 
 ## Azure NetApp Files
 
-[Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-introduction.md) is an Azure native, first-party, enterprise-class, high-performance file storage service certified for use with SAP HANA. It provides _Volumes as a service_ for which you can create NetApp accounts, capacity pools, and volumes. You can select service and performance levels and manage data protection. You can create and manage high-performance, highly available, and scalable file shares by using the same protocols and tools that you're familiar with and rely on on-premises.
+[Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-introduction.md) is an Azure native, first-party, enterprise-class, high-performance file storage service certified for use with SAP HANA. It provides _Volumes as a service_ for which you create NetApp accounts, capacity pools, and volumes. With Azure NetApp Files, you select service and performance levels and manage data protection to create and manage high-performance, highly available, and scalable file shares by using the same protocols and tools that you're familiar with and rely on on-premises.
 
+The following types of SAP workload are supported on Azure NetApp Files volumes:
 
-For information about service levels, see [Service levels for Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-service-levels.md). For the different types of SAP workload the following service levels are highly recommended:
+- SAP DBMS workload
+- SAPMNT share
+- Global transport directory
 
-- SAP DBMS workload: Performance, ideally Ultra
-- SAPMNT share: Performance, ideally Ultra
-- Global transport directory: Performance, ideally Ultra
+Azure NetApp Files is available in three service levels, each with its own throughput and pricing specifications. The right level for your deployment depends on its size. Customized sizing recommendations are available in the [SAP on Azure NetApp Files TCO Estimator](https://aka.ms/anfsapcalc).
 
-Sizing recommendations are available in the [SAP on Azure NetApp Files TCO Estimator](https://aka.ms/anfsapcalc).
+For information about service levels, see [Service levels for Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-service-levels.md).
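As a quick reference for the revised text, the per-TiB throughput of the three service levels can be sketched in a few lines (the 16/64/128 MiB/s-per-TiB figures come from the linked service-levels article; the helper function is illustrative, not part of any Azure SDK):

```python
# Auto-QoS throughput per provisioned TiB for the three Azure NetApp Files
# service levels, in MiB/s per TiB (per the service-levels article).
SERVICE_LEVELS = {"Standard": 16, "Premium": 64, "Ultra": 128}

def pool_throughput(service_level: str, pool_tib: int) -> int:
    """Throughput (MiB/s) of an auto-QoS capacity pool of pool_tib TiB."""
    return SERVICE_LEVELS[service_level] * pool_tib

# Example: a 4-TiB Premium pool yields 256 MiB/s.
print(pool_throughput("Premium", 4))  # 256
```

With auto QoS, throughput scales linearly with pool size; the manual-QoS discussion later in the section shows how to break that coupling.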
+
+### Deploying volumes
+
+For optimal results, use [Application volume group for SAP HANA](../../azure-netapp-files/application-volume-group-introduction.md) to deploy the volumes. Application volume group places volumes in optimal locations in the Azure infrastructure, using affinity and anti-affinity rules to reduce contention and to allow for the best throughput and lowest latency.
 
 > [!NOTE]
-> The minimum provisioning size is a 1 TiB unit that is called capacity pool. You then create volumes out of this capacity pool. Whereas the smallest volume you can build is 100 GiB. You can expand a capacity pool in 1 TiB see. For pricing, see [Azure NetApp Files Pricing](https://azure.microsoft.com/pricing/details/netapp/)
+> Capacity pools are the basic provisioning unit for Azure NetApp Files. Capacity pools start at 1 TiB in size, and you can expand a capacity pool in 1-TiB increments. Capacity pools are the parent unit for volumes; the smallest volume size is 100 GiB. For pricing, see [Azure NetApp Files pricing](https://azure.microsoft.com/pricing/details/netapp/).
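A minimal sketch of these provisioning rules (1-TiB minimum pool size with 1-TiB increments, 100-GiB minimum volume size carved from the pool; the helper names are hypothetical, not an Azure API):

```python
# Validate the Azure NetApp Files provisioning rules described in the note:
# pools start at 1 TiB and grow in whole-TiB increments; volumes are carved
# from a pool and must be between 100 GiB and the pool's capacity.
TIB_IN_GIB = 1024

def valid_pool_size(pool_tib: int) -> bool:
    """Pools are provisioned in whole TiB units, minimum 1 TiB."""
    return pool_tib >= 1

def valid_volume(pool_tib: int, volume_gib: int) -> bool:
    """A volume must be at least 100 GiB and fit inside its pool."""
    return 100 <= volume_gib <= pool_tib * TIB_IN_GIB

print(valid_volume(1, 100))  # True: smallest volume in smallest pool
print(valid_volume(1, 50))   # False: below the 100-GiB volume minimum
```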
 
-Azure NetApp Files is currently supported for several SAP workload scenarios:
+Azure NetApp Files is supported for several SAP workload scenarios:
 
+- SAP HANA deployments using NFS shares for /hana/data, /hana/log, and /hana/shared volumes as documented in [SAP HANA Azure virtual machine storage configurations](hana-vm-operations-storage.md)
 - Providing SMB or NFS shares for SAP's global transport directory
 - The share sapmnt in high availability scenarios as documented in:
   - [High availability for SAP NetWeaver on Azure VMs on Windows with Azure NetApp Files (SMB) for SAP applications](./high-availability-guide-windows-netapp-files-smb.md)
   - [High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise Server with Azure NetApp Files for SAP applications](./high-availability-guide-suse-netapp-files.md)
   - [Azure Virtual Machines high availability for SAP NetWeaver on Red Hat Enterprise Linux with Azure NetApp Files for SAP applications](./high-availability-guide-rhel-netapp-files.md)
-- SAP HANA deployments using NFS v4.1 shares for /hana/data and /hana/log volumes and/or NFS v4.1 or NFS v3 volumes for /hana/shared volumes as documented in the article [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md)
-- IBM Db2 in Suse or Red Hat Linux guest OS
-- Oracle deployments in Oracle Linux guest OS using [dNFS](https://docs.oracle.com/en/database/oracle/oracle-database/19/ntdbi/creating-an-oracle-database-on-direct-nfs.html#GUID-2A0CCBAB-9335-45A8-B8E3-7E8C4B889DEA) for Oracle data and redo log volumes. Some more details can be found in the article [Azure Virtual Machines Oracle DBMS deployment for SAP workload](./dbms-guide-oracle.md)
-- SAP ASE in Suse or Red Hat Linux guest OS
+- IBM Db2 in a SUSE or Red Hat Linux-based Azure VM
+- SAP on Oracle deployments in an Oracle Linux guest OS using [dNFS](https://docs.oracle.com/en/database/oracle/oracle-database/19/ntdbi/creating-an-oracle-database-on-direct-nfs.html#GUID-2A0CCBAB-9335-45A8-B8E3-7E8C4B889DEA) for Oracle data and redo log volumes. More details can be found in [Azure Virtual Machines Oracle DBMS deployment for SAP workload](./dbms-guide-oracle.md)
+- SAP on ASE in a SUSE or Red Hat Linux guest OS
+- SAP on MaxDB in a SUSE or Red Hat Linux guest OS
+- SAP on Microsoft SQL Server with SMB volumes

 > [!NOTE]
-> So far no DBMS workloads are supported on SMB based on Azure NetApp Files.
+> For DBMS workloads on Linux, use NFS-based volumes on Azure NetApp Files.
+
+### Decoupling throughput from volume size
 
 Storage for database applications typically has throughput requirements that don't scale linearly with volume size; for example, log volumes are relatively small but require high levels of throughput.

@@ -290,55 +299,50 @@ Here's an example:
 - A volume for database files requires 500 MiB/s throughput and 39 TiB capacity
 - A volume for log files requires 2000 MiB/s throughput and 1 TiB capacity
 
-You can create a manual QoS capacity pool for this scenario and allocate throughput independently of the volume sizes. The total capacity required is 40 TiB, and the total throughput is 2500 MiB/s. A capacity pool in the Premium service level (64 MiB/s per allocated TiB) accommodates both performance and capacity requirements (40 TiB * 64 TiB/s/TiB = 2560 TiB).
+You can create a manual QoS capacity pool for this scenario and allocate throughput independently of the volume sizes. The total capacity required is 40 TiB, and the total throughput budget is 2500 MiB/s. A capacity pool in the Premium service level (64 MiB/s per allocated TiB) accommodates both performance and capacity requirements (40 TiB * 64 MiB/s/TiB = 2560 MiB/s).
 
-Linear performance scaling would require considerable overprovisioning of the log volume to achieve the throughput requirement. To achieve the 2000 MiB/s throughput for the log volume, you'd need to deploy a capacity pool in the Ultra tier (128 MiB/s per allocated TiB) of 16 TiB, resulting in a wasted capacity of 15 TiB.
+Linear performance scaling would require considerable overprovisioning of the log volume to achieve the throughput requirement. To achieve the 2000 MiB/s throughput for the log volume, you'd need to deploy a 16-TiB capacity pool in the Ultra tier (128 MiB/s per allocated TiB), resulting in overprovisioning and a wasted capacity of 15 TiB.
 
 Use the [Azure NetApp Files Performance Calculator](https://aka.ms/anfcalc) to get an estimate for your scenario.
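The arithmetic in this example can be checked with a short sketch (values taken from the example above; `math.ceil` rounds the Ultra pool up to whole TiB units):

```python
import math

# Requirements from the example: data 39 TiB / 500 MiB/s, log 1 TiB / 2000 MiB/s.
total_tib = 39 + 1        # 40 TiB total capacity
total_mibs = 500 + 2000   # 2500 MiB/s total throughput budget

# Manual QoS Premium pool: 64 MiB/s per allocated TiB.
premium_pool_mibs = total_tib * 64
print(premium_pool_mibs)  # 2560 -> covers the 2500 MiB/s budget

# Linear (auto QoS) Ultra pool sized for the log volume alone: 128 MiB/s per TiB.
ultra_tib_needed = math.ceil(2000 / 128)
print(ultra_tib_needed)       # 16 TiB pool for a 1-TiB volume
print(ultra_tib_needed - 1)   # 15 TiB of wasted capacity
```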

-The capability matrix for SAP workload looks like:
+The capability matrix for SAP workload on Azure NetApp Files looks like:
 
 | Capability| Comment| Notes/Links |
 | --- | --- | --- |
-| OS base VHD | Doesn't work | - |
-| Data disk | Suitable | SAP HANA, Oracle on Oracle Linux, Db2 and SAP ASE on SLES/RHEL |
-| SAP global transport directory | Yes | SMB and NFS |
-| SAP sapmnt | Suitable | All systems SMB (Windows only) or NFS (Linux only) |
-| Backup storage | Suitable | - |
-| Shares/shared disk | Yes | SMB 3.0, NFS v3, and NFS v4.1 |
-| Resiliency | LRS and GRS | [GRS available](../../azure-netapp-files/cross-region-replication-introduction.md) |
+| OS base VHD | Use managed disk | - |
+| Data disk | Suitable | SAP HANA, Oracle on Oracle Linux, Db2 and SAP ASE on SLES/RHEL, MaxDB, SQL Server |
+| SAP global transport directory | Yes | SMB (Windows only) and NFS (Linux only) |
+| SAP sapmnt | Suitable | SMB (Windows only) or NFS (Linux only) |
+| Backup storage | Suitable | Use snapshots and/or Azure NetApp Files backup; volumes can also serve as a file-based destination for HANA log backups |
+| Shares/shared disk | Yes | SMB, NFS |
+| Resiliency | LRS and GRS | [GRS with cross-region replication](../../azure-netapp-files/cross-region-replication-introduction.md); [ZRS with cross-zone replication](../../azure-netapp-files/cross-zone-replication-introduction.md) |
 | Latency | Very low | Typically less than 1 ms |
 | IOPS SLA | Yes | - |
-| IOPS linear to capacity | Linear with auto QoS; independent with Manual QoS | Three [service levels](../../azure-netapp-files/azure-netapp-files-service-levels.md) available |
-| Throughput SLA | Yes | Sizing recommendations are available in the SAP on Azure NetApp FIles TCO Estimator |
-| Throughput linear to capacity | Linear with auto QoS; independent with Manual QoS | Three [service levels](../../azure-netapp-files/azure-netapp-files-service-levels.md) available |
-| HANA certified | Yes | - |
-| Disk snapshots possible | Yes | - |
-| Azure Backup VM snapshots possible | No | Use [AzAcSnap](../../azure-netapp-files/azacsnap-introduction.md) or [SnapCenter](https://docs.netapp.com/us-en/snapcenter/concept/concept_snapcenter_overview.html) |
-| Costs | Competitive when including benefits of snapshots and integrated backup | - |
+| IOPS linear to capacity | Linear with auto QoS; independently configurable with manual QoS | Three [service levels](../../azure-netapp-files/azure-netapp-files-service-levels.md) available |
+| Throughput SLA | Yes | Sizing recommendations are available in the [SAP on Azure NetApp Files TCO Estimator](https://aka.ms/anfsapcalc) |
+| Throughput linear to capacity | Linear with auto QoS; independently configurable with manual QoS | Three [service levels](../../azure-netapp-files/azure-netapp-files-service-levels.md) available |
+| HANA certified | [Yes](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/#/solutions?filters=v:deCertified;ve:24&sort=Latest%20Certification&sortDesc=true) | - |
+| Disk snapshots possible | Yes | See [How Azure NetApp Files snapshots work](../../azure-netapp-files/snapshots-introduction.md) |
+| Application consistent snapshot and backup orchestration | No | Use [AzAcSnap](../../azure-netapp-files/azacsnap-introduction.md) or [SnapCenter](https://docs.netapp.com/us-en/snapcenter/concept/concept_snapcenter_overview.html) |
+| Costs | Use TCO estimation tools | Use the [SAP on Azure NetApp Files TCO Estimator](https://aka.ms/anfsapcalc) and enter the size of the landscape |

 Other built-in functionality of Azure NetApp Files storage:
 
-- Capability to perform snapshots of volume
-- Cloning of Azure NetApp Files volumes from snapshots
-- Restore volumes from snapshots (snap-revert)
-- [Application consistent Snapshot backup for SAP HANA and Oracle](../../azure-netapp-files/azacsnap-introduction.md)
+- Capability to perform application-consistent [snapshots](../../azure-netapp-files/snapshots-introduction.md) of volumes using [AzAcSnap](../../azure-netapp-files/azacsnap-introduction.md)
+- Cloning of Azure NetApp Files [volumes from snapshots](../../azure-netapp-files/snapshots-restore-new-volume.md) for testing and development
+- Restoring [volumes from snapshots (snap-revert)](../../azure-netapp-files/snapshots-revert-volume.md) for rapid recovery from corruption and errors

 > [!IMPORTANT]
 > Specifically for database deployments, you want to achieve low latencies for at least your redo logs. Especially for SAP HANA, SAP requires a latency of less than 1 millisecond for HANA redo log writes of smaller sizes. To get to such latencies, see the possibilities below.
 
 > [!IMPORTANT]
-> Even for non-DBMS usage, you should use the functionality that allows you to create the NFS share in the same Azure Availability Zones as you placed your VM(s) that should mount the NFS shares into. This functionality is documented in the article [Manage availability zone volume placement for Azure NetApp Files](../../azure-netapp-files/manage-availability-zone-volume-placement.md). The motivation to have this type of Availability Zone alignment is the reduction of risk surface by having the NFS shares yet in another AvZone where you don't run VMs in.
-
-- You go for the closest proximity between VM and NFS share that can be arranged by using [Application Volume Groups](../../azure-netapp-files/application-volume-group-introduction.md). The advantage of Application Volume Groups, besides allocating best proximity and with that creating lowest latency, is that your different NFS shares for SAP HANA deployments are distributed across different controllers in the Azure NetApp Files backend clusters. Disadvantage of this method is that you need to go through a pinning process again. A process that ends restricting your VM deployment to a single datacenter. Instead of an Availability Zones as the first method introduced. This means less flexibility in changing VM sizes and VM families of the VMs that have the NFS volumes mounted.
-- Current process of not using Availability Placement Groups. Which so far are available for SAP HANA only. This process also uses the same manual pinning process as this is the case with Availability Volume groups. This method is the method used for the last three years. It has the same flexibility restrictions as the process has with Availability Volume Groups.
+> When deploying Azure NetApp Files volumes, take note of the zone in which the virtual machines are or will be deployed, and select the same zone. This functionality is documented in [Manage availability zone volume placement for Azure NetApp Files](../../azure-netapp-files/manage-availability-zone-volume-placement.md). Application volume group for SAP HANA uses the same functionality to deploy the volumes in the closest possible proximity to the application VMs.
 
-As preferences for allocating NFS volumes based on Azure NetApp Files for database specific usage, you should attempt to allocate the NFS volume in the same zone as your VM first. Especially for non-HANA databases. Only if latency proves to be insufficient you should go through a manual pinning process. For smaller HANA workload or nonproduction HANA workload, you should follow a zonal allocation method as well. Only in cases where performance and latency aren't sufficient you should use Application Volume Groups.
+The motivation for this type of availability zone alignment is to reduce the risk surface by keeping the NFS shares in the same availability zone as the application VMs.
 
+* Deploy Azure NetApp Files volumes for your SAP HANA deployment using [application volume group for SAP HANA](../../azure-netapp-files/application-volume-group-introduction.md). The advantage of application volume group is that data volumes are deployed over multiple storage endpoints, reducing network contention and improving performance.
 
-**Summary**: Azure NetApp Files is a HANA certified low latency storage that allows to deploy NFS and SMB volumes or shares. The storage comes with three different service levels that provide different throughput and IOPS in a linear manner per GiB capacity of the volume. The Azure NetApp Files storage is enabling to deploy SAP HANA scale-out scenarios with a standby node. The storage is suitable for providing file shares as needed for /sapmnt or SAP global transport directory. Azure NetApp Files storage come with functionality availability that is available as native NetApp functionality.
+**Summary**: Azure NetApp Files is a certified low-latency storage solution for SAP HANA. The service provides volumes carved out of one or more capacity pools. Capacity pools are available in three service levels, which define the total capacity and throughput allocated. Volumes can be resized, and allocated throughput can be adjusted, without service interruption to cater to changing requirements and to control cost. The service provides functionality to replicate volumes to other regions or zones for disaster recovery and business continuity purposes.

343347
## Azure Premium Files
344348
[Azure Premium Files](../../storage/files/storage-files-planning.md) is a shared storage that offers SMB and NFS for a moderate price and sufficient latency to handle shares of the SAP application layer. On top, Azure premium Files offers synchronous zonal replication of the shares with an automatism that in case one replica fails, another replica in another zone can take over. In opposite to Azure NetApp Files, there are no performance tiers. There also is no need for a capacity pool. Charging is based on the real provisioned capacity of the different shares. Azure Premium Files haven't been tested as DBMS storage for SAP workload at all. But instead the usage scenario for SAP workload focused on all types of SMB and NFS shares as they're used on the SAP application layer. Azure Premium Files is also suited for the usage for **/hana/shared**.
