articles/sap/workloads/get-started.md
Lines changed: 3 additions & 1 deletion
@@ -6,7 +6,7 @@ ms.service: sap-on-azure
 author: msjuergent
 manager: bburns
 ms.topic: article
-ms.date: 04/01/2024
+ms.date: 06/28/2024
 ms.author: juergent
 ---
 
@@ -55,6 +55,8 @@ In the SAP workload documentation space, you can find the following areas:
 
 ## Change Log
 
+- June 26, 2024: Adapt [Azure Storage types for SAP workload](./planning-guide-storage.md) to the latest features, like snapshot capabilities for Premium SSD v2 and Ultra disk. Adapt the Azure NetApp Files guidance to support a mix of NFS and block storage between /hana/data and /hana/log.
+- June 26, 2024: Fix wrong memory stated for some VMs in [SAP HANA Azure virtual machine Premium SSD storage configurations](./hana-vm-premium-ssd-v1.md) and [SAP HANA Azure virtual machine Premium SSD v2 storage configurations](./hana-vm-premium-ssd-v2.md).
 - May 21, 2024: Update timeouts and add a start delay for pacemaker scheduled events in [Set up Pacemaker on RHEL in Azure](./high-availability-guide-rhel-pacemaker.md) and [Set up Pacemaker on SUSE Linux Enterprise Server (SLES) in Azure](./high-availability-guide-suse-pacemaker.md).
 - April 1, 2024: Reference the considerations section for sizing the HANA shared file system in [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md), [SAP HANA Azure virtual machine Premium SSD storage configurations](./hana-vm-premium-ssd-v1.md), [SAP HANA Azure virtual machine Premium SSD v2 storage configurations](./hana-vm-premium-ssd-v2.md), and [Azure Files NFS for SAP](planning-guide-storage-azure-files.md).
 - March 18, 2024: Added considerations for sizing the HANA shared file system in [SAP HANA Azure virtual machine storage configurations](./hana-vm-operations-storage.md).
-| DBMS Data volume SAP HANA M/Mv2 VM families | Not supported | Not supported | Recommended | Recommended | Recommended | Recommended<sup>2</sup>| Not supported |
-| DBMS log volume SAP HANA M/Mv2 VM families | Not supported | Not supported | Recommended<sup>1</sup> | Recommended | Recommended | Recommended<sup>2</sup>| Not supported |
-| DBMS Data volume SAP HANA Esv3/Edsv4 VM families | Not supported | Not supported | Recommended | Recommended | Recommended | Recommended<sup>2</sup>| Not supported |
-| DBMS log volume SAP HANA Esv3/Edsv4 VM families | Not supported | Not supported | Not supported | Recommended | Recommended | Recommended<sup>2</sup>| Not supported |
-| HANA shared volume | Not supported | Not supported | Recommended | Recommended | Recommended | Recommended | Recommended<sup>3</sup>|
+| DBMS Data volume SAP HANA M/Mv2 VM families | Not supported | Not supported | Recommended | Recommended | Recommended | Recommended | Not supported |
+| DBMS log volume SAP HANA M/Mv2 VM families | Not supported | Not supported | Recommended<sup>1</sup> | Recommended | Recommended | Recommended | Not supported |
+| DBMS Data volume SAP HANA Esv3/Edsv4 VM families | Not supported | Not supported | Recommended | Recommended | Recommended | Recommended | Not supported |
+| DBMS log volume SAP HANA Esv3/Edsv4 VM families | Not supported | Not supported | Not supported | Recommended | Recommended | Recommended | Not supported |
+| HANA shared volume | Not supported | Not supported | Recommended | Recommended | Recommended | Recommended | Recommended |
 | DBMS Data volume non-HANA | Not supported | Restricted suitable (non-prod) | Recommended | Recommended | Recommended | Only for specific Oracle releases on Oracle Linux, Db2 and SAP ASE on SLES/RHEL Linux | Not supported |
 | DBMS log volume non-HANA M/Mv2 VM families | Not supported | Restricted suitable (non-prod) | Recommended<sup>1</sup> | Recommended | Recommended | Only for specific Oracle releases on Oracle Linux, Db2 and SAP ASE on SLES/RHEL Linux | Not supported |
 | DBMS log volume non-HANA non-M/Mv2 VM families | Not supported | Restricted suitable (non-prod) | Suitable for up to medium workload | Recommended | Recommended | Only for specific Oracle releases on Oracle Linux, Db2 and SAP ASE on SLES/RHEL Linux | Not supported |
 
 <sup>1</sup> With usage of [Azure Write Accelerator](../../virtual-machines/how-to-enable-write-accelerator.md) for M/Mv2 VM families for log/redo log volumes
 
-<sup>2</sup> Using Azure NetApp Files requires /hana/data and /hana/log to be on Azure NetApp Files
-
-<sup>3</sup> So far tested on SLES only
 
 Characteristics you can expect from the different storage types are listed below:
@@ -87,7 +84,7 @@ Characteristics you can expect from the different storage types list like:
 | Latency Reads | High | Medium to high | Low | Submillisecond | Submillisecond | Submillisecond | Low |
 | Latency Writes | High | Medium to high | Low (submillisecond<sup>1</sup>) | Submillisecond | Submillisecond | Submillisecond | Low |
 | HANA supported | No | No | Yes<sup>1</sup> | Yes | Yes | Yes | No |
-| Disk snapshots possible | Yes | Yes | Yes |No| No | Yes | No |
+| Disk snapshots possible | Yes | Yes | Yes | Yes<sup>3</sup> | No<sup>2</sup> | Yes | No |
 | Allocation of disks on different storage clusters when using availability sets | Through managed disks | Through managed disks | Through managed disks | Disk type not supported with VMs deployed through availability sets | Disk type not supported with VMs deployed through availability sets | No<sup>3</sup> | No |
 | Aligned with Availability Zones | Yes | Yes | Yes | Yes | Yes | In public preview | No |
 | Synchronous Zonal redundancy | Not for managed disks | Not for managed disks | Not supported for DBMS | No | No | No | Yes |
@@ -97,9 +94,9 @@ Characteristics you can expect from the different storage types list like:
 <sup>1</sup> With usage of [Azure Write Accelerator](../../virtual-machines/how-to-enable-write-accelerator.md) for M/Mv2 VM families for log/redo log volumes
 
-<sup>2</sup> Costs depend on provisioned IOPS and throughput
+<sup>2</sup> Creation of different Azure NetApp Files capacity pools doesn't guarantee deployment of capacity pools onto different storage units
 
-<sup>3</sup> Creation of different Azure NetApp Files capacity pools doesn't guarantee deployment of capacity pools onto different storage units
+<sup>3</sup> (Incremental) Snapshots of a Premium SSD v2 or an Ultra disk can't be used immediately after they're created. The background copy must complete before you can create a disk from the snapshot.
 
 > [!IMPORTANT]
@@ -202,10 +199,12 @@ The capability matrix for SAP workload looks like:
 | HANA certified | Yes | - |
 | Azure Write Accelerator support | No | - |
 | Disk bursting | No | - |
-| Disk snapshots possible |No| - |
-| Azure Backup VM snapshots possible |No| - |
+| Disk snapshots possible | Yes<sup>1</sup> | - |
+| Azure Backup VM snapshots possible | Yes | - |
 | Costs | Medium | - |
 
+<sup>1</sup> (Incremental) Snapshots of a Premium SSD v2 or an Ultra disk can't be used immediately after they're created. The background copy must complete before you can create a disk from the snapshot.
 
 In contrast to Azure premium storage, Azure Premium SSD v2 fulfills SAP HANA storage latency KPIs. As a result, you **DON'T need to use Azure Write Accelerator caching** as described in the article [Enable Write Accelerator](../../virtual-machines/how-to-enable-write-accelerator.md).
 
 **Summary:** Azure Premium SSD v2 is the block storage that offers the best price/performance ratio for SAP workloads and is well suited to handle database workloads. Its submillisecond latency makes it ideal for demanding DBMS workloads. However, it's a newer storage type, released in November 2022, so some limitations might still exist that are expected to go away over time.
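As the snapshot footnote above notes, an incremental snapshot of a Premium SSD v2 or Ultra disk is only usable once its background copy completes. A minimal sketch of the wait logic, assuming a status callback such as one wrapping the Azure CLI's `az snapshot show --query completionPercent` (the callback itself is hypothetical):

```python
import time

def wait_for_snapshot_copy(get_completion_percent, timeout_s=3600.0, interval_s=15.0):
    """Poll a completion-percent callback until the snapshot's background
    copy reaches 100%, or the timeout expires. Returns True on success."""
    deadline = time.monotonic() + timeout_s
    while True:
        if get_completion_percent() >= 100.0:
            return True
        if time.monotonic() + interval_s > deadline:
            return False
        time.sleep(interval_s)
```

Only after this returns True would you create a disk from the snapshot.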
@@ -242,13 +241,14 @@ The capability matrix for SAP workload looks like:
 | Throughput linear to capacity | Semi linear in brackets | [Managed Disk pricing](https://azure.microsoft.com/pricing/details/managed-disks/) |
 | HANA certified | Yes | - |
 | Azure Write Accelerator support | No | - |
-| Disk bursting |No| - |
-| Disk snapshots possible |No| - |
-| Azure Backup VM snapshots possible |No| - |
+| Disk bursting | Yes | - |
+| Disk snapshots possible | Yes<sup>1</sup> | - |
+| Azure Backup VM snapshots possible | Yes | - |
 | Costs | Higher than Premium storage | - |
 
+<sup>1</sup> (Incremental) Snapshots of a Premium SSD v2 or an Ultra disk can't be used immediately after they're created. The background copy must complete before you can create a disk from the snapshot.
 
-**Summary:** Azure ultra disks are a suitable storage with low submillisecond latency for all kinds of SAP workload. So far, Ultra disk can only be used in combinations with VMs that have been deployed through Availability Zones (zonal deployment). Ultra disk isn't supporting storage snapshots. In opposite to all other storage, Ultra disk can't be used for the base VHD disk. Ultra disk is ideal for cases where I/O workload fluctuates a lot and you want to adapt deployed storage throughput or IOPS to storage workload patterns instead of sizing for maximum usage of bandwidth and IOPS.
+**Summary:** Azure Ultra disks are a suitable storage type with submillisecond latency for all kinds of SAP workload. So far, Ultra disk can only be used in combination with VMs that have been deployed through Availability Zones (zonal deployment). In contrast to all other storage types, Ultra disk can't be used for the base VHD disk. Ultra disk is ideal for cases where I/O workload fluctuates a lot and you want to adapt deployed storage throughput or IOPS to storage workload patterns instead of sizing for maximum usage of bandwidth and IOPS.
 
 ## Azure NetApp Files
 
@@ -273,7 +273,7 @@ Azure NetApp Files is currently supported for several SAP workload scenarios:
 - [High availability for SAP NetWeaver on Azure VMs on Windows with Azure NetApp Files (SMB) for SAP applications](./high-availability-guide-windows-netapp-files-smb.md)
 - [High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise Server with Azure NetApp Files for SAP applications](./high-availability-guide-suse-netapp-files.md)
 - [Azure Virtual Machines high availability for SAP NetWeaver on Red Hat Enterprise Linux with Azure NetApp Files for SAP applications](./high-availability-guide-rhel-netapp-files.md)
-- SAP HANA deployments using NFS v4.1 shares for /hana/data and /hana/log volumes and/or NFS v4.1 or NFS v3 volumes for /hana/shared volumes as documented in the article [SAP HANA Azure virtual machine storage configurations](./hana-vm-operations-storage.md)
+- SAP HANA deployments using NFS v4.1 shares for /hana/data and /hana/log volumes and/or NFS v4.1 or NFS v3 volumes for /hana/shared volumes as documented in the article [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md)
 - IBM Db2 in SUSE or Red Hat Linux guest OS
 - Oracle deployments in Oracle Linux guest OS using [dNFS](https://docs.oracle.com/en/database/oracle/oracle-database/19/ntdbi/creating-an-oracle-database-on-direct-nfs.html#GUID-2A0CCBAB-9335-45A8-B8E3-7E8C4B889DEA) for Oracle data and redo log volumes. More details can be found in the article [Azure Virtual Machines Oracle DBMS deployment for SAP workload](./dbms-guide-oracle.md)
 - SAP ASE in SUSE or Red Hat Linux guest OS
@@ -328,7 +328,7 @@ Other built-in functionality of Azure NetApp Files storage:
 > Specifically for database deployments you want to achieve low latencies for at least your redo logs. Especially for SAP HANA, SAP requires a latency of less than 1 millisecond for HANA redo log writes of smaller sizes. To get to such latencies, see the possibilities below.
 
 > [!IMPORTANT]
-> Even for non-DBMS usage, you should use the preview functionality that allows you to create the NFS share in the same Azure Availability Zones as you placed your VM(s) that should mount the NFS shares into. This functionality is documented in the article [Manage availability zone volume placement for Azure NetApp Files](../../azure-netapp-files/manage-availability-zone-volume-placement.md). The motivation to have this type of Availability Zone alignment is the reduction of risk surface by having the NFS shares yet in another AvZone where you don't run VMs in.
+> Even for non-DBMS usage, you should use the functionality that allows you to create the NFS share in the same Azure Availability Zone as the VM(s) that mount it. This functionality is documented in the article [Manage availability zone volume placement for Azure NetApp Files](../../azure-netapp-files/manage-availability-zone-volume-placement.md). The motivation for this type of Availability Zone alignment is reducing the risk surface that comes with the NFS shares residing in an Availability Zone where you don't run VMs.
 
 - You go for the closest proximity between VM and NFS share that can be arranged by using [Application Volume Groups](../../azure-netapp-files/application-volume-group-introduction.md). The advantage of Application Volume Groups, besides allocating best proximity and with that creating lowest latency, is that your different NFS shares for SAP HANA deployments are distributed across different controllers in the Azure NetApp Files backend clusters. The disadvantage of this method is that you need to go through a pinning process again, which ends up restricting your VM deployment to a single datacenter instead of an Availability Zone as with the first method. This means less flexibility in changing VM sizes and VM families of the VMs that have the NFS volumes mounted.
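Once placement is settled, the NFS volumes are mounted by the VMs over NFS v4.1 with hard-mount semantics. A hypothetical `/etc/fstab` sketch follows; the IP addresses and export paths are placeholders, and the mount options SAP supports are listed in the linked Azure NetApp Files guides, so treat these options as illustrative only:

```
# Azure NetApp Files NFS v4.1 volumes for SAP HANA (placeholder addresses and paths)
10.0.0.4:/hana-data   /hana/data   nfs  rw,vers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,_netdev  0 0
10.0.0.4:/hana-log    /hana/log    nfs  rw,vers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,_netdev  0 0
10.0.0.5:/hana-shared /hana/shared nfs  rw,vers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,_netdev  0 0
```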
@@ -374,7 +374,7 @@ The capability matrix for SAP workload looks like:
 | Throughput SLA | Yes | - |
 | Throughput linear to capacity | Strictly linear | - |
 | HANA certified | No | - |
-| Disk snapshots possible |No| - |
+| Disk snapshots possible | Yes | - |
 | Azure Backup VM snapshots possible | No | - |
 | Costs | Low | - |
@@ -467,7 +467,7 @@ Creating a stripe set out of multiple Azure disks into one larger volume allows
 
 Some rules need to be followed on striping:
 
-- No in-VM configured storage should be used since Azure storage keeps the data redundant already
+- No in-VM configured storage redundancy should be used, since Azure Storage already keeps the data redundant at the storage backend
 - The disks the stripe set is applied to need to be of the same size
 - With Premium SSD v2 and Ultra disk, the capacity, provisioned IOPS, and provisioned throughput need to be the same for each disk
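The rules above can be checked mechanically, and the point of striping is that the volume's IOPS and throughput accumulate across the member disks. A small sketch under those assumptions (the disk figures in the usage example are illustrative, not values from any Azure price list):

```python
from dataclasses import dataclass

@dataclass
class Disk:
    size_gib: int
    iops: int             # provisioned IOPS (Premium SSD v2 / Ultra disk)
    throughput_mbps: int  # provisioned throughput

def stripe_set_capability(disks):
    """Validate the striping rules and return the aggregate (IOPS, MB/s).

    Rules from the text: all disks must be the same size, and with
    Premium SSD v2 / Ultra disk the provisioned IOPS and throughput
    must also match across disks.
    """
    if len({d.size_gib for d in disks}) != 1:
        raise ValueError("all disks in a stripe set must be the same size")
    if len({(d.iops, d.throughput_mbps) for d in disks}) != 1:
        raise ValueError("provisioned IOPS and throughput must match across disks")
    return sum(d.iops for d in disks), sum(d.throughput_mbps for d in disks)
```

For example, four identical 512-GiB disks, each provisioned with 3,000 IOPS and 125 MB/s, yield `stripe_set_capability([Disk(512, 3000, 125)] * 4)` → `(12000, 500)`.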