
Commit 159eaf2

Merge pull request #268337 from rdeltcheva/db2-xfs
Add xfs support remark
2 parents 1cd2d35 + c893242 commit 159eaf2

File tree: 1 file changed (+26 −25 lines)


articles/sap/workloads/dbms-guide-ibm.md

Lines changed: 26 additions & 25 deletions
@@ -7,7 +7,7 @@ keywords: 'Azure, Db2, SAP, IBM'
 ms.service: sap-on-azure
 ms.subservice: sap-vm-workloads
 ms.topic: article
-ms.date: 08/24/2022
+ms.date: 03/07/2024
 ms.author: juergent
 ms.custom: H1Hack27Feb2017
 ---
@@ -19,7 +19,7 @@ General information about running SAP Business Suite on IBM Db2 for LUW is avail
 
 For more information and updates about SAP on Db2 for LUW on Azure, see SAP Note [2233094].
 
-Various articles on SAP workload on Azure have been published. We recommend beginning with [Get started with SAP on Azure VMs](./get-started.md) and then read about other areas of interest.
+There are various articles for SAP workload on Azure. We recommend beginning with [Get started with SAP on Azure VMs](./get-started.md) and then read about other areas of interest.
 
 The following SAP Notes are related to SAP on Azure regarding the area covered in this document:
 
@@ -37,50 +37,51 @@ The following SAP Notes are related to SAP on Azure regarding the area covered i
 | [2002167] |Red Hat Enterprise Linux 7.x: Installation and Upgrade |
 | [1597355] |Swap-space recommendation for Linux |
 
-As a pre-read to this document, you should have read the document [Considerations for Azure Virtual Machines DBMS deployment for SAP workload](dbms-guide-general.md) plus other guides in the [SAP workload on Azure documentation](./get-started.md).
+As a preread to this document, review [Considerations for Azure Virtual Machines DBMS deployment for SAP workload](dbms-guide-general.md). Review other guides in the [SAP workload on Azure](./get-started.md).
 
 
 ## IBM Db2 for Linux, UNIX, and Windows Version Support
 SAP on IBM Db2 for LUW on Microsoft Azure Virtual Machine Services is supported as of Db2 version 10.5.
 
-For information about supported SAP products and Azure VM types, refer to SAP Note [1928533].
+For information about supported SAP products and Azure VM(Virtual Machines) types, refer to SAP Note [1928533].
 
 ## IBM Db2 for Linux, UNIX, and Windows Configuration Guidelines for SAP Installations in Azure VMs
 ### Storage Configuration
 For an overview of Azure storage types for SAP workload, consult the article [Azure Storage types for SAP workload](./planning-guide-storage.md)
-All database files must be stored on mounted disks of Azure block storage (Windows: NTFS, Linux: xfs or ext3).
+All database files must be stored on mounted disks of Azure block storage (Windows: NTFS, Linux: xfs, [supported](https://www.ibm.com/support/pages/file-systems-recommended-db2-linux-unix-and-windows) as of Db2 11.1, or ext3).
+
 Remote shared volumes like the Azure services in the listed scenarios are **NOT** supported for Db2 database files:
 
-* [Microsoft Azure File Service](/archive/blogs/windowsazurestorage/introducing-microsoft-azure-file-service) for all guest OS
+* [Microsoft Azure File Service](/archive/blogs/windowsazurestorage/introducing-microsoft-azure-file-service) for all guest OS.
 
 * [Azure NetApp Files](https://azure.microsoft.com/services/netapp/) for Db2 running in Windows guest OS.
 
 Remote shared volumes like the Azure services in the listed scenarios are supported for Db2 database files:
 
 * Hosting Linux guest OS based Db2 data and log files on NFS shares hosted on Azure NetApp Files is supported!
 
-Using disks based on Azure Page BLOB Storage or Managed Disks, the statements made in [Considerations for Azure Virtual Machines DBMS deployment for SAP workload](dbms-guide-general.md) apply to deployments with the Db2 DBMS as well.
+If you're using disks based on Azure Page BLOB Storage or Managed Disks, the statements made in [Considerations for Azure Virtual Machines DBMS deployment for SAP workload](dbms-guide-general.md) apply to deployments with the Db2 DBMS as well.
 
 As explained earlier in the general part of the document, quotas on IOPS throughput for Azure disks exist. The exact quotas are depending on the VM type used. A list of VM types with their quotas can be found [here (Linux)](../../virtual-machines/sizes.md) and [here (Windows)](../../virtual-machines/sizes.md).
 
-As long as the current IOPS quota per disk is sufficient, it is possible to store all the database files on one single mounted disk. Whereas you always should separate the data files and transaction log files on different disks/VHDs.
+As long as the current IOPS quota per disk is sufficient, it's possible to store all the database files on one single mounted disk. Whereas you always should separate the data files and transaction log files on different disks/VHDs.
 
 For performance considerations, also refer to chapter 'Data Safety and Performance Considerations for Database Directories' in SAP installation guides.
 
-Alternatively, you can use Windows Storage Pools (only available in Windows Server 2012 and higher) as described [Considerations for Azure Virtual Machines DBMS deployment for SAP workload](dbms-guide-general.md) or LVM or mdadm on Linux to create one large logical device over multiple disks.
+Alternatively, you can use Windows Storage Pools, which are only available in Windows Server 2012 and higher as described [Considerations for Azure Virtual Machines DBMS deployment for SAP workload](dbms-guide-general.md). On Linux you can use LVM or mdadm to create one large logical device over multiple disks.
 
 <!-- log_dir, sapdata and saptmp are terms in the SAP and DB2 world and now spelling errors -->
 
-For Azure M-Series VM, the latency writing into the transaction logs can be reduced by factors, compared to Azure Premium storage performance, when using Azure Write Accelerator. Therefore, you should deploy Azure Write Accelerator for the VHD(s) that form the volume for the Db2 transaction logs. Details can be read in the document [Write Accelerator](../../virtual-machines/how-to-enable-write-accelerator.md).
+For Azure M-Series VM, you can reduce by factors the latency writing into the transaction logs, compared to Azure Premium storage performance, when using Azure Write Accelerator. Therefore, you should deploy Azure Write Accelerator for one or more VHDs that form the volume for the Db2 transaction logs. Details can be read in the document [Write Accelerator](../../virtual-machines/how-to-enable-write-accelerator.md).
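For illustration, a minimal Azure CLI sketch of enabling Write Accelerator on the data-disk LUNs that back the Db2 log volume of an M-series VM; the resource group, VM name, and LUN numbers are placeholders.

```bash
# Hypothetical resource group, VM name, and LUN numbers - replace with your own values.
# Enables Write Accelerator on LUNs 2 and 3, which here back the /db2/<SID>/log_dir volume.
az vm update \
  --resource-group my-sap-rg \
  --name my-db2-m128s \
  --write-accelerator 2=true 3=true
```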

 IBM Db2 LUW 11.5 released support for 4-KB sector size. Though you need to enable the usage of 4-KB sector size with 11.5 by the configurations setting of db2set DB2_4K_DEVICE_SUPPORT=ON as documented in:
 
 - [Db1 11.5 performance variable](https://www.ibm.com/docs/en/db2/11.5?topic=variables-performance)
 - [Db2 registry and environment variables](https://www.ibm.com/docs/en/db2/11.5?topic=variables-registry-environment)
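A minimal sketch of setting the registry variable named above, run as the Db2 instance owner (for example db2<sid>); an instance restart is assumed to be acceptable so that the change takes effect.

```bash
# Enable 4-KB sector support for Db2 11.5, then restart the instance.
db2set DB2_4K_DEVICE_SUPPORT=ON
db2stop
db2start

# Verify the registry setting.
db2set -all | grep DB2_4K_DEVICE_SUPPORT
```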

-For older Db2 versions, a 512-Byte sector size must be used. Premium SSD disks are 4-KB native and have 512-Byte emulation. Ultra disk uses 4-KB sector size by default. You can enable 512-Byte sector size during creation of Ultra disk. Details are available [Using Azure ultra disks](../../virtual-machines/disks-enable-ultra-ssd.md#deploy-an-ultra-disk---512-byte-sector-size). This 512-Byte sector size is a prerequisite for IBM Db2 LUW versions lower than 11.5.
+For older Db2 versions, a 512 Byte sector size must be used. Premium SSD disks are 4-KB native and have 512 Byte emulation. Ultra disk uses 4-KB sector size by default. You can enable 512 Byte sector size during creation of Ultra disk. Details are available [Using Azure ultra disks](../../virtual-machines/disks-enable-ultra-ssd.md#deploy-an-ultra-disk---512-byte-sector-size). This 512 Byte sector size is a prerequisite for IBM Db2 LUW versions lower than 11.5.
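A hedged Azure CLI sketch of creating an Ultra disk with a 512 Byte logical sector size for these older Db2 versions; the resource group, disk name, region, zone, size, and IOPS/throughput values are placeholders.

```bash
# Hypothetical names and sizing - create an Ultra disk with 512-byte logical sectors.
az disk create \
  --resource-group my-sap-rg \
  --name db2-log-ultra-disk \
  --location westeurope \
  --zone 1 \
  --sku UltraSSD_LRS \
  --size-gb 512 \
  --disk-iops-read-write 20000 \
  --disk-mbps-read-write 800 \
  --logical-sector-size 512
```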

-On Windows using Storage pools for Db2 storage paths for `log_dir`, `sapdata` and `saptmp` directories, you must specify a physical disk sector size of 512-Byte. When using Windows Storage Pools, you must create the storage pools manually via command line interface using the parameter `-LogicalSectorSizeDefault`. For more information, see [New-StoragePool](/powershell/module/storage/new-storagepool).
+On Windows using Storage pools for Db2 storage paths for `log_dir`, `sapdata` and `saptmp` directories, you must specify a physical disk sector size of 512 Bytes. When using Windows Storage Pools, you must create the storage pools manually via command line interface using the parameter `-LogicalSectorSizeDefault`. For more information, see [New-StoragePool](/powershell/module/storage/new-storagepool).
 
 ## Recommendation on VM and disk structure for IBM Db2 deployment
 
@@ -129,7 +130,7 @@ Following is a baseline configuration for various sizes and uses of SAP on Db2 d
 | --- | --- | --- | :---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: |
 |M128s |/db2 |P10 |1 |500 |100 |128 |3,500 |170 || |
 |vCPU: 128 |/db2/```<SID>```/sapdata |P40 |4 |30,000 |1.000 |8.192 |30,000 |1.000 |256 KB |ReadOnly |
-|RAM: 2048 GiB |/db2/```<SID>```/saptmp |P20 |2 |4,600 |300 |1.024 |7,000 |340 |128 KB ||
+|RAM: 2,048 GiB |/db2/```<SID>```/saptmp |P20 |2 |4,600 |300 |1.024 |7,000 |340 |128 KB ||
 | |/db2/```<SID>```/log_dir |P30 |4 |20,000 |800 |4.096 |20,000 |800 |64<br />KB |Write-<br />Accelerator |
 | |/db2/```<SID>```/offline_log_dir |P30 |1 |5,000 |200 |1.024 |5,000 |200 || |

@@ -144,12 +145,12 @@ The usage of NFS v4.1 volumes based on Azure NetApp Files (ANF) is supported wit
 
 A fifth potential volume could be an ANF volume that you use for more long-term backups that you use to snapshot and store the snapshots in Azure Blob store.
 
-The configuration could look like shown here
+The configuration could look like shown here:
 
 ![Example of Db2 configuration using ANF](./media/dbms_guide_ibm/anf-configuration-example.png)
 
 
-The performance tier and the size of the ANF hosted volumes must be chosen based on the performance requirements. However, we recommend taking the Ultra performance level for the data and the log volume. It is not supported to mix block storage and shared storage types for the data and log volume.
+The performance tier and the size of the ANF hosted volumes must be chosen based on the performance requirements. However, we recommend taking the Ultra performance level for the data and the log volume. It isn't supported to mix block storage and shared storage types for the data and log volume.
 
 As of mount options, mounting those volumes could look like (you need to replace ```<SID>``` and ```<sid>``` by the SID of your SAP system):
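For illustration only, a sketch of such NFS v4.1 mounts; the ANF endpoint IP (10.0.0.4), the volume names, and the SID value AZ1/az1 are placeholders, and the mount options shown are typical values rather than a prescriptive list.

```bash
# Placeholders: ANF endpoint IP, volume names, and SID AZ1/az1 - substitute your own.
mkdir -p /db2/AZ1/sapdata /db2/AZ1/log_dir

mount -t nfs -o rw,hard,rsize=262144,wsize=262144,vers=4.1,sec=sys,tcp \
  10.0.0.4:/az1-data /db2/AZ1/sapdata

mount -t nfs -o rw,hard,rsize=262144,wsize=262144,vers=4.1,sec=sys,tcp \
  10.0.0.4:/az1-log /db2/AZ1/log_dir
```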

@@ -206,7 +207,7 @@ To increase the number of targets to write to, two options can be used/combined
 * Using more than one target directory to write the backup to
 
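A hedged sketch of the multi-target option from the bullet above; the database name AZ1 and the backup target paths are placeholders, and the command is run as the Db2 instance owner.

```bash
# Online backup written in parallel to two target directories, compressed, with logs included.
db2 backup database AZ1 online \
  to /backup/target1, /backup/target2 \
  compress include logs
```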

 >[!NOTE]
->Db2 on Windows does not support the Windows VSS technology. As a result, the application consistent VM backup of Azure Backup Service can't be leveraged for VMs the Db2 DBMS is deployed in.
+>Db2 on Windows doesn't support the Windows VSS technology. As a result, the application consistent VM backup of Azure Backup Service can't be leveraged for VMs the Db2 DBMS is deployed in.
 
 ### High Availability and Disaster Recovery
 
@@ -222,30 +223,30 @@ Db2 high availability disaster recovery (HADR) with pacemaker is supported. Both
 
 #### Windows Cluster Server
 
-Microsoft Cluster Server (MSCS) is not supported.
+Microsoft Cluster Server (MSCS) isn't supported.
 
-Db2 high availability disaster recovery (HADR) is supported. If the virtual machines of the HA configuration have working name resolution, the setup in Azure does not differ from any setup that is done on-premises. It is not recommended to rely on IP resolution only.
+Db2 high availability disaster recovery (HADR) is supported. If the virtual machines of the HA configuration have working name resolution, the setup in Azure doesn't differ from any setup that is done on-premises. It isn't recommended to rely on IP resolution only.
 
-Do not use Geo-Replication for the storage accounts that store the database disks. For more information, see the document [Considerations for Azure Virtual Machines DBMS deployment for SAP workload](dbms-guide-general.md).
+Don't use Geo-Replication for the storage accounts that store the database disks. For more information, see the document [Considerations for Azure Virtual Machines DBMS deployment for SAP workload](dbms-guide-general.md).
 
 ### Accelerated Networking
-For Db2 deployments on Windows, it is highly recommended to use the Azure functionality of Accelerated Networking as described in the document [Azure Accelerated Networking](https://azure.microsoft.com/blog/maximize-your-vm-s-performance-with-accelerated-networking-now-generally-available-for-both-windows-and-linux/). Also consider recommendations made in [Considerations for Azure Virtual Machines DBMS deployment for SAP workload](dbms-guide-general.md).
+For Db2 deployments on Windows, we highly recommend using the Azure functionality of Accelerated Networking as described in the document [Azure Accelerated Networking](https://azure.microsoft.com/blog/maximize-your-vm-s-performance-with-accelerated-networking-now-generally-available-for-both-windows-and-linux/). Also consider recommendations made in [Considerations for Azure Virtual Machines DBMS deployment for SAP workload](dbms-guide-general.md).
 
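For illustration, a minimal Azure CLI sketch of enabling Accelerated Networking on an existing NIC of the Db2 VM; the resource group and NIC name are placeholders, and depending on the VM size you may need to deallocate the VM before applying the change.

```bash
# Hypothetical resource group and NIC name - turn on Accelerated Networking for the NIC.
az network nic update \
  --resource-group my-sap-rg \
  --name my-db2-vm-nic \
  --accelerated-networking true
```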

 ### Specifics for Linux deployments
-As long as the current IOPS quota per disk is sufficient, it is possible to store all the database files on one single disk. Whereas you always should separate the data files and transaction log files on different disks.
+As long as the current IOPS quota per disk is sufficient, it's possible to store all the database files on one single disk. Whereas you always should separate the data files and transaction log files on different disks.
 
-If the IOPS or I/O throughput of a single Azure VHD is not sufficient, you can use LVM (Logical Volume Manager) or MDADM as described in the document [Considerations for Azure Virtual Machines DBMS deployment for SAP workload](dbms-guide-general.md) to create one large logical device over multiple disks.
-For the disks containing the Db2 storage paths for your sapdata and saptmp directories, you must specify a physical disk sector size of 512 KB.
+If the IOPS or I/O throughput of a single Azure VHD isn't sufficient, you can use LVM (Logical Volume Manager) or MDADM as described in the document [Considerations for Azure Virtual Machines DBMS deployment for SAP workload](dbms-guide-general.md) to create one large logical device over multiple disks.
+For the disks containing the Db2 storage paths for your `sapdata` and `saptmp` directories, you must specify a physical disk sector size of 512 KB.
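A minimal LVM sketch of striping one logical volume for `sapdata` over four data disks, assuming the disks appear as /dev/sdc through /dev/sdf; the device names, volume group, logical volume, SID AZ1, and mount point are placeholders.

```bash
# Prepare four data disks and stripe one logical volume across them (256 KiB stripe size).
pvcreate /dev/sdc /dev/sdd /dev/sde /dev/sdf
vgcreate vg_db2_data /dev/sdc /dev/sdd /dev/sde /dev/sdf
lvcreate --extents 100%FREE --stripes 4 --stripesize 256 --name lv_db2_data vg_db2_data

# Format with xfs and mount under the Db2 sapdata path.
mkfs.xfs /dev/vg_db2_data/lv_db2_data
mkdir -p /db2/AZ1/sapdata
mount /dev/vg_db2_data/lv_db2_data /db2/AZ1/sapdata
```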

 <!-- sapdata and saptmp are terms in the SAP and DB2 world and now spelling errors -->
 
 
 ### Other
-All other general areas like Azure Availability Sets or SAP monitoring apply as described in the document [Considerations for Azure Virtual Machines DBMS deployment for SAP workload](dbms-guide-general.md) for deployments of VMs with the IBM Database as well.
+All other general areas like Azure Availability Sets or SAP monitoring apply for deployments of VMs with the IBM Database as well. These general areas we describe in [Considerations for Azure Virtual Machines DBMS deployment for SAP workload](dbms-guide-general.md).
 
 ## Next steps
-Read the article
+Read the article:
 
 - [Considerations for Azure Virtual Machines DBMS deployment for SAP workload](dbms-guide-general.md)