Commit c370946

Merge pull request #186589 from Phil-Jensen/master

Update SAP HANA support matrix

2 parents 51af1b8 + 7574641

1 file changed: +36 −46 lines changed

articles/azure-netapp-files/azacsnap-get-started.md

@@ -22,18 +22,17 @@ This article provides a guide for installing the Azure Application Consistent Sn

## Getting the snapshot tools

It's recommended that customers get the most recent version of the [AzAcSnap Installer](https://aka.ms/azacsnapinstaller) from Microsoft.

The self-installation file has an associated [AzAcSnap Installer signature file](https://aka.ms/azacsnapdownloadsignature). This file is signed with Microsoft's public key to allow for GPG verification of the downloaded installer.

Once these downloads are completed, follow the steps in this guide to install.
### Verifying the download

The installer has an associated PGP signature file with an `.asc` filename extension. This file can be used to ensure the installer downloaded is a verified Microsoft-provided file. The Microsoft PGP Public Key used for signing Linux packages is available at <https://packages.microsoft.com/keys/microsoft.asc> and has been used to sign the signature file.

The Microsoft PGP Public Key can be imported to a user's local key store as follows:
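As a sketch, importing the key could look like the following (assuming `wget` and `gpg` are available; the URL is the one given above):

```shell
# Download Microsoft's PGP public key and import it into the
# current user's local GPG key store.
wget -q https://packages.microsoft.com/keys/microsoft.asc -O microsoft.asc
gpg --import microsoft.asc
```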

@@ -46,7 +45,7 @@ The following commands trust the Microsoft PGP Public Key:

1. List the keys in the store.
2. Edit the Microsoft key.
3. Check the fingerprint with `fpr`.
4. Sign the key to trust it.

@@ -107,7 +106,7 @@ gpg: Good signature from "Microsoft (Release signing)

For more information about using GPG, see [The GNU Privacy Handbook](https://www.gnupg.org/gph/en/manual/book1.html).
112111
## Supported scenarios

@@ -131,53 +130,44 @@ See [Supported scenarios for HANA Large Instances](../virtual-machines/workloads

The following matrix is provided as a guideline on which versions of SAP HANA are supported by SAP for Storage Snapshot Backups.

134-
| Database Versions |1.0 SPS12|2.0 SPS0|2.0 SPS1|2.0 SPS2|2.0 SPS3|2.0 SPS4|
135-
|-------------------------|---------|--------|--------|--------|--------|--------|
136-
|Single Container Database||| - | - | - | - |
137-
|MDC Single Tenant | - | - |||||
138-
|MDC Multiple Tenants | - | - | - | - | - ||
139-
> √ = <small>supported by SAP for Storage Snapshots</small>
133+
134+
| Database type | Minimum database versions | Notes |
135+
|---------------------------|---------------------------|-----------------------------------------------------------------------------------------|
136+
| Single Container Database | 1.0 SPS 12, 2.0 SPS 00 | |
137+
| MDC Single Tenant | 2.0 SPS 01 | or later versions where MDC Single Tenant supported by SAP for storage/data snapshots.* |
138+
| MDC Multiple Tenants | 2.0 SPS 04 | or later where MDC Multiple Tenants supported by SAP for data snapshots. |
139+
> \* SAP changed terminology from Storage Snapshots to Data Snapshots from 2.0 SPS 02
140+
140141

141142
## Important things to remember

- After the setup of the snapshot tools, continuously monitor the storage space available and, if necessary, delete the old snapshots on a regular basis to avoid storage fill out.
- Always use the latest snapshot tools.
- Use the same version of the snapshot tools across the landscape.
- Test the snapshot tools to understand the parameters required and their behavior, along with the log files, before deployment into production.
- When setting up the HANA user for backup, you need to set up the user for each HANA instance. For MDC, create an SAP HANA user account to access the HANA instance under the SYSTEMDB (and not in the SID database). In the single container environment, it can be set up under the tenant database.
- Customers must provide the SSH public key for storage access. This action must be done once per node and for each user under which the command is executed.
- The number of snapshots per volume is limited to 250.
- If manually editing the configuration file, always use a Linux text editor such as "vi" and not Windows editors like Notepad. Using a Windows editor may corrupt the file format.
- Set up `hdbuserstore` for the SAP HANA user to communicate with SAP HANA.
- For DR: The snapshot tools must be tested on the DR node before DR is set up.
- Monitor disk space regularly. Automated log deletion is managed with the `--trim` option of `azacsnap -c backup` for SAP HANA 2 and later releases.
- **Risk of snapshots not being taken** - The snapshot tools only interact with the node of the SAP HANA system specified in the configuration file. If this node becomes unavailable, there's no mechanism to automatically start communicating with another node.
- For an **SAP HANA Scale-Out with Standby** scenario, it's typical to install and configure the snapshot tools on the primary node. But if the primary node becomes unavailable, the standby node takes over the primary node role. In this case, the implementation team should configure the snapshot tools on both nodes (primary and standby) to avoid any missed snapshots. In the normal state, the primary node takes HANA snapshots initiated by crontab. If the primary node fails over, those snapshots have to be executed from another node, such as the new primary node (former standby). To achieve this outcome, the standby node needs the snapshot tool installed, storage communication enabled, `hdbuserstore` configured, `azacsnap.json` configured, and crontab commands staged in advance of the failover.
- For an **SAP HANA HSR HA** scenario, it's recommended to install, configure, and schedule the snapshot tools on both (primary and secondary) nodes. Then, if the primary node becomes unavailable, the secondary node takes over with snapshots being taken on the secondary. In the normal state, the primary node takes HANA snapshots initiated by crontab, and the secondary node attempts to take snapshots but fails because the primary is functioning correctly. After primary node failover, those snapshots are executed from the secondary node. To achieve this outcome, the secondary node needs the snapshot tool installed, storage communication enabled, `hdbuserstore` configured, `azacsnap.json` configured, and crontab enabled in advance of the failover.
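As an illustrative sketch of that staging, the fragment below shows an `hdbuserstore` entry and a crontab line. The key name, host:port, user name, snapshot prefix, and retention value are assumptions for illustration, not values from this article:

```bash
# Store SAP HANA credentials for the backup user in hdbuserstore
# (key name AZACSNAP, localhost:30013, and the user name are illustrative).
hdbuserstore Set AZACSNAP localhost:30013 AZACSNAP_BACKUP_USER '<password>'

# Crontab entry staged on the standby/secondary node: snapshot backup every
# 4 hours with automated log trimming via --trim (SAP HANA 2 and later).
0 */4 * * * azacsnap -c backup --volume data --prefix hana_4hourly --retention 9 --trim
```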
## Guidance provided in this document
