articles/azure-netapp-files/azacsnap-get-started.md
Lines changed: 36 additions & 46 deletions

@@ -22,18 +22,17 @@ This article provides a guide for installing the Azure Application Consistent Sn
## Getting the snapshot tools

-It is recommended customers get the most recent version of the [AzAcSnap Installer](https://aka.ms/azacsnapdownload) from Microsoft.
+It's recommended customers get the most recent version of the [AzAcSnap Installer](https://aka.ms/azacsnapinstaller) from Microsoft.

-The self-installation file has an associated [AzAcSnap Installer signature file](https://aka.ms/azacsnapdownloadsignature) which is signed with Microsoft's public key to allow for GPG verification of the downloaded installer.
+The self-installation file has an associated [AzAcSnap Installer signature file](https://aka.ms/azacsnapdownloadsignature). This file is signed with Microsoft's public key to allow for GPG verification of the downloaded installer.

Once these downloads are completed, then follow the steps in this guide to install.

### Verifying the download

-The installer, which is downloadable per above, has an associated PGP signature file with an `.asc`
-filename extension. This file can be used to ensure the installer downloaded is a verified
-Microsoft provided file. The Microsoft PGP Public Key used for signing Linux packages is available here
-(<https://packages.microsoft.com/keys/microsoft.asc>) and has been used to sign the signature file.
+The installer has an associated PGP signature file with an `.asc` filename extension. This file can be used to ensure the installer downloaded is a verified
+Microsoft provided file. The Microsoft PGP Public Key used for signing Linux packages is available here (<https://packages.microsoft.com/keys/microsoft.asc>)
+and has been used to sign the signature file.

The Microsoft PGP Public Key can be imported to a user's local keyring as follows:
@@ -46,7 +45,7 @@ The following commands trust the Microsoft PGP Public Key:
1. List the keys in the store.
2. Edit the Microsoft key.
-3. Check the fingerprint with `fpr`
+3. Check the fingerprint with `fpr`.
4. Sign the key to trust it.

```bash
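# A minimal sketch of the import and trust steps above. Assumptions: the public
# key was saved locally as microsoft.asc, and the key is addressed by its UID
# email; the values shown are illustrative.
gpg --import microsoft.asc
gpg --list-keys
gpg --edit-key gpgsecurity@microsoft.com
# at the gpg> prompt: fpr (show the fingerprint), sign (sign the key to trust it), quit

# The installer can then be checked against its detached .asc signature file;
# the file names here are illustrative.
gpg --verify azacsnap_installer.run.asc azacsnap_installer.run
```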
@@ -107,7 +106,7 @@ gpg: Good signature from "Microsoft (Release signing)
| MDC Single Tenant | 2.0 SPS 01 | or later versions where MDC Single Tenant supported by SAP for storage/data snapshots.* |
+| MDC Multiple Tenants | 2.0 SPS 04 | or later where MDC Multiple Tenants supported by SAP for data snapshots. |
+> \* SAP changed terminology from Storage Snapshots to Data Snapshots from 2.0 SPS 02

## Important things to remember

- After the setup of the snapshot tools, continuously monitor the storage space available and if
necessary, delete the old snapshots on a regular basis to avoid filling up the storage.
- Always use the latest snapshot tools.
- Use the same version of the snapshot tools across the landscape.
-- Test the snapshot tools and get comfortable with the parameters required and output of the
-command before using in the production system.
-- When setting up the HANA user for backup (details below in this document), you need to
-set up the user for each HANA instance. Create an SAP HANA user account to access HANA
-instance under the SYSTEMDB (and not in the SID database) for MDC. In the single container
-environment, it can be set up under the tenant database.
-- Customers must provide the SSH public key for storage access. This action must be done once per
-node and for each user under which the command is executed.
+- Test the snapshot tools to understand the parameters required and their behavior, along with the log files, before deployment into production.
+- When setting up the HANA user for backup, you need to set up the user for each HANA instance. Create an SAP HANA user account to access the HANA
+instance under the SYSTEMDB (and not in the SID database) for MDC. In the single container environment, it can be set up under the tenant database.
+- Customers must provide the SSH public key for storage access. This action must be done once per node and for each user under which the command is executed (a key-generation sketch follows this list).
- The number of snapshots per volume is limited to 250.
-- If manually editing the configuration file, always use a Linux text editor such as "vi" and not
-Windows editors like Notepad. Using Windows editor may corrupt the file format.
-- Set up `hdbuserstore` for the SAP HANA user to communicate with SAP HANA.
+- If manually editing the configuration file, always use a Linux text editor such as "vi" and not Windows editors like Notepad. Using a Windows editor may corrupt the file format.
+- Set up `hdbuserstore` for the SAP HANA user to communicate with SAP HANA (a setup sketch follows this list).
- For DR: The snapshot tools must be tested on the DR node before DR is set up.
-- Monitor disk space regularly, automated log deletion is managed with the `--trim` option of the
-`azacsnap -c backup` for SAP HANA 2 and later releases.
--**Risk of snapshots not being taken** - The snapshot tools only interact with the node of the SAP HANA
-system specified in the configuration file. If this node becomes unavailable, there is no mechanism to
-automatically start communicating with another node.
-- For an **SAP HANA Scale-Out with Standby** scenario it is typical to install and configure the snapshot
-tools on the master node. But, if the master node becomes unavailable, the standby node will take over
-the master node role. In this case, the implementation team should configure the snapshot tools on both
-nodes (Master and Stand-By) to avoid any missed snapshots. In the normal state, the master node will take
-HANA snapshots initiated by crontab, but after master node failover those snapshots will have to be
-executed from another node such as the new master node (former standby). To achieve this outcome, the standby
-node would need the snapshot tool installed, storage communication enabled, hdbuserstore configured,
-`azacsnap.json` configured, and crontab commands staged in advance of the failover.
-- For an **SAP HANA HSR HA** scenario, it is recommended to install, configure, and schedule the
-snapshot tools on both (Primary and Secondary) nodes. Then, if the Primary node becomes unavailable,
-the Secondary node will take over with snapshots being taken on the Secondary. In the normal state, the
-Primary node will take HANA snapshots initiated by crontab and the Secondary node would attempt to take
-snapshots but fail as the Primary is functioning correctly. But after Primary node failover, those
-snapshots will be executed from the Secondary node. To achieve this outcome, the Secondary node needs the
-snapshot tool installed, storage communication enabled, `hdbuserstore` configured, azacsnap.json
-configured, and crontab enabled in advance of the failover.
+- Monitor disk space regularly.
+- Automated log deletion is managed with the `--trim` option of the `azacsnap -c backup` command for SAP HANA 2 and later releases (a sample scheduled backup command is sketched after this list).
+- **Risk of snapshots not being taken** - The snapshot tools only interact with the node of the SAP HANA system specified in the configuration file. If this
+node becomes unavailable, there's no mechanism to automatically start communicating with another node.
+- For an **SAP HANA Scale-Out with Standby** scenario, it's typical to install and configure the snapshot tools on the primary node. But, if the primary node becomes
+unavailable, the standby node will take over the primary node role. In this case, the implementation team should configure the snapshot tools on both
+nodes (primary and standby) to avoid any missed snapshots. In the normal state, the primary node will take HANA snapshots initiated by crontab. If the primary
+node fails over, those snapshots will have to be executed from another node, such as the new primary node (former standby). To achieve this outcome, the standby
+node would need the snapshot tool installed, storage communication enabled, `hdbuserstore` configured, `azacsnap.json` configured, and crontab commands staged
+in advance of the failover.
+- For an **SAP HANA HSR HA** scenario, it's recommended to install, configure, and schedule the snapshot tools on both (Primary and Secondary) nodes. Then, if
+the Primary node becomes unavailable, the Secondary node will take over with snapshots being taken on the Secondary. In the normal state, the Primary node
+will take HANA snapshots initiated by crontab. The Secondary node would attempt to take snapshots but fail as the Primary is functioning correctly. But,
+after Primary node failover, those snapshots will be executed from the Secondary node. To achieve this outcome, the Secondary node needs the snapshot tool
+installed, storage communication enabled, `hdbuserstore` configured, `azacsnap.json` configured, and crontab enabled in advance of the failover.
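
As a point of reference for the SSH key and `hdbuserstore` bullets above, a minimal setup sketch; the key name `AZACSNAP`, host `saphana1`, port `30013`, and the credential values are illustrative assumptions rather than values from this article:

```bash
# Generate an SSH key pair for the user that runs azacsnap; the public key is
# what gets provided for storage access (the comment text is illustrative).
ssh-keygen -t rsa -b 4096 -C "azacsnap@saphana1"

# Store SAP HANA connection credentials in the secure user store so azacsnap
# can connect without a password on the command line.
# Syntax: hdbuserstore Set <KEY> <host>:<port> <user> <password>
hdbuserstore Set AZACSNAP saphana1:30013 AZACSNAP "<password>"

# Confirm the entry (stored passwords aren't displayed).
hdbuserstore List
```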
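For the crontab scheduling and `--trim` log management mentioned in the list, a hedged sketch of a scheduled backup; the schedule, prefix, and retention count are illustrative assumptions:

```bash
# Illustrative crontab entry for the user running azacsnap: take an
# application-consistent snapshot every 4 hours, keep the 9 most recent
# snapshots with that prefix, and trim older backup logs.
# (Assumes azacsnap is on the user's PATH and azacsnap.json is in the working directory.)
0 */4 * * * azacsnap -c backup --volume data --prefix hana_4hourly --retention 9 --trim
```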