The minimum hardware specifications are:
- **Disk:** Sufficient storage to hold all active backups. This server acts as a staging area to transfer backups from OpenNebula to the Veeam repository, so its disk must be large enough to accommodate the total size of these backups.
## Veeam Backup Appliance Requirements
When adding OpenNebula as a platform in Veeam, a KVM appliance is deployed (step 4.2) as a VM in OpenNebula. This appliance has the following minimum requirements:
- **CPU:** 6 cores
The next step is to create a backup datastore in OpenNebula.
{{< alert title="Remember" color="success" >}}
The backup datastore must be created on the backup server configured in step 1. Also, remember to add this datastore to any cluster that you want to be able to back up.
{{< /alert >}}
**Rsync Datastore**
2.1. Create the Rsync backup datastore
Here is an example of how to create an Rsync datastore on a Host named `backup-host` and then add it to a given cluster:
```shell
# Create the Rsync backup datastore
cat << EOF > /tmp/rsync-datastore.txt
NAME="VeeamDS"
DS_MAD="rsync"
TM_MAD="-"
TYPE="BACKUP_DS"
VEEAM_DS="YES"
RESTIC_COMPRESSION="-"
RESTRICTED_DIRS="/"
RSYNC_HOST="localhost"
RSYNC_USER="oneadmin"
SAFE_DIRS="/var/tmp"
EOF

onedatastore create /tmp/rsync-datastore.txt

# Add the datastore to the cluster with "onecluster adddatastore <cluster-name> <datastore-name>"
```
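As a quick sanity check, you can verify the new datastore and attach it to a cluster; the cluster name `default` below is only an example, and the commands assume the `oneadmin` CLI environment:

```shell
# Confirm the datastore was created with the expected attributes
onedatastore show VeeamDS

# Attach it to the cluster whose VMs should be backed up ("default" is an example name)
onecluster adddatastore default VeeamDS
```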
{{< alert title="Warning" color="warning" >}}
SELinux and AppArmor may cause issues in the backup server if not configured properly. Either disable them or make sure to provide permissions to the datastore directories (``/var/lib/one/datastores``).
{{< /alert >}}
You can find more details regarding the Rsync datastore in [Backup Datastore: Rsync]({{% relref "../../../product/cluster_configuration/backup_system/rsync.md" %}}).
If storage becomes a constraint, we recommend cleaning up the OpenNebula backup datastore.
Alongside the ovirtapi package, we provide the ``/usr/lib/one/ovirtapi-server/scripts/backup_clean.rb`` script to aid in cleaning up the backup datastore. This script can be set up as a cron job on the backup server under the ``oneadmin`` user. The following crontab example will run the script every day at 12:00 am and delete the oldest images until the backup datastore is under 50% capacity:
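A sketch of such a crontab entry, assuming it is added to ``oneadmin``'s crontab and that the auth file lives at ``/var/lib/one/.one/one_auth`` (both are assumptions; adjust to your environment):

```shell
# Edit oneadmin's crontab with: crontab -e -u oneadmin
# Run the cleanup script daily at 12:00 am; ONE_AUTH must point to credentials
# allowed to delete backup images (the path below is an assumed example)
0 0 * * * ONE_AUTH=/var/lib/one/.one/one_auth /usr/lib/one/ovirtapi-server/scripts/backup_clean.rb
```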
{{< alert title="Note" color="info" >}}
For the ``/usr/lib/one/ovirtapi-server/scripts/backup_clean.rb`` script to work, you need to set the ``ONE_AUTH`` environment variable to a valid ``user:password`` pair that can delete the backup images. You may also set the ``MAX_USED_PERCENTAGE`` variable to a different threshold (set to 50% by default).
{{< /alert >}}
Once the package is installed, a ``oneadmin`` user will be created.
In RHEL and Alma environments, you may face issues with the passenger package dependencies (``mod_passenger`` and ``mod_ssl``). You may need to enable the appropriate repository and install these packages manually before proceeding.
To add OpenNebula as a hypervisor to Veeam, configure it as an oVirt KVM Manager in Veeam and choose the IP address of the oVirtAPI module. You can follow the [official Veeam documentation](https://helpcenter.veeam.com/docs/vbrhv/userguide/connecting_manager.html?ver=6) for this step, or follow the steps below:
4.1. Add the new virtualization manager
The first step should be to add the ovirtAPI Backup server to Veeam. Head over to **Backup Infrastructure**, then to **Managed Servers**, and then click **Add Manager**:
As a last step, you can click **Finish** and the new ovirtAPI server should be listed.
4.2. Deploy the KVM appliance
For Veeam to perform backup and restore operations, it must deploy a dedicated Virtual Machine to act as a worker. To deploy it, go to the **Backup Infrastructure** tab, then **Backup Proxies**, and click **Add Proxy**:
In the next step, Veeam will take care of deploying the appliance.
4.3. Verification
If everything is set properly, you should be able to see the available Virtual Machines in the **Inventory** tab under the **Virtual Infrastructure** -> **oVirt KVM** section.
## Logging information
The ovirtapi server generates logs in the following directory, depending on the operating system used:
To improve image transfer speed, you can increase the number of concurrent processes to better utilize the backup server's resources. This is controlled by the ``PassengerMaxPoolSize`` parameter in your Apache configuration file.
When increasing ``PassengerMaxPoolSize``, balance the number of processes against the RAM and CPU available on the backup server, since each additional process consumes additional resources.
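As a sketch, the directive is set in the Apache configuration for the ovirtapi virtual host; the value ``12`` below is an arbitrary example, not a recommendation:

```
# Allow up to 12 concurrent Passenger application processes (example value)
PassengerMaxPoolSize 12
```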
### Adjusting the Process Pool
The configuration file is available in the following locations depending on your distribution: