content/product/cluster_configuration/backup_system/veeam.md
Here is a list of the known missing features or bugs related to the Veeam integration:
- Setting the ``PassengerMaxPoolSize`` variable to values higher than 1 can trigger issues, depending on the system properties of the backup server and the number of concurrent transfers, showing an error in the Veeam Backup & Replication console. If this happens too frequently, reduce the number of concurrent Passenger processes to 1 until this issue is fixed (see the configuration sketch after this list).
- The KVM appliance in step 4.2 does not include context packages. This means that, in order to configure the networking of an appliance, you must either manually choose the first available free IP in the management network or set up a DHCP server in that network.
- There is an identified bug with Ceph image datastores that prevents the opennebula-ovirtapi package from uploading images into this kind of datastore, making restores and appliance deployments fail.
- If a virtual network is owned by a user other than oneadmin (or the user chosen as the Veeam administrator in step 4.1), you may face an error when listing available networks.
- Alpine virtual machines cannot be backed up.
- During image transfers, you may see a warning message stating ``Unable to use transfer URL for image transfer: Switched to proxy URL. Backup performance may be affected``. This is expected and shouldn't affect performance.
- Spaces are not allowed in Virtual Machine names in the integration, so avoid using them (even if they are allowed in OpenNebula itself); otherwise you may face issues when performing an in-place restore of said VMs.
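If you run into the Passenger concurrency issue above, the snippet below is a minimal sketch of the workaround. It assumes the ovirtapi server runs under Apache with Phusion Passenger (consistent with the Apache log locations listed under Troubleshooting); the exact configuration file holding the directive varies by distribution and is an assumption here.

```apache
# Hypothetical Apache-level Passenger setting (file location varies by
# distribution, e.g. a conf.d drop-in); shown only as a sketch.
# Limit Passenger to a single application process to avoid the concurrent
# transfer errors described above.
PassengerMaxPoolSize 1
```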
### Architecture
The recommended hardware specifications are:
- **Memory:** 16 GB RAM
- **Disk:** Sufficient storage to hold all active backups. This server acts as a staging area to transfer backups from OpenNebula to the Veeam repository, so its disk must be large enough to accommodate the total size of these backups.
## Veeam Backup Appliance Requirements
When adding OpenNebula as a platform in Veeam, a KVM appliance is deployed (step 4.2) as a VM in OpenNebula. This appliance has the following requirements:
- **CPU:** 6 cores
- **Memory:** 6 GB RAM
- **Disk:** 100 GB
Please make sure that there is an OpenNebula host with enough capacity for this appliance. The system and image datastores should also be able to accommodate the disk storage requirement.
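As a quick sanity check, assuming the OpenNebula CLI is available on the Front-end, you can list hosts and datastores to confirm there is room for the appliance:

```shell
# Check free CPU/memory on the hosts that could run the appliance
onehost list
# Check that the system and image datastores have space for the 100 GB disk
onedatastore list
```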
## Installation and Configuration
1. Prepare the environment for the oVirtAPI Server
The backup datastore needs to have enough space to hold the disks of the VMs that are backed up.
If storage becomes a constraint, we recommend cleaning up the OpenNebula Backup datastore regularly in order to minimize the storage requirement, but keep in mind that this will reset the backup chain and force Veeam to perform a full backup and download the entire image during the next backup job.
Alongside the ovirtapi package we provide the ``/usr/lib/one/ovirtapi-server/scripts/backup_clean.rb`` script to help clean up the backup datastore. The script can be set up as a cron job on the backup server under the oneadmin user. The following crontab example will run the script every day at 12:00 AM and delete the oldest images until the backup datastore is under 50% capacity:
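The entry below is a minimal sketch of such a job; the credentials are placeholders, and the schedule follows the description above:

```shell
# Crontab entry for the oneadmin user (install with: crontab -e).
# Runs daily at 12:00 AM; replace the ONE_AUTH value with real credentials
# that are allowed to delete backup images.
0 0 * * * ONE_AUTH='oneadmin:<password>' MAX_USED_PERCENTAGE=50 /usr/lib/one/ovirtapi-server/scripts/backup_clean.rb
```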
{{< alert title="Important" color="success" >}}
For the ``/usr/lib/one/ovirtapi-server/scripts/backup_clean.rb`` script to work, you need to set the ``ONE_AUTH`` environment variable to a valid ``user:password`` pair that can delete the backup images. You may also set the ``MAX_USED_PERCENTAGE`` variable to a different threshold (50% by default).
{{< /alert >}}
3. Install and configure the oVirtAPI module
In order to install the oVirtAPI module, you need to have the OpenNebula repository configured on the backup server. You can do so by following the instructions in [OpenNebula Repositories]({{% relref "../../../software/installation_process/manual_installation/opennebula_repository_configuration.md" %}}). Then install the opennebula-ovirtapi package.
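For example, assuming one of the distribution families mentioned in the Troubleshooting section:

```shell
# Ubuntu/Debian
apt-get install -y opennebula-ovirtapi

# Alma/RHEL
dnf install -y opennebula-ovirtapi
```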
The configuration file can be found at ``/etc/one/ovirtapi-server.yml``. You should change the following variables before starting the service:
* ``one_xmlrpc``: Address of the OpenNebula Front-end. Do not include any prefix such as ``http://``; only the IP address itself is needed.
* ``endpoint_port``: Port used by the OpenNebula RPC endpoint (defaults to 2633).
* ``public_ip``: Address that Veeam is going to use to communicate with the ovirtapi server.
{{< alert title="Important" color="success" >}}
The default settings may show port 5554 in the ``public_ip`` variable; this is no longer needed, so avoid using it. Leave only the IP address in the variable, with no port.
You may also find a variable named ``instance_id``, which you should delete if you are running package version 7.0.1 or later.
{{< /alert >}}
During installation a self-signed certificate is generated at ``/etc/one/ovirtapi-ssl.crt`` for encryption. You can replace this certificate with your own and change the ``cert_path`` configuration variable.
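Putting the pieces together, a sketch of the relevant part of ``/etc/one/ovirtapi-server.yml`` might look as follows; the keys are the ones described above, while all values are illustrative assumptions:

```yaml
# /etc/one/ovirtapi-server.yml (excerpt, illustrative values)
one_xmlrpc: 192.168.1.10              # Front-end IP only, no http:// prefix
endpoint_port: 2633                   # OpenNebula RPC endpoint port
public_ip: 192.168.1.20               # IP Veeam uses to reach this server, no port
cert_path: /etc/one/ovirtapi-ssl.crt  # replace with your own certificate if needed
```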
This will open a new dialog box. In the address field, you must make sure that it points to the address of the ovirtapi server.
On the **Credentials** tab, you should set the user and password used to access the OpenNebula Front-end. You can either choose the oneadmin user or create a new user with the same privileges as oneadmin. Remember that this is an OpenNebula user, NOT a system user: that is, a user such as the ones used to access the OpenNebula FireEdge web interface, which should be listed in the System/Users tab of FireEdge or through the CLI with ``oneuser list``.
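If you prefer a dedicated user instead of oneadmin, the sketch below shows one way to create it with the OpenNebula CLI; the username and password are placeholders, and it assumes that membership in the oneadmin group is how you grant oneadmin-level privileges:

```shell
# Create a dedicated OpenNebula user for Veeam (name and password are placeholders)
oneuser create veeam-admin '<password>'
# Give it oneadmin-level privileges by moving it into the oneadmin group
oneuser chgrp veeam-admin oneadmin
# Confirm the new user shows up
oneuser list
```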
If you are using the default certificate, you may receive an untrusted certificate warning, which you can disregard.
The ovirtapi server will generate logs in the following directory, depending on the distribution:
* Ubuntu/Debian: ``/var/log/apache2``
* Alma/RHEL: ``/var/log/httpd``
If you use the cleanup script provided at ``/usr/lib/one/ovirtapi-server/scripts/backup_clean.rb``, the cleanup logs will be placed at ``/var/log/one/backup_cleaner_script.log``.
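To follow these logs while debugging, something like the commands below works; the exact file names inside the Apache log directories are common distribution defaults and an assumption here:

```shell
# Ubuntu/Debian
tail -f /var/log/apache2/error.log

# Alma/RHEL
tail -f /var/log/httpd/error_log

# Cleanup script log
tail -f /var/log/one/backup_cleaner_script.log
```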
0 commit comments