Commit c9b4fd8

M #-: Merge of 7.0 maintenance branch for 7.0.2 release
1 parent dac57e6 commit c9b4fd8

File tree

11 files changed (+201, -36 lines)

content/product/apps-marketplace/managing_marketplaces/marketapps.md

Lines changed: 9 additions & 0 deletions
@@ -213,6 +213,15 @@ VMTEMPLATE
     ID: -1
 ```
 
+When an appliance is downloaded from the Marketplace, short hash values may be appended to object names to ensure uniqueness. This convention also applies to additional objects created during the download process, such as virtual machine templates and disks. To keep names easily identifiable, the original object name is preserved and placed before the appended hash and index. For example:
+```default
+$ oneimage list
+  ID USER     GROUP    NAME                                                                 DATASTORE  SIZE TYPE PER STAT RVMS
+   2 oneadmin oneadmin Windows VM Template_0-Contextualization Packages-aa38438e4a-2        default      2M CD    No rdy     0
+   1 oneadmin oneadmin Windows VM Template_0-Windows VirtIO Drivers - v0.1.285-e3cf06243b-1 default    754M CD    No rdy     0
+   0 oneadmin oneadmin Windows VM Template_0-Empty disk-fcee73d9e5-0                        default      5G OS    No rdy     0
+```
+
 <a id="marketapp-download"></a>
 
 You can also download an app to a standalone file on your desktop:

content/product/cluster_configuration/backup_system/veeam.md

Lines changed: 64 additions & 28 deletions
@@ -45,6 +45,11 @@ The OpenNebula-Veeam&reg; Backup Integration works by exposing a native **oVirt-
 <td style="min-width: 100px; border: 1px solid; vertical-align: top; padding: 5px;"><p>Full Discovery</p></td>
 <td style="min-width: 100px; border: 1px solid; vertical-align: top; padding: 5px;"><p>Veeam automatically discovers and displays the OpenNebula cluster hierarchy (clusters, hosts, VMs, and storage).</p></td>
 </tr>
+<tr>
+<td style="min-width: 100px; border: 1px solid; vertical-align: top; padding: 5px;"><p><strong>Portability</strong></p></td>
+<td style="min-width: 100px; border: 1px solid; vertical-align: top; padding: 5px;"><p>VMware import</p></td>
+<td style="min-width: 100px; border: 1px solid; vertical-align: top; padding: 5px;"><p>Enables restoring virtual machines backed up in Veeam from VMware into OpenNebula.</p></td>
+</tr>
 </tbody>
 </table>
 
@@ -108,16 +113,10 @@ The following table summarizes the supported backup modes for each storage syste
 
 Here is a list of the known issues and limitations affecting the Veeam integration with OpenNebula:
 
-- Setting the ``PassengerMaxPoolSize`` variable to values higher than 1 can trigger issues depending on the system properties of the backup server and the amount of concurrent transfers, showing an error in the Veeam Backup & Replication console. If this happens too frequently, reduce the amount of concurrent Passenger processes to 1 until this issue is fixed.
 - The KVM appliance in step 4.2 does not include context packages. This implies that in order to configure the networking of an appliance, you must either manually choose the first available free IP in the management network or set up a DHCP service router.
-- There is an identified bug with Ceph image datastores that avoids the opennebula-ovirtapi package from uploading images into these kind of datastores, making restores and appliance deployments fail.
-- If a virtual network is owned by a user other than oneadmin (or the user chosen as the Veeam administrator in step 4.1) you may face an error when listing available networks.
-- Alphine virtual machines cannot be backed up.
-- During image transfers, you may see a warning message stating ``Unable to use transfer URL for image transfer: Switched to proxy URL. Backup performance may be affected ``. This is expected and shouldn't affect performance.
+- Alpine virtual machines cannot be backed up.
+- During image transfers, you may see a warning message stating ``Unable to use transfer URL for image transfer: Switched to proxy URL. Backup performance may be affected``. This is expected and shouldn't affect performance.
 - Spaces are not allowed in Virtual Machine names in the integration, so avoid using them (even if they are allowed in OpenNebula itself); otherwise you may face issues when performing in-place restores of said VMs.
-- Veeam may send multiple login token requests, which can cause the OpenNebula token limit to be reached, causing backup failures if a backup was in progress when the token limit was reached.
-- Incremental backups may fail for VMs with more than 1 disk attached to them.
-- The number of VCPU in the KVM appliance may be set to 1 regardless of the configuration in Veeam. This can be solved by manually changing the number of vCPU in OpenNebula and restarting the VM.
 
 ## Architecture
 
@@ -139,19 +138,23 @@ To ensure a compatible integration between OpenNebula and Veeam Backup, the foll
 
 To ensure full compatibility with the ovirtAPI module, the Backup Server must run one of the following operating systems:
 
-- AlmaLinux 8 or 9
+- AlmaLinux 9
 - Ubuntu 22.04 or 24.04
-- RHEL 8 or 9
-- Debian 11 or 12
+- RHEL 9
+- Debian 12
 
-The recommended hardware specifications are:
+The minimum hardware specifications are:
 
-- **CPU:** 4 cores
+- **CPU:** 8 cores
 - **Memory:** 16 GB RAM
 - **Disk:** Sufficient storage to hold all active backups. This server acts as a staging area to transfer backups from OpenNebula to the Veeam repository, so its disk must be large enough to accommodate the total size of these backups.
 
 ## Veeam Backup Appliance Requirements
-When adding OpenNebula as a platform into Veeam, a KVM appliance will be deployed (step 4.2) as a VM into OpenNebula. This appliance has the following requirements:
+When adding OpenNebula as a platform into Veeam, a KVM appliance will be deployed (step 4.2) as a VM into OpenNebula. This appliance has the following minimum requirements:
 
 - **CPU:** 6 cores
 - **Memory:** 6 GB RAM
@@ -161,11 +164,11 @@ Please make sure that there is an OpenNebula host with enough capacity for this
 
 ## Installation and Configuration
 
-1. Prepare the environment for the oVirtAPI Server
+### 1. Prepare the environment for the oVirtAPI Server
 
 A server should be configured to expose both the Rsync Backup datastore and the oVirtAPI Server. This server should be accessible from all the clusters that you want to be able to back up via the management network shown in the architecture diagram. The oVirtAPI Server is going to act as the communication gateway between Veeam and OpenNebula.
 
-2. Create a backup datastore
+### 2. Create a backup datastore
 
 The next step is to create a backup datastore in OpenNebula. This datastore will be used by the oVirtAPI module to handle the backup of the Virtual Machines before sending the backup data to Veeam. Currently only [Rsync Datastore]({{% relref "../../../product/cluster_configuration/backup_system/rsync.md" %}}) is supported. An additional property called ``VEEAM_DS`` must exist in the backup datastore template and be set to ``YES``.
 
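As an illustrative sketch of such a datastore (the name, host, and user values below are assumptions; see the Rsync Datastore guide for the full attribute list), the template could look like:

```default
$ cat veeam_backup.ds
NAME       = "veeam_backup"
TYPE       = "BACKUP_DS"
DS_MAD     = "rsync"
RSYNC_HOST = "backup-server.example"
RSYNC_USER = "oneadmin"
VEEAM_DS   = "YES"

$ onedatastore create veeam_backup.ds
```

The only attribute specific to this integration is ``VEEAM_DS``; the rest follow the regular Rsync backup datastore configuration.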
@@ -215,7 +218,7 @@ We provide alongside the ovirtapi package the ``/usr/lib/one/ovirtapi-server/scr
 {{< alert title="Remember" color="success" >}}
 For the ``/usr/lib/one/ovirtapi-server/scripts/backup_clean.rb`` script to work you need to set the ONE_AUTH environment variable to a valid ``user:password`` pair that can delete the backup images. You may also set the ``MAX_USED_PERCENTAGE`` variable to a different threshold (set to 50% by default).{{< /alert >}}
 
-3. Install and configure the oVirtAPI module
+### 3. Install and configure the oVirtAPI module
 
 In order to install the oVirtAPI module, you need to have the OpenNebula repository configured in the backup server. You can do so by following the instructions in [OpenNebula Repositories]({{% relref "../../../software/installation_process/manual_installation/opennebula_repository_configuration.md" %}}). Then, install the opennebula-ovirtapi package.
 
@@ -249,19 +252,11 @@ In RHEL and Alma environments, you may face issues with the passenger package de
 
 {{< /alert >}}
 
-#### Performance Improvements
-To increase the performance of the oVirtAPI module, you may want to modify the ammount of processes assigned to it to better utilize the CPUs available in the backup server. To do so, modify the ``PassengerMaxPoolSize`` parameters in the Apache configuration file to match the available CPUs. Depending on your distro it can be located in the following directories:
-
-* Debian/Ubuntu: ``/etc/apache2/sites-available/ovirtapi-server.conf``
-* Alma/RHEL: ``/etc/httpd/conf.d/ovirtapi-server.conf``
-
-After performing changes, restart the ``httpd`` or ``apache`` service.
-
-4. Add OpenNebula to Veeam
+### 4. Add OpenNebula to Veeam
 
 To add OpenNebula as a hypervisor to Veeam, configure it as an oVirt KVM Manager in Veeam and choose the IP address of the oVirtAPI module. You can follow the [official Veeam documentation](https://helpcenter.veeam.com/docs/vbrhv/userguide/connecting_manager.html?ver=6) for this step or follow the next steps:
 
-4.1 Add the new virtualization manager
+#### 4.1 Add the new virtualization manager
 
 The first step should be to add the ovirtAPI Backup server to Veeam. Head over to **Backup Infrastructure**, then to **Managed Servers**, and then click **Add Manager**:
 
@@ -287,7 +282,7 @@ As a last step, you can click finish and the new ovirtAPI server should be liste
 
 ![image](/images/veeam/hypervisor_added.png)
 
-4.2 Deploy the KVM appliance
+#### 4.2 Deploy the KVM appliance
 
 In order for Veeam to be able to perform backup and restore operations, it must deploy a dedicated Virtual Machine to act as a worker. To deploy it, go to the **Backup Infrastructure** tab, then **Backup Proxies**, and click **Add Proxy**:
 
@@ -319,7 +314,7 @@ In the next step, Veeam will take care of deploying the appliance. Once finished
 
 ![image](/images/veeam/appliance_listed.png)
 
-4.3 Verification
+#### 4.3 Verification
 
 If everything is set properly, you should be able to see the available Virtual Machines in the **Inventory** tab under the **Virtual Infrastructure** -> **oVirt KVM** section.
 
@@ -333,6 +328,47 @@ The ovirtapi server will generate logs in the following directory depending on t
 * Alma/RHEL: ``/var/log/httpd``
 
 If you use the cleanup script provided at ``/usr/lib/one/ovirtapi-server/scripts/backup_clean.rb``, the cleanup logs will be placed at ``/var/log/one/backup_cleaner_script.log``.
+
+## Performance Improvements
+
+To improve image transfer speed, you can increase the number of concurrent processes to better utilize the backup server's resources. This is controlled by the ``PassengerMaxPoolSize`` parameter in your Apache configuration file.
+
+### Adjusting the Process Pool
+
+You can find the configuration file in the following locations, depending on your distribution:
+
+* Debian/Ubuntu: ``/etc/apache2/sites-available/ovirtapi-server.conf``
+* Alma/RHEL: ``/etc/httpd/conf.d/ovirtapi-server.conf``
+
+After editing and saving the file, you must restart the webserver for the change to take effect:
+
+* Debian/Ubuntu: ``sudo systemctl restart apache2``
+* Alma/RHEL: ``sudo systemctl restart httpd``
+
+### Balancing RAM and CPU
+
+When setting ``PassengerMaxPoolSize``, you must balance RAM and CPU availability.
+
+**Memory**
+
+Each active Passenger process consumes approximately 150-200 MB of RAM. You can use the following formula as a starting point to determine a safe maximum, leaving a 30% buffer for the OS and other services:
+
+``(TOTAL_SERVER_RAM_MB * 0.70) / 200 = Recommended MaxPoolSize``
+
+**CPU**
+
+While increasing the pool size, monitor your CPU usage during active transfers. If the CPU load becomes the bottleneck (consistently high usage), adding more processes won't increase speed and may even slow things down. In that case, you will need to increase the number of CPUs or vCPUs assigned to the backup server.
+
+### Interpreting Veeam Job Statistics
+
+The Veeam job statistics window shows a breakdown of the load, which is crucial for identifying the true bottleneck in your backup chain.
+
+* **Source:** This represents your backup server. A high load (e.g., 99%) here is ideal. It means your server is working at full capacity and that the bottleneck is correctly placed on the source, not on other components.
+* **Proxy:** This is the KVM appliance deployed by Veeam. If its load is consistently high (e.g., >90%), it is the bottleneck and requires more resources (vCPU/RAM).
+* **Network:** This indicates that the transfer speed is being limited by the available bandwidth on the management network connecting the components.
 
 ## Volatile disk backups
 
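As a quick aid for the memory formula above, a hypothetical sizing snippet (it assumes a Linux backup server with `free -m` available, and reuses the ~200 MB-per-process estimate and 30% headroom from that section):

```bash
# Suggest a PassengerMaxPoolSize: 70% of total RAM divided by ~200 MB per process
total_ram_mb=$(free -m | awk '/^Mem:/ {print $2}')
pool_size=$(( total_ram_mb * 70 / 100 / 200 ))
echo "Suggested PassengerMaxPoolSize: ${pool_size}"
```

For a 16 GB (16384 MB) server this yields 57; cap the value further if CPU, not RAM, turns out to be the bottleneck during transfers.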
content/product/cluster_configuration/hosts_and_clusters/cluster_guide.md

Lines changed: 55 additions & 0 deletions
@@ -146,3 +146,58 @@ The [Sunstone UI interface]({{% relref "../../control_plane_configuration/graphi
 - See cluster details and update overcommitment.
 
 ![details_cluster](/images/sunstone_cluster_details.png)
+
+## Enhanced VM Compatibility (EVC)
+
+The Enhanced VM Compatibility (EVC) feature facilitates the management of heterogeneous OpenNebula clusters by masking host CPU capabilities to enforce a unified base model. Using a lowest-common-denominator approach ensures CPU compatibility across hosts and enables seamless live migration of Virtual Machines between hosts with different processor generations.
+
+EVC is configured at the cluster level. This simplifies management and improves scalability by allowing administrators to add newer hardware to existing clusters without preventing VM migration due to CPU differences.
+
+### Using EVC with the CLI
+
+1. Inspect the cluster to view the current template and attributes:
+
+```bash
+$ onecluster show default
+```
+
+Look for the `CLUSTER TEMPLATE` section. If `EVC_MODE` is not present, it has not been configured for the cluster.
+
+2. Set the `EVC_MODE` attribute on a cluster using `onecluster update`. For example, to set a Sandy Bridge baseline:
+
+```bash
+$ onecluster update default
+```
+
+Then add the `EVC_MODE` attribute to the list:
+
+```bash
+...
+EVC_MODE="sandybridge"
+...
+```
+
+The exact CPU model string depends on the hypervisor's supported CPU map. You can view the supported CPU models of a given host, listed under the `KVM_CPU_MODELS` key, with the following command:
+
+```bash
+$ onehost show <host-id> -j
+```
+
+Make sure to select a CPU model available on all hosts in the cluster; otherwise, VM deployment may fail on hosts that do not support it.
+
+3. To revert or remove EVC, update the cluster template to remove the `EVC_MODE` attribute (for example by setting it to an empty string or re-applying a template without the attribute).
+
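The common models across hosts can be extracted with a short shell pipeline. This is a sketch only: the host IDs `0` and `1` and the `jq` path into the `onehost show -j` JSON output are assumptions to adapt to your deployment:

```bash
# List the CPU models reported by each host, then keep those present on both.
for h in 0 1; do
  onehost show "$h" -j | jq -r '.HOST.TEMPLATE.KVM_CPU_MODELS' | tr ' ' '\n' | sort -u
done | sort | uniq -c | awk '$1 == 2 { print $2 }'
```

Any model printed by the pipeline is a safe candidate for `EVC_MODE` on a two-host cluster; for N hosts, compare the count against N instead of 2.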
+### Using EVC with Sunstone
+
+The FireEdge Sunstone web UI provides a convenient way to enable and change EVC without editing templates directly.
+
+1. Open the Infrastructure → Clusters view and select the cluster you want to configure.
+
+2. Click the Update button and go to the Select Hosts tab, where you will see the EVC Mode section. To enable EVC, choose a model from the drop-down list, then click Finish.
+
+![Update cluster EVC mode](/images/sunstone_cluster_evc_update.png)
+
+3. Once the EVC mode is set, it is listed in the cluster attributes just as if it had been set through the CLI.
+
+![EVC set in cluster attributes](/images/sunstone_cluster_evc_attributes.png)

content/product/cluster_configuration/san_storage/netapp.md

Lines changed: 16 additions & 5 deletions
@@ -179,7 +179,6 @@ $ cat netapp_system.ds
 NAME = "netapp_system"
 TYPE = "SYSTEM_DS"
 DISK_TYPE = "BLOCK"
-DS_MAD = "netapp"
 TM_MAD = "netapp"
 NETAPP_HOST = "10.1.234.56"
 NETAPP_USER = "admin"
@@ -219,6 +218,7 @@ $ cat netapp_image.ds
 NAME = "netapp_image"
 TYPE = "IMAGE_DS"
 DISK_TYPE = "BLOCK"
+DS_MAD = "netapp"
 TM_MAD = "netapp"
 NETAPP_HOST = "10.1.234.56"
 NETAPP_USER = "admin"
@@ -241,9 +241,10 @@ Since Volumes contain the LUNs and snapshots, they are by default configured to
 | Attribute                 | Description                                           |
 | ------------------------- | ----------------------------------------------------- |
 | `NETAPP_SUFFIX`           | Volume/LUN name suffix.                               |
-| `NETAPP_GROW_THRESHOLD`   | Volume autogrow threshold in percent. Default: 96     |
+| `NETAPP_GROW_THRESHOLD`   | Volume autogrow threshold in percent. Default: 90     |
 | `NETAPP_GROW_RATIO`       | Volume maximum autogrow ratio. Default: 2             |
 | `NETAPP_SNAPSHOT_RESERVE` | Volume snapshot reserve in percent. Default: 10       |
+| `NETAPP_STANDALONE`       | Volume FlexClones always split. Default: NO           |
 
 {{< alert title="Note" color="success" >}}
 Volumes will be created with the extra reservation space in mind, which will be `size * ( 1 + NETAPP_SNAPSHOT_RESERVE / 100 )`.
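As a worked example of that formula: with the default `NETAPP_SNAPSHOT_RESERVE` of 10, a 100 GB (102400 MB) disk is provisioned as a 110 GB (112640 MB) volume. The same arithmetic as an integer-only shell check (sizes in MB are illustrative):

```bash
# Volume size = disk size * (1 + NETAPP_SNAPSHOT_RESERVE / 100), in MB
size_mb=102400
reserve=10
vol_mb=$(( size_mb * (100 + reserve) / 100 ))
echo "${vol_mb} MB"   # 112640 MB = 110 GB
```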
@@ -258,7 +259,7 @@ Volumes will be created with the extra reservation space in mind, which will be
 - Image datastore: `one_<datastore_id>_<image_id>` (volume), `one_<datastore_id>_<image_id>_lun` (LUN)
 - System datastore: `one_<vm_id>_disk_<disk_id>` (volume), `one_<datastore_id>_<vm_id>_disk_<disk_id>_lun` (LUN)
 - **Operations:**
-  - Non‐persistent: FlexClone, then split
+  - Non‐persistent: FlexClone, optionally split when `NETAPP_STANDALONE="YES"`
   - Persistent: Rename
 
 Symbolic links from the System datastore will be created for each Virtual Machine on its Host once the LUNs have been mapped.
@@ -267,9 +268,19 @@ Symbolic links from the System datastore will be created for each Virtual Machin
 The minimum size for a NetApp volume is 20 MB, so any disk smaller than that will result in a 20 MB volume; however, the LUN inside will be the correct size.
 {{< /alert >}}
 
-## Known Issues
+**Backups process details:**
+
+Both Full and Incremental backups are supported by NetApp. For Full Backups, a snapshot of the Volume containing the VM disk LUN is taken and attached to the host, where it is converted into a qcow2 image and uploaded to the backup datastore.
+
+Incremental backups are created by first taking the base full backup from a snapshot; however, this snapshot is then retained on the NetApp Volume rather than deleted after the backup is taken. When another incremental backup is taken, a new snapshot is created, and both the previous and current snapshots are cloned to new Volumes, where they are attached to the host and compared for differences at the block level. These block changes are stored in a sparse QCOW2 file backed by the previous snapshot, which is then uploaded to the backup datastore. The old snapshot is then removed while the new one is retained. When incremental backups are restored, the backing chain is rebuilt before restoring the backup to the VM disk.
 
-Currently the NetApp password on the Datastore is not encrypted due to a typo in the configuration file `/etc/one/oned.conf`. To encrypt this password, the Encrypted Attributes section of this file you must change `DATASTORE_ENCRYPTED_ATTR = "NETAPP_PASSWORD"` to `DATASTORE_ENCRYPTED_ATTR = "NETAPP_PASS"` and then restart OpenNebula.
+{{< alert title="Note" color="success" >}}
+You can configure the block size (default 2097152 B / 2 MB) for incremental backups by modifying the file at `/var/tmp/one/etc/tm/san/backup.conf`.
+{{< /alert >}}
+
+{{< alert title="Warning" color="warning" >}}
+The incremental backup feature of NetApp requires the `nbd` kernel module to be loaded and the `nbdfuse` package to be installed on all OpenNebula nodes.
+{{< /alert >}}
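To satisfy the warning above ahead of time, the module and tool can be prepared on each node. A sketch assuming a Debian/Ubuntu node with systemd (on Alma/RHEL, substitute `sudo dnf install nbdfuse`; the `nbdfuse` package name is assumed to match the upstream project):

```bash
# Load the nbd kernel module now, and persist it across reboots
sudo modprobe nbd
echo nbd | sudo tee /etc/modules-load.d/nbd.conf

# Install nbdfuse
sudo apt-get install -y nbdfuse

# Verify the module is loaded
lsmod | grep -w nbd
```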
 
 ## System Considerations
 