From 40328facfa7d9054ab064d8671d222f0103dd624 Mon Sep 17 00:00:00 2001 From: Neal Hansen Date: Wed, 10 Sep 2025 20:30:06 +0000 Subject: [PATCH] OpenNebula NetApp integration documentation ready for merge Signed-off-by: Neal Hansen --- opennebula/_index.yml | 19 +++ opennebula/opennebula-ontap.adoc | 227 ++++++++++++++++++++++++++++ opennebula/opennebula-overview.adoc | 79 ++++++++++ opennebula/sidebar.yml | 10 ++ 4 files changed, 335 insertions(+) create mode 100644 opennebula/_index.yml create mode 100644 opennebula/opennebula-ontap.adoc create mode 100644 opennebula/opennebula-overview.adoc create mode 100644 opennebula/sidebar.yml diff --git a/opennebula/_index.yml b/opennebula/_index.yml new file mode 100644 index 0000000..170c834 --- /dev/null +++ b/opennebula/_index.yml @@ -0,0 +1,19 @@ +indexpage: + title: NetApp Solutions for OpenNebula + lead: "NetApp Virtualization Solutions are a set of strategic and technology capabilities that showcase the capabilities of NetApp storage for virtualization using OpenNebula." + summary: "NetApp Virtualization Solutions are a set of strategic and technology capabilities that showcase the capabilities of NetApp storage for virtualization using OpenNebula."
+ tiles: + - title: "Solutions" + links: + - title: "Overview" + url: /opennebula-overview.html + - title: "NetApp storage for OpenNebula" + url: /opennebula-ontap.html +# + - title: "Additional Resources" + links: + - title: "Installing OpenNebula" + url: https://docs.opennebula.io/7.0/software/installation_process/ + - title: "Configuring NetApp SAN Datastore for OpenNebula Enterprise Edition" + url: https://docs.opennebula.io/7.0/integrations/storage_extensions/netapp/ +# \ No newline at end of file diff --git a/opennebula/opennebula-ontap.adoc b/opennebula/opennebula-ontap.adoc new file mode 100644 index 0000000..8c58985 --- /dev/null +++ b/opennebula/opennebula-ontap.adoc @@ -0,0 +1,227 @@ +--- +sidebar: sidebar +permalink: opennebula/opennebula-ontap.html +keywords: netapp, opennebula, libvirt, kvm, qemu, lxc, vm, all-flash, nfs, iscsi, lvm, ontap, storage, aff +summary: Shared storage in OpenNebula clusters enables fast VM live migration, centralized backups, and consistent image management across hosts. NetApp ONTAP storage can support OpenNebula system and image datastores, while also providing guest VMs with file, block, or object storage when needed. +--- += OpenNebula Clusters with ONTAP +:hardbreaks: +:nofooter: +:icons: font +:linkattrs: +:imagesdir: ../media/ + +[.lead] +Shared storage in OpenNebula clusters enables fast VM live migration, centralized backups, and consistent image management across hosts. NetApp ONTAP storage can support OpenNebula system and image datastores, while also providing guest VMs with file, block, or object storage when needed. + +KVM hosts must have FC, Ethernet, or other supported interfaces cabled to the switches, with connectivity to the ONTAP logical interfaces (LIFs). + +Always check the https://mysupport.netapp.com/matrix/#welcome[Interoperability Matrix Tool] for supported configurations.
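A quick way to confirm that each KVM host can actually reach the ONTAP LIFs is to probe the protocol ports before starting any datastore configuration. This is a minimal sketch, not a NetApp-provided tool: the LIF addresses below are placeholders, and it uses bash's `/dev/tcp` (substitute `nc -z` if you prefer).

[source,shell]
----
# check_lif prints "reachable" or "unreachable" for one host/port pair.
check_lif() {
    local host=$1 port=$2
    if timeout 2 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
        echo "reachable"
    else
        echo "unreachable"
    fi
}

# Placeholder LIF addresses: iSCSI listens on 3260, NFS on 2049.
for lif in 192.0.2.10:3260 192.0.2.11:3260; do
    echo "${lif} $(check_lif "${lif%%:*}" "${lif##*:}")"
done
----

Run this from every KVM host; a LIF that is cabled but unreachable usually points to VLAN or subnet configuration rather than a problem on the ONTAP side.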
+ +== High-level ONTAP Features + +*Common features* + +* Scale-out clusters +* Secure authentication and RBAC support +* Multi-admin verification for zero-trust administration +* Secure multitenancy +* Data replication with SnapMirror +* Point-in-time copies with snapshots +* Space-efficient clones +* Storage efficiency features such as deduplication and compression +* Trident CSI support for Kubernetes +* SnapLock +* Tamper-proof snapshot copy locking +* Encryption support +* FabricPool to tier cold data to an object store +* BlueXP and Data Infrastructure Insights integration +* Microsoft Offloaded Data Transfer (ODX) + +*NAS* + +* FlexGroup volumes are scale-out NAS containers, providing high performance along with load distribution and scalability. +* FlexCache allows data to be distributed globally while still providing local read and write access to the data. +* Multiprotocol support enables the same data to be accessible via SMB as well as NFS. +* NFS nConnect allows multiple TCP connections per NFS mount, increasing network throughput. This improves utilization of the high-speed NICs available on modern servers. +* NFS session trunking provides increased data transfer speeds, high availability, and fault tolerance. +* pNFS provides an optimized data path connection. +* SMB Multichannel provides increased data transfer speed, high availability, and fault tolerance. +* Integration with Active Directory/LDAP for file permissions. +* Secure connections with NFS over TLS. +* NFS Kerberos support. +* NFS over RDMA. +* Name mapping between Windows and UNIX identities. +* Autonomous Ransomware Protection. +* File System Analytics. + +*SAN* + +* Stretch clusters across fault domains with SnapMirror active sync. +* ASA models provide active/active multipathing and fast path failover. +* Support for FC, iSCSI, and NVMe-oF protocols. +* Support for iSCSI CHAP mutual authentication. +* Selective LUN Map and portsets.
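Several of the SAN features above depend on a healthy `dm-multipath` layer on the hosts. The snippet below counts paths per ONTAP LUN from `multipath -ll`-style output; the sample is illustrative (a typical two-fabric, four-path layout, with some fields elided), and in practice you would pipe the live command output instead.

[source,shell]
----
# Illustrative `multipath -ll` output for one ONTAP LUN (4 paths, 2 priority groups).
sample='3600a098038314344522b4d59694a6a6d dm-3 NETAPP,LUN C-Mode
size=100G features=0 hwhandler=1 alua wp=rw
|-+- policy=service-time prio=50 status=active
| |- 3:0:0:0 sdb 8:16 active ready running
| `- 4:0:0:0 sdc 8:32 active ready running
`-+- policy=service-time prio=10 status=enabled
  |- 3:0:1:0 sdd 8:48 active ready running
  `- 4:0:1:0 sde 8:64 active ready running'

# Count H:C:T:L path lines per LUN; on a live host, replace the printf
# with `multipath -ll`.
printf '%s\n' "$sample" | awk '
/NETAPP/ { lun = $1 }
/[0-9]+:[0-9]+:[0-9]+:[0-9]+ / { paths[lun]++ }
END { for (l in paths) print l, paths[l], "paths" }'
----

A LUN reporting fewer paths than expected (here, four) is an early sign of a failed fabric, a missing iSCSI session, or an incomplete igroup mapping.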
+ +== OpenNebula cluster storage types supported with ONTAP + +OpenNebula supports multiple storage backends, but in the context of NetApp integration the following three methods are fully supported and validated: + +[width=100%,cols="30% 20% 20% 20%",frame=all,grid=all,options="header"] +|=== +| Feature | NetApp ONTAP API | LVM-thin | NFS +| VM disks | Yes | Yes | Yes +| Image storage^1^ | Yes | Yes | Yes +| Live snapshots | Yes | Yes | Yes +| Clone VM or image | Yes | Yes | Yes +| Incremental backup^2^ | Yes | Yes | Yes +|=== + +*Notes:* + +1. Image storage refers to using the backend for OpenNebula image datastores. The LVM-thin and ONTAP API methods involve copying or creating block devices from the image source. +2. Incremental backups work with `qcow2` disks (on NFS) or with block devices that support tracking changes. The OpenNebula NetApp ONTAP driver uses rolling snapshots to make incremental backups. IMPORTANT: incremental backups require the `nbd` kernel module to be loaded. + +== Helpful Commands + +The following sections collect commands that are used in one or more places during setup. + +[[iscsi-prereqs]] +=== Host iSCSI & multipath prerequisites +. Install packages: + * RHEL-like: `device-mapper-multipath`, `iscsi-initiator-utils` + * Debian-based: `multipath-tools`, `open-iscsi` +. Enable services (persist across reboots): ++ +[source,shell] +---- +sudo systemctl enable --now iscsid # open-iscsi for Debian-based +sudo systemctl enable --now multipathd +---- +. Discover *all* iSCSI LIF portals (creates node records; safe to do before LUNs exist): ++ +[source,shell] +---- +iscsiadm -m discovery -t sendtargets -p <iSCSI_LIF1_IP> +iscsiadm -m discovery -t sendtargets -p <iSCSI_LIF2_IP> +iscsiadm -m node -o show +---- + +[[iscsi-login]] +=== Host iSCSI login & verify +.
Log in to all discovered nodes and confirm sessions: ++ +[source,shell] +---- +iscsiadm -m node --login +iscsiadm -m session -o show -P3 # expect all sessions LOGGED_IN +iscsiadm -m node --op update -n node.startup -v automatic # auto-login on reboot +---- +. Verify multipath and device creation if LUNs are already mapped: ++ +[source,shell] +---- +multipath -ll +ls -l /dev/mapper # expect dm-mapped ONTAP LUNs +---- +*NOTE*: Do not expect devices unless you have already created the LUN and mapped it to the initiator group. + +[[nbd-enable]] +=== Host NBD module enable and persistent configuration +. Load the NBD module for the current boot session. You can append `max_part=#` to set the maximum number of partitions per device; incremental backups, however, only require the module itself, not the partition devices. ++ +[source,shell] +---- +sudo modprobe nbd +---- +. Make the module load on boot: + - Debian-based: `echo nbd | sudo tee -a /etc/modules` + - RHEL-like: `echo nbd | sudo tee /etc/modules-load.d/nbd.conf` +. Update boot files: + - Debian-based: `sudo update-initramfs -u` + - RHEL-like: `sudo dracut -f` + +== NetApp ONTAP API Driver + +OpenNebula’s native NetApp integration uses ONTAP’s API to automatically create and manage volumes, LUNs, snapshots, and mappings. This method offers the highest level of automation and avoids manual iSCSI and LVM setup. Keeping the link:https://docs.opennebula.io/7.0/integrations/storage_extensions/netapp/[OpenNebula documentation] open during these steps will provide more information about creating these resources in ONTAP. + +=== Storage Configuration Tasks + +. Enable the iSCSI protocol on the ONTAP SVM (Storage VM). Follow link:https://docs.netapp.com/us-en/ontap/san-management/index.html[ONTAP 9 SAN Storage Management] for more information. ++ +image::opennebula-ontap-image01.png[iSCSI protocol enabled] +. Create at least two iSCSI LIFs (logical interfaces) per controller for multipath access.
Follow the steps found in the above link. ++ +image::opennebula-ontap-image03.png[iSCSI LIFs] +. Configure an initiator group (igroup) containing the IQNs of all OpenNebula hosts. Follow the steps found in the above link. Each host's IQN can be found or defined in the `/etc/iscsi/initiatorname.iscsi` file (if you modify this, log out of all iSCSI sessions and restart iscsid with `systemctl restart iscsid` before logging back in). +. Create an ONTAP role and user account with ONTAP REST API access scoped to the target SVM. This user will be used by the NetApp driver in OpenNebula. See the link:https://docs.netapp.com/us-en/ontap-automation/rest/rbac_overview.html[Work with users and roles] ONTAP documentation for more information. Keep note of the username and password; they will be used in the Virtualization Configuration Tasks. +. Gather the SVM iSCSI target IQN and the UUIDs of the following resources for use in the Virtualization Configuration Tasks: + - The SVM + - The Aggregate(s) / Tier(s) to be used + - The igroup with the OpenNebula hosts ++ +[source,shell] +---- +NETAPP_SVM="ad32e4a7-f436-11ef-bcf8-d039ea927bab" +NETAPP_TARGET="iqn.1992-08.com.netapp:sn.ad32e4a7f43611efbcf8d039ea927bab:vs.3" +NETAPP_AGGREGATES="8569ee25-f7c5-41f0-9497-877ff01e0f91" +NETAPP_IGROUP="9591dea7-2c2f-11f0-bdde-d039ea927bab" +---- + + +=== Virtualization Configuration Tasks + +Having the link:https://docs.opennebula.io/7.0/integrations/storage_extensions/netapp/[OpenNebula documentation] available for these steps will provide more information about creating these resources. + +. Ensure the <<iscsi-prereqs>> section has been completed. +. Complete the <<iscsi-login>> section. +. Enable the `nbd` kernel module in order to use incremental backups. This can be done temporarily by running `sudo modprobe nbd`; see <<nbd-enable>> to make it persistent across reboots. +. Ensure automatic iSCSI login and multipath configuration for LUN detection and failover. +.
Add a new image datastore in OpenNebula with `DS_MAD=netapp` and `TM_MAD=netapp`, and a system datastore with `TM_MAD=netapp` (system datastores do not use DS_MAD). Refer to the OpenNebula Documentation mentioned above for all required and optional attributes. +. The two datastore definitions will be nearly identical, the only differences being that system datastores do not use `DS_MAD` and their `TYPE` is `SYSTEM_DS` rather than `IMAGE_DS`. Refer to the OpenNebula Documentation linked above for examples. + +== LVM-thin (iSCSI) + +This integration uses NetApp iSCSI LUNs in combination with LVM-thin on the OpenNebula hosts. It provides reliable shared block storage with native LVM snapshot support, and requires some manual configuration. + +=== Storage Configuration Tasks + +. Enable the iSCSI protocol on the ONTAP SVM. Follow link:https://docs.netapp.com/us-en/ontap/san-management/index.html[ONTAP 9 SAN Storage Management] for more information. +. Create at least two LIFs per controller for HA and performance (multipath). Follow the steps found in the above link. +. Configure an initiator group (igroup) containing the IQNs of all OpenNebula hosts. Follow the steps found in the above link. Each host's IQN can be found or defined in the `/etc/iscsi/initiatorname.iscsi` file (if you modify this, log out of all iSCSI sessions and restart iscsid with `systemctl restart iscsid` before logging back in). +. Create a Volume and a corresponding LUN sized according to your intended datastore capacity, and map the LUN to the initiator group. Follow the steps found in the above link. ++ +image::opennebula-ontap-image04.png[Add LVM Volume] + +=== Virtualization Configuration Tasks + +Having the link:https://docs.opennebula.io/7.0/solutions/certified_hw_platforms/san_appliances/netapp_-_lvm_thin_validation/[OpenNebula NetApp LVM Documentation] available for these steps will provide more information about creating these resources.
Also, the generic link:https://docs.opennebula.io/7.0/product/cluster_configuration/storage_system/lvm_drivers/[OpenNebula SAN Datastore] documentation will be helpful. + +. Ensure the <<iscsi-prereqs>> section has been completed. +. Complete the <<iscsi-login>> section. +. Use `pvcreate` and `vgcreate` to prepare the LUN(s) as shared LVM volume groups. +. In OpenNebula, register a system datastore using `DS_MAD=fs_lvm` or `block_lvm`, and set `TM_MAD=ssh`. +. Image datastores can be hosted on NFS or a separate local filesystem; OpenNebula will copy images into LVs at deployment time. + +== NFS Storage + +NetApp exports over NFS can be used for both image and system datastores in OpenNebula. This method is simple to set up, supports `qcow2`-based live snapshots, and works well with incremental backup and contextual files. See the link:https://docs.opennebula.io/7.0/product/cluster_configuration/storage_system/nas_ds/[OpenNebula NAS/NFS Datastore] documentation for further details. + +=== Storage Configuration Tasks + +. Enable the NFS protocol on the ONTAP SVM. Follow link:https://docs.netapp.com/us-en/ontap/nas-management/index.html[ONTAP 9 NAS Storage Management] for more information. ++ +image::opennebula-ontap-image02.png[NFS storage configuration] +. Create at least two LIFs per controller for performance and failover (optionally using session trunking with NFS v4.1+). +. Create a Volume and configure an export policy allowing access from all OpenNebula hosts. ++ +image::opennebula-ontap-image06.png[NFS volume configuration] +. Export the volume over NFS using the assigned policy and provide the export path to the virtualization team. + +=== Virtualization Configuration Tasks + +. Mount the NFS export on all OpenNebula hosts in the correct directory (`/var/lib/one/datastores/`). +. Use your platform’s tested NFS v4.x options and specify multiple LIFs for resilience. Avoid `soft` for VM datastores (the `intr`/`nointr` options are ignored by modern kernels). Basic example: `hard,nfsvers=4.1,sec=sys` +.
Register the NFS-backed datastore in OpenNebula with `DS_MAD=fs` and `TM_MAD=qcow2` (for image) or `TM_MAD=shared` (for system). +. `qcow2` images support native KVM snapshots and incremental backup. +. ISO files, kernel/context files, and template overlays can also be stored on NFS datastores for convenience. \ No newline at end of file diff --git a/opennebula/opennebula-overview.adoc b/opennebula/opennebula-overview.adoc new file mode 100644 index 0000000..d355a41 --- /dev/null +++ b/opennebula/opennebula-overview.adoc @@ -0,0 +1,79 @@ +--- +sidebar: sidebar +permalink: opennebula/opennebula-overview.html +keywords: netapp, opennebula, libvirt, kvm, qemu, lxc, vm +summary: OpenNebula is an open-source cloud and edge computing platform that combines KVM and LXC with advanced features like multi-tenancy, automatic provisioning, and resource elasticity. +--- += Overview of OpenNebula +:hardbreaks: +:nofooter: +:icons: font +:linkattrs: +:imagesdir: ../media + +[.lead] +OpenNebula is an open-source cloud and edge computing platform that combines KVM and LXC with advanced features like multi-tenancy, automatic provisioning, and resource elasticity. + +== Overview + +OpenNebula is a cloud orchestration platform that can be deployed on-premises, at the edge, or in hybrid and multi-cloud environments. It primarily supports the KVM open-source hypervisor, with additional support for LXC system containers. + +Cloud resources are managed by one or more OpenNebula Front-ends, which execute and interact with various daemons, services, and APIs to provide deployment, orchestration, and monitoring of infrastructure. + +OpenNebula is modular and designed for flexibility. It supports multiple deployment models and integration options, including different database backends, external authentication systems, and accounting platforms.
Management can be performed through the link:https://docs.opennebula.io/7.0/product/control_plane_configuration/graphical_user_interface/overview/[Sunstone web interface], the link:https://docs.opennebula.io/7.0/product/operation_references/configuration_references/cli/[command-line interface], or the link:https://docs.opennebula.io/7.0/product/integration_references/system_interfaces/[XML-RPC API], which also has wrappers for Ruby, Python, and other languages. + +image::opennebula-overview-image01.png[OpenNebula Sunstone Dashboard] + +== Cluster Management + +OpenNebula clusters can use NetApp ONTAP storage in a few different ways, depending on your setup and preferences. One approach uses the LVM Thin driver on top of iSCSI LUNs. In this case, OpenNebula hosts are added to a NetApp initiator group, and each host logs in to shared iSCSI targets. The connected LUN is then used as a local LVM volume group, and OpenNebula creates thin-provisioned logical volumes for virtual machines. This setup enables standard OpenNebula features like live migration and high availability, while also benefiting from ONTAP's deduplication, thin provisioning, and snapshot capabilities. + +For environments that want deeper integration, OpenNebula also includes a native NetApp driver that talks directly to the ONTAP API. This method allows OpenNebula to create and manage volumes and LUNs automatically on the storage side. Once created, the LUNs are mapped to the correct initiator groups so they’re immediately available to the right hosts. This setup is ideal for dynamic provisioning, minimizes abstraction layers, and reduces manual setup on the storage system. + +== Compute + +OpenNebula allows you to define compute resources for each virtual machine using templates. You can configure the number of vCPUs, CPU topology (cores and sockets), NUMA node affinity, CPU model, and resource usage limits, and attach PCI devices, including vGPUs.
These settings help control scheduling, performance isolation, and live migration compatibility across the cluster. + +image::opennebula-compute-image01.png[VM CPU settings in Sunstone] + +Memory allocation is also defined in the template, including support for dynamic memory ballooning on compatible guests. CPU and memory settings can be updated at runtime for supported configurations. + +For more details on KVM virtualization and tuning, refer to the link:https://docs.opennebula.io/7.0/product/virtual_machines_operation/virtual_machine_definitions/overview/[VM Management documentation]. + +== Storage + +OpenNebula supports multiple storage backends. In this context we focus on those backed by NetApp: NFS exports, iSCSI LUNs with LVM-thin, and the native NetApp driver that provisions LUNs directly through the ONTAP API. + +A virtual machine’s data is stored across two types of datastores: the **system datastore**, which holds ephemeral runtime files and the active VM disks (both persistent and non-persistent), and the **image datastore**, which holds base images and persistent volumes while they are not in use. For NetApp-based deployments, these datastores can reside on shared NFS mounts or on LVM-thin volumes layered over iSCSI LUNs. + +When using the LVM-thin method, OpenNebula uses a LUN that has LVM metadata set up on it. This setup supports thin provisioning, snapshots, and high performance, with multipath recommended for availability. The NetApp driver, on the other hand, communicates directly with ONTAP to create and manage volumes and LUNs for each VM disk. These are automatically mapped to the correct initiator groups so they’re visible to the right hosts without additional manual steps. + +Other storage solutions are supported by OpenNebula; however, these methods are optimized for use with NetApp.
The NFS datastore can use either RAW or QCOW2 images, while the LVM and NetApp drivers use RAW, since the disks are presented to the virtual machine as block devices. + +== Networking + +OpenNebula supports a variety of networking setups, from simple bridged networks to complex virtual switches and software-defined overlays. Network configuration is typically handled at the cluster or host level, with support for VLANs, VXLAN, IP management, and automated address assignment through virtual networks. + +For deployments using NetApp over iSCSI, proper network configuration is critical. Each OpenNebula host must have IP connectivity to the ONTAP iSCSI target LIFs, ideally across multiple paths for redundancy and performance. Multipath I/O (MPIO) should be enabled on each host, and iSCSI sessions should be established to all relevant LIFs in the target SVM. This allows the OS and OpenNebula to access the SAN-backed LUNs reliably. + +For detailed steps on configuring virtual networks in OpenNebula, refer to the link:https://docs.opennebula.io/7.0/networking/overview.html[OpenNebula Networking Guide]. + +== Monitoring + +OpenNebula provides built-in monitoring through the Sunstone dashboard, showing real-time metrics and status for clusters, hosts, virtual machines, and datastores. This includes CPU and memory usage, disk I/O, and VM lifecycle events. Each host view also includes package and driver version information to help with troubleshooting and lifecycle tracking. + +For external monitoring, OpenNebula supports Prometheus integration out of the box. Prometheus metrics are exported by the OpenNebula daemons, and can be visualized using pre-built Grafana dashboards or customized to fit your needs. This is the recommended method for monitoring large-scale OpenNebula environments. Metrics can also be exported to other time-series databases like InfluxDB or Graphite using hooks and external collectors.
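As one example of feeding host-side SAN health into such a Prometheus stack, the sketch below emits the current iSCSI session count in the Prometheus text exposition format. The metric name and the collector path in the comment are assumptions, not part of any OpenNebula or NetApp tooling; adjust both to your environment.

[source,shell]
----
# Emit the host's iSCSI session count as a Prometheus gauge.
# Typical use (path is an assumption; adjust to your node_exporter setup):
#   iscsi_session_metric > /var/lib/node_exporter/textfile/iscsi.prom
iscsi_session_metric() {
    # iscsiadm prints one line per established session; a missing tool
    # or no sessions both count as 0.
    local sessions
    sessions=$(iscsiadm -m session 2>/dev/null | wc -l)
    printf '# HELP host_iscsi_sessions Established iSCSI sessions on this host.\n'
    printf '# TYPE host_iscsi_sessions gauge\n'
    printf 'host_iscsi_sessions %d\n' "${sessions}"
}

iscsi_session_metric
----

A matching alert rule can then fire when the gauge drops below the number of sessions expected for your LIF layout.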
+ +In NetApp-backed deployments, it’s especially useful to monitor I/O performance, iSCSI session health, and multipath path availability. These metrics are typically gathered from the host OS and exposed through the same Prometheus stack, giving full visibility into both compute and storage layers. + +== Data Protection + +OpenNebula offers full and incremental backup capabilities for VM disks. You can configure backup policies in VM templates or individual VMs using the `BACKUP_CONFIG` attributes (`MODE`, `KEEP_LAST`, `FS_FREEZE`). Incremental backups are supported on **qcow2** or block-backed datastores (e.g. LVM-thin) and allow space-efficient, repeatable backups. + +When used with NetApp storage: +- **LVM-thin iSCSI-backed datastores** benefit from underlying ONTAP snapshot and clone capabilities; OpenNebula’s backup mechanism works on top of these logical volumes. +- With the **native NetApp driver**, snapshots and clones can be managed within ONTAP and integrated into backup workflows with lower overhead. + +Restores can be performed in-place (replacing the VM disk) or via full restore, creating new VM/image objects. OpenNebula’s tools (`onevm restore`, `oneimage`) support selecting increments or doing full rebuilds depending on the saved state. + +For NetApp-centric deployments, we recommend combining OpenNebula incremental backup with ONTAP-level snapshot policies to maximize efficiency and minimize backup time and storage costs. diff --git a/opennebula/sidebar.yml b/opennebula/sidebar.yml new file mode 100644 index 0000000..9d12685 --- /dev/null +++ b/opennebula/sidebar.yml @@ -0,0 +1,10 @@ +section: opennebula +title: "OpenNebula" +url: /opennebula/index.html +entries: + - title: "Overview" + pdf-filename: "Overview of OpenNebula with NetApp" + url: /opennebula/opennebula-overview.html + - title: "NetApp storage for OpenNebula" + pdf-filename: "NetApp storage for OpenNebula" + url: /opennebula/opennebula-ontap.html