The NetApp system requires specific configurations. This driver operates using a Storage VM that provides iSCSI connections, with volumes/LUNs mapped directly to each host after creation. Configure and enable the iSCSI protocol according to your infrastructure requirements.

1. Define Aggregates/Local Tiers for your Storage VM:

   - In ONTAP System Manager: **Storage > Storage VMs > Select your SVM > Edit > Limit volume creation to preferred local tiers**
   - Assign at least one aggregate/tier and note their UUID(s) from the URL for later use

2. To enable capacity monitoring:

   - Enable *Enable maximum capacity limit* on the same Edit Storage VM screen
   - If capacity monitoring is not configured, set ``DATASTORE_CAPACITY_CHECK=no`` in the attributes of both OpenNebula datastores

3. This driver manages snapshots, so do not enable automated snapshots for this SVM; snapshots not made through OpenNebula will not be picked up automatically.
4. If you do not plan to use the administrator account, you should create a new user with all API permissions and assign it to the SVM.
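
For example, a dedicated SVM-scoped user for the driver could be created from the ONTAP CLI along these lines (``apiuser`` and the ``vsadmin`` role are illustrative choices; adjust the role to match your security policy):

```text
security login create -vserver <svm_name> -user-or-group-name apiuser \
  -application http -authentication-method password -role vsadmin
```

The ``http`` application grants REST API access, which is what this driver uses.
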
- Add initiators from ``/etc/iscsi/initiatorname.conf`` (all nodes and frontend)

- Discover and log in to the iSCSI targets

  - ``iscsiadm -m discovery -t sendtargets -p <target_ip>`` for each iSCSI target
  - ``iscsiadm -m node -l`` to log in to all discovered targets

4. Persistent iSCSI Configuration:

   - Set ``node.startup = automatic`` in ``/etc/iscsi/iscsid.conf``
   - Ensure network connectivity to the NetApp ONTAP API interface; the datastore will be in the ERROR state if the API is not accessible or the SVM cannot be monitored properly.
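
The ``node.startup`` change above can be applied non-interactively. A minimal sketch, shown here on a temporary copy of the file (on real nodes, target ``/etc/iscsi/iscsid.conf``):

```shell
# Sketch only: edits a throwaway copy; apply to /etc/iscsi/iscsid.conf on each node.
conf=$(mktemp)
printf 'node.startup = manual\n' > "$conf"   # simulate the shipped default

# Flip the startup mode so iSCSI sessions are restored automatically on boot.
sed -i 's/^node.startup = manual$/node.startup = automatic/' "$conf"

grep '^node.startup' "$conf"   # prints: node.startup = automatic
```
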
.. important:: The system datastore requires a shared filesystem (e.g., NFS mount from frontend to nodes) for device link management and VM metadata distribution.

Symbolic links from the system datastore will be created for each virtual machine on its host once the LUNs have been mapped.
.. note:: The minimum size for a NetApp Volume is 20MB, so any disk smaller than that will result in a 20MB Volume; however, the LUN inside will be the correct size.

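
The sizing rule from the note above can be sketched as a simple clamp (a hypothetical helper for illustration, not part of the driver):

```shell
# Hypothetical helper illustrating the 20MB volume floor:
# the LUN keeps the requested size, but its containing volume is at least 20MB.
volume_size_mb() {
  lun_mb="$1"
  if [ "$lun_mb" -lt 20 ]; then echo 20; else echo "$lun_mb"; fi
}

volume_size_mb 5    # prints: 20  (a 5MB LUN lives inside a 20MB volume)
volume_size_mb 100  # prints: 100
```
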
Occasionally, under network interruptions or if a volume is deleted directly from NetApp, the iSCSI connection may drop or fail. This can cause the system to hang on a ``sync`` command, which in turn may lead to OpenNebula operation failures on the affected host. Although the driver is designed to manage these issues automatically, it's important to be aware of these potential iSCSI connection challenges.
Here are a few tips to clean these up:
- If you just have a few extra devices left over from failures, running ``rescan_scsi_bus.sh -r -m`` may help to remove them.
- If the entire multipath setup is left over, running ``multipath -f <multipath_device>`` may help to remove it; be very careful to run this on the correct multipath device.
.. note:: This behavior stems from the inherent complexities of iSCSI connections and is not exclusive to OpenNebula or NetApp.
If the devices persist, there are some steps to examine the issue:
1. Run ``dmsetup ls --tree`` or ``lsblk`` to see if the mapped devices are still connected to the block devices. You may see devices not attached to a mapper entry in ``lsblk``.
2. For the devices that are not connected in ``lsblk`` and that you know are *not your own devices* (e.g. ``/dev/sda`` is often the root device with the OS), run ``echo 1 > /sys/bus/scsi/devices/sdX/device/delete``, where ``sdX`` is each of the devices that were involved in the multipath.
3. Once those disk devices are gone, you may have leftover device mapper entries that you can often remove by running ``dmsetup remove /dev/mapper/<device_name>``.
4. If the devices cannot be removed, double-check that no process is still using them with ``fuser -v $(realpath /dev/mapper/<device_name>)``.

   - If you see the kernel is using it as swap, you can remove it by running ``swapoff /dev/mapper/<device_name>`` and then ``dmsetup remove /dev/mapper/<device_name>``.
   - If you see another process using it, examine and if necessary kill the process, then run ``dmsetup remove /dev/mapper/<device_name>``.
   - If you are unable to kill the process, or there is nothing visibly using the mapper entry, then run the following commands:

     1. Run ``dmsetup suspend /dev/mapper/<device_name>``
     2. Run ``dmsetup wipe_table /dev/mapper/<device_name>``
     3. Run ``dmsetup resume /dev/mapper/<device_name>``
     4. Run ``dmsetup remove /dev/mapper/<device_name>``
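
The four ``dmsetup`` steps above can be wrapped in a small helper. A sketch with a dry-run mode (the function name and the ``DRY_RUN`` flag are illustrative, not part of any tool):

```shell
# Illustrative wrapper for the suspend/wipe_table/resume/remove sequence.
# With DRY_RUN=1 it only prints the commands it would run.
dm_force_remove() {
  name="$1"
  for cmd in suspend wipe_table resume remove; do
    if [ "${DRY_RUN:-0}" = "1" ]; then
      echo "dmsetup $cmd /dev/mapper/$name"
    else
      dmsetup "$cmd" "/dev/mapper/$name" || return 1
    fi
  done
}

DRY_RUN=1 dm_force_remove mpatha   # prints the four dmsetup commands
```
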
This should take care of most of the I/O lockups you may see due to these failures. Please contact the OpenNebula Support team if you need additional assistance.