Commit a70267f

F OpenNebula/one#6753: update docs to latest changes
Signed-off-by: Neal Hansen <[email protected]>
1 parent 634e324 commit a70267f

File tree: 1 file changed (+53 −49 lines)

source/open_cluster_deployment/storage_setup/netapp_ds.rst

Lines changed: 53 additions & 49 deletions
@@ -13,59 +13,35 @@ The `NetApp ONTAP documentation <https://docs.netapp.com/us-en/ontap/>`_ may be
 NetApp ONTAP Setup
 ================================================================================
 
-The NetApp system requires specific configurations. This driver operates using a Storage VM that provides iSCSI connections, with volumes/LUNs mapped directly to each host after API creation. Configure and enable the iSCSI protocol according to your infrastructure requirements.
+The NetApp system requires specific configurations. This driver operates using a Storage VM that provides iSCSI connections, with volumes/LUNs mapped directly to each host after creation. Configure and enable the iSCSI protocol according to your infrastructure requirements.
 
 1. Define Aggregates/Local Tiers for your Storage VM:
 
    - In ONTAP System Manager: **Storage > Storage VMs > Select your SVM > Edit > Limit volume creation to preferred local tiers**
-   - Assign at least one aggregate/tier and note their UUID(s) for later use
+   - Assign at least one aggregate/tier and note their UUID(s) from the URL for later use
 
 2. To enable capacity monitoring:
 
    - Enable *Enable maximum capacity limit* on the same Edit Storage VM screen
   - If not configured, set ``DATASTORE_CAPACITY_CHECK=no`` in both of the OpenNebula datastores' attributes
 
-3. No automated snapshot configuration is required - OpenNebula handles this.
+3. This driver manages the snapshots itself, so do not enable any automated snapshots for this SVM; snapshots not made through OpenNebula will not be picked up automatically.
+
+4. If you do not plan to use the administrator account, create a new user with all API permissions and assign it to the SVM.
 
 Frontend Setup
 ================================================================================
 
-The frontend requires network access to the NetApp ONTAP API endpoint and proper NFS/iSCSI configuration:
+The frontend requires network access to the NetApp ONTAP API endpoint:
 
 1. API Access:
 
-   - Ensure network connectivity to the NetApp ONTAP API interface
-
-2. NFS Exports:
-
-   - Add to ``/etc/exports`` on the frontend:
-
-     - Per-datastore: ``/var/lib/one/datastores/101``
-     - Shared datastores: ``/var/lib/one/datastores``
-
-.. note:: The frontend only needs to mount System Datastores, **not** Image Datastores.
-
-3. iSCSI Initiators:
-
-   - Configure initiator security in NetApp Storage VM:
-
-     - **Storage VM > Settings > iSCSI Protocol > Initiator Security**
-     - Add initiators from ``/etc/iscsi/initiatorname.conf`` (all nodes and frontend)
-
-   - Discover and login to the iSCSI targets
-
-     - ``iscsiadm -m discovery -t sendtargets -p <target_ip>`` for each iSCSI target
-     - ``iscsiadm -m node -l`` to login to all discovered targets
-
-4. Persistent iSCSI Configuration:
-
-   - Set ``node.startup = automatic`` in ``/etc/iscsi/iscsid.conf``
+   - Ensure network connectivity to the NetApp ONTAP API interface; the datastore will be in ERROR state if the API is not accessible or the SVM cannot be monitored properly.
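To sanity-check that API access from the frontend, something along these lines could be used; the management address, the user, and the ``/api/svm/svms`` endpoint here are illustrative placeholders rather than values taken from this guide:

.. code-block::

   # list the SVMs visible through the ONTAP REST API; a valid JSON response
   # means the frontend can reach and authenticate against the API endpoint
   # (curl prompts for the password; address and user are placeholders)
   curl -k -u admin https://ontap.example.com/api/svm/svms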
 
-
-Node Setup
+Frontend & Node Setup
 ================================================================================
 
-Configure nodes with persistent iSCSI connections and NFS mounts:
+Configure both the frontend and nodes with persistent iSCSI connections:
 
 1. iSCSI Initiators:
 
@@ -76,14 +52,25 @@ Configure nodes with persistent iSCSI connections and NFS mounts:
 
    - Discover and login to the iSCSI targets
 
-     - ``iscsiadm -m discovery -t sendtargets -p <target_ip>`` for each iSCSI target
+     - ``iscsiadm -m discovery -t sendtargets -p <target_ip>`` for each iSCSI target IP from NetApp
      - ``iscsiadm -m node -l`` to login to all discovered targets
 
 2. Persistent iSCSI Configuration:
 
    - Set ``node.startup = automatic`` in ``/etc/iscsi/iscsid.conf``
    - Add frontend NFS mounts to ``/etc/fstab``
 
+3. Multipath Configuration:
+
+   - Update ``/etc/multipath.conf`` to use something like the following:
+
+   .. code-block::
+
+      defaults {
+          user_friendly_names yes
+          find_multipaths yes
+      }
+
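For reference, applying and verifying these persistent iSCSI and multipath settings might look roughly like the following sketch; the exact reload command can vary between distributions and multipath-tools versions:

.. code-block::

   # mark already-discovered targets to log in automatically at boot
   iscsiadm -m node -o update -n node.startup -v automatic
   # reload multipathd so the new /etc/multipath.conf is picked up
   multipathd reconfigure
   # check that the NetApp LUNs are aggregated into multipath devices
   multipath -ll
   # the target IQN reported for the active sessions is presumably the
   # value used later for the NETAPP_TARGET datastore attribute
   iscsiadm -m session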
 OpenNebula Configuration
 ================================================================================
 
@@ -122,6 +109,8 @@ Template parameters:
 +-----------------------+-------------------------------------------------+
 | ``NETAPP_IGROUP``     | Initiator group UUID                            |
 +-----------------------+-------------------------------------------------+
+| ``NETAPP_TARGET``     | iSCSI Target name                               |
++-----------------------+-------------------------------------------------+
 
 Example template:

@@ -139,6 +128,7 @@ Example template:
    NETAPP_SVM = "c9dd74bc-8e3e-47f0-b274-61be0b2ccfe3"
    NETAPP_AGGREGATES = "280f5971-3427-4cc6-9237-76c3264543d5"
    NETAPP_IGROUP = "27702521-68fb-4d9a-9676-efa3018501fc"
+   NETAPP_TARGET = "iqn.1993-08.org.debian:01:1234"
 
    $ onedatastore create netapp_system.ds
    ID: 101
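If the *maximum capacity limit* was not configured on the SVM (see the ONTAP Setup section above), the ``DATASTORE_CAPACITY_CHECK`` attribute could be appended to the newly created datastore. A minimal sketch, using the datastore ID 101 from the output above and a hypothetical template file name:

.. code-block::

   $ cat > capacity.tmpl <<'EOF'
   DATASTORE_CAPACITY_CHECK = "no"
   EOF
   $ onedatastore update -a 101 capacity.tmpl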
@@ -173,6 +163,8 @@ Template parameters:
 +-----------------------+-------------------------------------------------+
 | ``NETAPP_IGROUP``     | Initiator group UUID                            |
 +-----------------------+-------------------------------------------------+
+| ``NETAPP_TARGET``     | iSCSI Target name                               |
++-----------------------+-------------------------------------------------+
 
 Example template:

@@ -189,6 +181,7 @@ Example template:
    NETAPP_SVM = "c9dd74bc-8e3e-47f0-b274-61be0b2ccfe3"
    NETAPP_AGGREGATES = "280f5971-3427-4cc6-9237-76c3264543d5"
    NETAPP_IGROUP = "27702521-68fb-4d9a-9676-efa3018501fc"
+   NETAPP_TARGET = "iqn.1993-08.org.debian:01:1234"
 
    $ onedatastore create netapp_image.ds
    ID: 102
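A quick end-to-end check of the image datastore is to register a test image in it; the image name and source path below are placeholders, with the datastore ID 102 taken from the output above:

.. code-block::

   $ oneimage create -d 102 --name test-img --path /var/tmp/test.qcow2
   $ oneimage list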
@@ -206,28 +199,39 @@ Storage architecture details:
 
 - **Operations**:
 
-  - Non-persistent: FlexClone
+  - Non-persistent: FlexClone, then split
   - Persistent: Rename
 
-Symbolic links from the system datastore will be created for each virtual machine disk by the frontend and shared via NFS with the compute nodes.
-
-.. important:: The system datastore requires a shared filesystem (e.g., NFS mount from frontend to nodes) for device link management and VM metadata distribution.
-
-
-Additional Configuration
-================================================================================
-
-+-----------------------+-------------------------------------------------+
-| Attribute             | Description                                     |
-+=======================+=================================================+
-| ``NETAPP_MULTIPATH``  | ``yes`` or ``no``, Default: ``yes``             |
-|                       | Set to ``no`` to disable multipath              |
-+-----------------------+-------------------------------------------------+
+Symbolic links from the system datastore will be created for each virtual machine on its host once the LUNs have been mapped.
 
+.. note:: The minimum size for a NetApp Volume is 20MB, so any disk smaller than that will result in a 20MB Volume; however, the LUN inside will be the correct size.
 
 System Considerations
 ================================================================================
 
 Occasionally, under network interruptions or if a volume is deleted directly from NetApp, the iSCSI connection may drop or fail. This can cause the system to hang on a ``sync`` command, which in turn may lead to OpenNebula operation failures on the affected host. Although the driver is designed to manage these issues automatically, it's important to be aware of these potential iSCSI connection challenges.
 
+Here are a few tips to get these cleaned up:
+
+- If you just have extra devices left over from some failures, running ``rescan_scsi_bus.sh -r -m`` may help to remove them.
+- If the entire multipath setup is left over, running ``multipath -f <multipath_device>`` may help to remove it; be very careful to run this on the correct multipath device.
+
 .. note:: This behavior stems from the inherent complexities of iSCSI connections and is not exclusive to OpenNebula or NetApp.
+
+If the devices persist, there are some steps to examine the issue:
+
+1. Run ``dmsetup ls --tree`` or ``lsblk`` to see whether the mapped devices are still connected to the block devices. You may see devices not attached to a mapper entry in ``lsblk``.
+2. For the devices that are not connected in ``lsblk`` and that you know are *not your own devices* (e.g. ``/dev/sda`` is often the root device with the OS), run ``echo 1 > /sys/bus/scsi/devices/sdX/device/delete``, where sdX is each device that was involved in the multipath.
+3. Once those disk devices are gone, you may have leftover device mapper entries that you can often remove by running ``dmsetup remove /dev/mapper/<device_name>``.
+4. If the devices cannot be removed, you can double-check that no process is still using them with ``fuser -v $(realpath /dev/mapper/<device_name>)``.
+
+   - If you see the kernel is using it as swap, you can remove it by running ``swapoff /dev/mapper/<device_name>`` and then ``dmsetup remove /dev/mapper/<device_name>``.
+   - If you see another process using it, examine and if necessary kill the process, then run ``dmsetup remove /dev/mapper/<device_name>``.
+   - If you are unable to kill the process, or there is nothing visibly using the mapper entry, then run the following commands:
+
+     1. Run ``dmsetup suspend /dev/mapper/<device_name>``
+     2. Run ``dmsetup wipe_table /dev/mapper/<device_name>``
+     3. Run ``dmsetup resume /dev/mapper/<device_name>``
+     4. Run ``dmsetup remove /dev/mapper/<device_name>``
+
+This should take care of most of the I/O lockups you may see due to some failures. Please contact the OpenNebula Support team if you need additional assistance with this.
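Put together, that last-resort sequence could look roughly as follows for a single stale mapper entry; ``<device_name>`` is a placeholder for the leftover multipath map:

.. code-block::

   DEV=/dev/mapper/<device_name>   # stale multipath map left over after a failure
   fuser -v "$(realpath $DEV)"     # confirm nothing is still using it
   dmsetup suspend "$DEV"
   dmsetup wipe_table "$DEV"
   dmsetup resume "$DEV"
   dmsetup remove "$DEV"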
