Commit d102fbd

F OpenNebula/engineering#333: purestorage docs

Signed-off-by: Neal Hansen <[email protected]>
1 parent d49a5dc commit d102fbd

1 file changed: 308 additions, 0 deletions
---
title: "PureStorage FlashArray SAN Datastore (EE)"
linkTitle: "PureStorage FlashArray - Native (EE)"
weight: "6"
---

OpenNebula’s **Pure Storage FlashArray SAN Datastore** delivers production-grade, native control of FlashArray block storage, from provisioning through cleanup, directly from OpenNebula. This integration exposes the full lifecycle of FlashArray Volumes, Snapshots, and Clones, and automates host connectivity via Pure’s host/host-group model with reliable iSCSI and multipath handling. All communication with the array uses authenticated HTTPS against the FlashArray REST API. This datastore driver is part of OpenNebula Enterprise Edition (EE).

### Key Benefits

With the native Pure driver, OpenNebula users gain the performance consistency of FlashArray’s always-thin, metadata-driven architecture. Pure’s zero-copy snapshots and clones complete instantly, without impacting write amplification or introducing snapshot-tree latency penalties typical of host-side copy-on-write systems. Under mixed 4k/8k and fsync-heavy workloads, FlashArray maintains flat latency profiles even with deep snapshot histories, while LVM-thin commonly exhibits early degradation as CoW pressure increases. The result is higher, steadier IOPS and predictable latency for virtual machine disks at scale.

| Area | Benefit | Description |
|------|---------|-------------|
| **Automation** | Full lifecycle control | End-to-end creation, cloning, resizing, renaming, and deletion of FlashArray volumes directly from OpenNebula. |
| **Efficiency** | Instant, thin snapshots and clones | Pure’s metadata-only snapshots allow immediate, zero-copy cloning for persistent and non-persistent VMs alike. |
| **Performance** | Latency-stable I/O path | FlashArray’s architecture keeps read/write latency flat even as snapshot chains grow; multipath iSCSI is configured automatically per host. |
| **Reliability** | Synchronous REST orchestration | Operations use FlashArray’s synchronous REST API with explicit error handling and safe sequencing for volume, snapshot, and host mapping tasks. |
| **Data Protection** | Incremental SAN-snapshot backups | Block-level incremental backups are generated by comparing FlashArray snapshot pairs via raw device attachment; no guest agents required. |
| **Security** | HTTPS control path | All FlashArray communication uses authenticated, encrypted HTTPS REST calls. |
| **Scalability** | Simplified host-group mappings | Safe concurrent attach/detach operations across hosts using deterministic LUN IDs and predictable multipath layout. |

### Supported Pure Storage Native Functionality

| FlashArray Feature | Supported | Notes |
|--------------------|-----------|-------|
| **Zero-Copy Volume Clone** | Yes | Pure clones are metadata-only and complete instantly. |
| **Snapshot (manual)** | Yes | Created and deleted directly from OpenNebula; mapped 1:1 to FlashArray snapshots. |
| **Snapshot restore** | Yes | Volume overwrite-from-snapshot supported via the REST API. |
| **Snapshot retention/policies** | No | FlashArray snapshot schedules exist, but OpenNebula does not manage array-side policies; all snapshots remain under OpenNebula’s control. |
| **Incremental backups (SAN snapshot diff)** | Yes | Uses the FlashArray Volume Diff API to gather block differences, then copies the changed data. |
| **Host Management** | Yes | Hosts are automatically created and mapped as needed. |
| **Multipath I/O** | Yes | Fully orchestrated; automatic detection, resize, and removal of maps. |
| **Data encryption (at-rest)** | Yes | Supported transparently by the array (always-on AES-XTS); not managed by OpenNebula. |
| **Array replication (Protection Groups / ActiveCluster)** | No (planned) | Not yet supported; may be added in a future release. |
| **QoS limits (bandwidth / IOPS)** | No | Not currently exposed through the datastore driver. |
| **Stretched clusters (ActiveCluster pods)** | No | Supported by FlashArray, but not orchestrated by OpenNebula. |

## Limitations and Unsupported Features

While the Pure Storage FlashArray integration delivers full VM disk lifecycle management and the core SAN operations required by OpenNebula, it is deliberately scoped to **primary datastore provisioning** via **iSCSI block devices**. Several advanced FlashArray protection features and VMware-specific capabilities are intentionally not surfaced through this driver.

{{< alert title="Important" color="warning" >}}
This integration targets block-level provisioning for OpenNebula environments.
It does not expose replication, asynchronous protection groups, or VMware-exclusive workflows (e.g., vVols or VAAI primitives).
{{< /alert >}}

| Category | Unsupported Feature | Rationale / Alternative |
|----------|---------------------|-------------------------|
| **Replication & DR** | Protection Groups / ActiveCluster | Planned for future releases; can be managed externally on the FlashArray. |
| **NAS protocols** | NFS / SMB | Driver focuses on iSCSI block storage only. |
| **Array-managed automatic snapshots** | Automated snapshot schedules | OpenNebula requires full control over the snapshot lifecycle; array policies must remain disabled for OpenNebula-managed volumes. |
| **Storage QoS / Performance tiers** | Bandwidth / IOPS limits | FlashArray supports QoS, but these controls are not integrated into the driver. |
| **Storage efficiency analytics** | Deduplication & compression metrics | Calculated internally by FlashArray; not displayed or consumed by OpenNebula. |
| **Encryption management** | Per-volume encryption toggling | FlashArray encryption is always-on and appliance-managed; no OpenNebula API exposure. |
| **Advanced VMware features** | VAAI offloads, Storage DRS, vVols | VMware-specific APIs, not applicable to OpenNebula. |
| **Multi-instance sharing** | Shared datastore IDs | Not supported; each OpenNebula instance must own its datastore definitions uniquely. Use the `PUREFA_SUFFIX` attribute when several OpenNebula instances share one array. |
| **Synchronous Replication Topologies** | ActiveCluster stretch, pod failover | May be deployed at the array infrastructure level but is not orchestrated by OpenNebula. |

## PureStorage FlashArray Setup

OpenNebula ships a set of datastore and transfer manager drivers to register an existing PureStorage FlashArray SAN. These drivers use the PureStorage FlashArray API to create volumes, which are presented to Virtual Machines as disks over the iSCSI interface. Both the Image and System datastores must use the same PureStorage array and identical datastore configurations, because volumes are either cloned or renamed depending on the image persistence type: persistent images are renamed into the System datastore, while non-persistent images are cloned using FlashArray’s zero-copy clones.

The [PureStorage Linux documentation](https://support.purestorage.com/bundle/m_linux/page/Solutions/Linux/topics/concept/c_installing_and_configuring.html) and this [PureStorage iSCSI Setup with FlashArray Blog Post](https://blog.purestorage.com/purely-technical/iscsi-setup-with-flasharray/) may be useful during this setup.

1. **Verify iSCSI Service Connections**
   - In the FlashArray web interface: **Settings -> Network -> Connectors**
   - Ensure the iSCSI connectors are enabled and note their IP addresses.

2. **Create an API User**
   - In the FlashArray web interface: **Settings -> Access -> Users and Policies**
   - Create a new user with the Storage Admin role; this should provide enough permissions for OpenNebula.
   - Create an API token for this user and note the API key. Leave the expiration date blank to create an indefinite API key.

## Front-end Only Setup

The Front-end requires network access to the PureStorage FlashArray API endpoint:

1. **API Access:**
   - Ensure network connectivity to the PureStorage FlashArray API interface. The datastore will be in an ERROR state if the API is not accessible or cannot be monitored properly. A minimal connectivity check is sketched below.

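   The following is a hedged sketch for verifying reachability and the API token from the Front-end. `<array_ip>` and `<api_token>` are placeholders, the API version in the URL should match your `PUREFA_VERSION`, and the login endpoint shown assumes Purity REST API 2.x behavior:

   ~~~bash
   # POST to the REST login endpoint with the api-token header; an HTTP 200
   # response carrying an x-auth-token header indicates the token is accepted.
   curl -sk -D - -o /dev/null -X POST \
        -H "api-token: <api_token>" \
        "https://<array_ip>/api/2.9/login"
   ~~~
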
## Front-end & Node Setup

Configure both the Front-end and the nodes with persistent iSCSI connections and a multipath configuration, as described in the PureStorage Linux documentation and the iSCSI setup blog post linked above:

1. **iSCSI:**
   - Discover the iSCSI targets on the hosts, then log in to each discovered portal (a sketch follows):
   ~~~bash
   iscsiadm -m discovery -t sendtargets -p <target_ip> # for each iSCSI target IP from the FlashArray
   ~~~

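   A minimal login and verification sketch, using standard `iscsiadm` commands (`<target_ip>` is a placeholder for each FlashArray portal):
   ~~~bash
   iscsiadm -m node -p <target_ip> --login   # establish a session to each discovered portal
   iscsiadm -m session                       # list active iSCSI sessions
   ~~~
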
2. **Persistent iSCSI Configuration:**
   - Set `node.startup = automatic` in `/etc/iscsi/iscsid.conf`
   - Ensure iscsid is started with `systemctl status iscsid`
   - Enable iscsid with `systemctl enable iscsid`

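   A consolidated sketch of the above (assumes `iscsid.conf` already contains a `node.startup` entry you have edited):
   ~~~bash
   grep '^node.startup' /etc/iscsi/iscsid.conf   # should print: node.startup = automatic
   systemctl enable --now iscsid                 # enable at boot and start immediately
   systemctl status iscsid --no-pager
   ~~~
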
3. **Multipath Configuration:**
   Update `/etc/multipath.conf` to something like the following, then reload multipathd as sketched after the configuration block:
   ~~~text
   defaults {
       polling_interval 10
   }

   devices {
       device {
           vendor                "NVME"
           product               "Pure Storage FlashArray"
           path_selector         "queue-length 0"
           path_grouping_policy  group_by_prio
           prio                  ana
           failback              immediate
           fast_io_fail_tmo      10
           user_friendly_names   no
           no_path_retry         0
           features              0
           dev_loss_tmo          60
       }
       device {
           vendor                "PURE"
           product               "FlashArray"
           path_selector         "service-time 0"
           hardware_handler      "1 alua"
           path_grouping_policy  group_by_prio
           prio                  alua
           failback              immediate
           path_checker          tur
           fast_io_fail_tmo      10
           user_friendly_names   no
           no_path_retry         0
           features              0
           dev_loss_tmo          600
       }
   }
   ~~~

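   After saving the file, a quick way to apply and verify the configuration with standard multipath-tools commands (FlashArray LUNs only appear once volumes have been mapped):
   ~~~bash
   systemctl reload multipathd   # or: systemctl restart multipathd
   multipath -ll                 # list multipath devices and their paths
   ~~~
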
## OpenNebula Configuration

Create both datastores with the PureFA (PureStorage FlashArray) drivers, which provide instant cloning and moving capabilities:

- **System Datastore**
- **Image Datastore**

### Create System Datastore

**Template required parameters:**

| Attribute | Description |
|-----------|-------------|
| `NAME` | Datastore name |
| `TYPE` | `SYSTEM_DS` |
| `TM_MAD` | `purefa` |
| `DISK_TYPE` | `BLOCK` |
| `PUREFA_HOST` | PureStorage FlashArray API IP address |
| `PUREFA_API_TOKEN` | API Token key |
| `PUREFA_TARGET` | iSCSI Target name |

**Example template:**

~~~shell
$ cat purefa_system.ds
NAME = "purefa_system"
TYPE = "SYSTEM_DS"
DISK_TYPE = "BLOCK"
TM_MAD = "purefa"
PUREFA_HOST = "10.1.234.56"
PUREFA_API_TOKEN = "01234567-89ab-cdef-0123-456789abcdef"
PUREFA_TARGET = "iqn.1993-08.org.ubuntu:01:1234"

$ onedatastore create purefa_system.ds
ID: 101
~~~

### Create Image Datastore

**Template required parameters:**

| Attribute | Description |
|-----------|-------------|
| `NAME` | Datastore name |
| `TYPE` | `IMAGE_DS` |
| `DS_MAD` | `purefa` |
| `TM_MAD` | `purefa` |
| `DISK_TYPE` | `BLOCK` |
| `PUREFA_HOST` | PureStorage FlashArray API IP address |
| `PUREFA_API_TOKEN` | API Token key |
| `PUREFA_TARGET` | iSCSI Target name |

**Example template:**

~~~shell
$ cat purefa_image.ds
NAME = "purefa_image"
TYPE = "IMAGE_DS"
DISK_TYPE = "BLOCK"
DS_MAD = "purefa"
TM_MAD = "purefa"
PUREFA_HOST = "10.1.234.56"
PUREFA_API_TOKEN = "01234567-89ab-cdef-0123-456789abcdef"
PUREFA_TARGET = "iqn.1993-08.org.ubuntu:01:1234"

$ onedatastore create purefa_image.ds
ID: 102
~~~

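Once both datastores are created, you can confirm from the Front-end that they are monitored correctly (capacity reported, state not ERROR). A quick check using the IDs from the examples above:

~~~shell
$ onedatastore list
$ onedatastore show 101   # system datastore from the example above
$ onedatastore show 102   # image datastore from the example above
~~~
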
### Datastore Optional Attributes

**Template optional parameters:**

| Attribute | Description |
|-----------|-------------|
| `PUREFA_VERSION` | PureStorage FlashArray REST API version (Default: 2.9) |
| `PUREFA_SUFFIX` | Suffix to append to all volume names |

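For example, when several OpenNebula instances share one FlashArray, `PUREFA_SUFFIX` keeps their volume names from colliding. A hypothetical image datastore template for a second instance (all values are placeholders and the suffix string is illustrative):

~~~shell
NAME = "purefa_image_site_b"
TYPE = "IMAGE_DS"
DISK_TYPE = "BLOCK"
DS_MAD = "purefa"
TM_MAD = "purefa"
PUREFA_HOST = "10.1.234.56"
PUREFA_API_TOKEN = "<api_token>"
PUREFA_TARGET = "<target_iqn>"
PUREFA_SUFFIX = "_siteb"
~~~
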
## Datastore Internals

**Storage architecture details:**

- **Images:** Stored as a single Volume in PureStorage FlashArray
- **Naming Convention:**
  - Image datastore: `one_<datastore_id>_<image_id>`
  - System datastore: `one_<vm_id>_disk_<disk_id>`
- **Operations:**
  - Non-persistent: Clone
  - Persistent: Rename

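For example, image 42 registered in Image datastore 102 would be stored as the FlashArray volume `one_102_42`; when a VM with ID 7 uses it as disk 0, the resulting System datastore volume (the renamed persistent volume or the non-persistent clone) would be named `one_7_disk_0` (IDs here are illustrative).
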
Hosts are automatically created in PureStorage using the PureStorage FlashArray API, with a name generated from their hostname.

{{< alert title="Warning" color="warning" >}}
Do NOT change the hostname of your hosts unless no VMs are deployed on that host.
{{< /alert >}}

Symbolic links from the System datastore will be created for each Virtual Machine on its Host once the Volumes have been mapped.

**Backups process details:**

Both Full and Incremental backups are supported by PureStorage FlashArray. For Full backups, a snapshot of the Volume containing the VM disk is taken and attached to the host, where it is converted into a qcow2 image and uploaded to the backup datastore.

Incremental backups are created using the Volume Difference feature of PureStorage FlashArray. This returns a list of block offsets and lengths that have changed since a target snapshot. This list is then used to create a sparse QCOW2 file, which is uploaded to the backup datastore.

{{< alert title="Note" color="success" >}}
You can configure the block size (default and minimum 4096 B / 4 KB) for incremental backups by modifying the file at `/var/tmp/one/etc/tm/san/backup.conf`
{{< /alert >}}

{{< alert title="Warning" color="warning" >}}
The incremental backup feature of PureStorage FlashArray requires the `nbd` kernel module to be loaded and the `nbdfuse` package to be installed on all OpenNebula nodes.
{{< /alert >}}

254+
## System Considerations
255+
256+
Occasionally, under network interruptions or if a volume is deleted directly from PureStorage, the iSCSI connection may drop or fail. This can cause the system to hang on a `sync` command, which in turn may lead to OpenNebula operation failures on the affected Host. Although the driver is designed to manage these issues automatically, it’s important to be aware of these potential iSCSI connection challenges.
257+
258+
You may wish to contact the OpenNebula Support team to assist in this cleanup; however, here are a few advanced tips to clean these up if you are comfortable doing so:
259+
260+
- If you have extra devices from failures leftover, run:
261+
~~~bash
262+
rescan_scsi_bus.sh -r -m
263+
~~~
264+
- If an entire multipath setup remains, run:
265+
~~~bash
266+
multipath -f <multipath_device>
267+
~~~
268+
*Be very careful to target the correct multipath device.*
269+
270+
{{< alert title="Note" color="success" >}}
271+
This behavior stems from the inherent complexities of iSCSI connections and is not exclusive to OpenNebula or PureStorage.
272+
{{< /alert >}}
273+
274+
If devices persist, follow these steps:
275+
276+
1. Run `dmsetup ls --tree` or `lsblk` to see which mapped devices remain. You may see devices not attached to a mapper entry in `lsblk`.
277+
2. For each such device (not your root device), run:
278+
~~~bash
279+
echo 1 > /sys/bus/scsi/devices/sdX/device/delete
280+
~~~
281+
where `sdX` is the device name.
282+
3. Once those devices are gone, remove leftover mapper entries:
283+
~~~bash
284+
dmsetup remove /dev/mapper/<device_name>
285+
~~~
286+
4. If removal fails:
287+
- Check usage:
288+
~~~bash
289+
fuser -v $(realpath /dev/mapper/<device_name>)
290+
~~~
291+
- If it’s being used as swap:
292+
~~~bash
293+
swapoff /dev/mapper/<device_name>
294+
dmsetup remove /dev/mapper/<device_name>
295+
~~~
296+
- If another process holds it, kill the process and retry:
297+
~~~bash
298+
dmsetup remove /dev/mapper/<device_name>
299+
~~~
300+
- If you can’t kill the process or nothing shows up:
301+
~~~bash
302+
dmsetup suspend /dev/mapper/<device_name>
303+
dmsetup wipe_table /dev/mapper/<device_name>
304+
dmsetup resume /dev/mapper/<device_name>
305+
dmsetup remove /dev/mapper/<device_name>
306+
~~~
307+
308+
This should resolve most I/O lockups caused by failed iSCSI operations. Please contact the OpenNebula Support team if you need assistance.
