
Commit 9f86c35

Orit Wasserman, idryomov, and zdover23 committed
doc: Add NVMe-oF gateway documentation
- Add nvmeof-initiator-esx.rst
- Add nvmeof-initiator-linux.rst
- Add nvmeof-initiators.rst
- Add nvmeof-overview.rst
- Add nvmeof-requirements.rst
- Add nvmeof-target-configure.rst
- Add links to rbd-integrations.rst

Co-authored-by: Ilya Dryomov <[email protected]>
Co-authored-by: Zac Dover <[email protected]>
Signed-off-by: Orit Wasserman <[email protected]>
1 parent ed736fa commit 9f86c35

File tree

7 files changed: +354 -0 lines changed

doc/rbd/nvmeof-initiator-esx.rst

Lines changed: 70 additions & 0 deletions
@@ -0,0 +1,70 @@
---------------------------------
NVMe/TCP Initiator for VMware ESX
---------------------------------

Prerequisites
=============

- A VMware ESXi host running VMware vSphere Hypervisor (ESXi) version 7.0U3 or later.
- A deployed Ceph NVMe-oF gateway.
- A Ceph cluster configured for NVMe-oF.
- A subsystem defined in the gateway.

Configuration
=============

The following instructions use the default vSphere web client and ``esxcli``.
A worked example with illustrative values follows the numbered steps.

1. Enable NVMe/TCP on a NIC:

   .. prompt:: bash #

      esxcli nvme fabric enable --protocol TCP --device vmnicN

   Replace ``N`` with the number of the NIC.

2. Tag a VMkernel NIC to permit NVMe/TCP traffic:

   .. prompt:: bash #

      esxcli network ip interface tag add --interface-name vmkN --tagname NVMeTCP

   Replace ``N`` with the ID of the VMkernel interface.

3. Configure the VMware ESXi host for NVMe/TCP:

   #. List the NVMe-oF adapters:

      .. prompt:: bash #

         esxcli nvme adapter list

   #. Discover the NVMe-oF subsystems:

      .. prompt:: bash #

         esxcli nvme fabric discover -a NVME_TCP_ADAPTER -i GATEWAY_IP -p 4420

   #. Connect to the NVMe-oF gateway subsystem:

      .. prompt:: bash #

         esxcli nvme connect -a NVME_TCP_ADAPTER -i GATEWAY_IP -p 4420 -s SUBSYSTEM_NQN

   #. List the NVMe/TCP controllers:

      .. prompt:: bash #

         esxcli nvme controller list

   #. List the NVMe-oF namespaces in the subsystem:

      .. prompt:: bash #

         esxcli nvme namespace list

4. Verify that the initiator has been set up correctly:

   #. From the vSphere client, go to the ESXi host.
   #. On the Storage page, go to the Devices tab.
   #. Verify that the NVMe/TCP disks are listed in the table.
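
Worked example
==============

The following is a minimal sketch of the commands above with illustrative
values. It assumes that the NIC is ``vmnic0``, that the VMkernel interface is
``vmk1``, that the software NVMe/TCP adapter is ``vmhba65``, that the gateway
listens at ``10.0.0.10:4420``, and that the subsystem NQN is
``nqn.2016-06.io.spdk:cnode1``. Substitute the values that apply to your
environment.

.. prompt:: bash #

   esxcli nvme fabric enable --protocol TCP --device vmnic0
   esxcli network ip interface tag add --interface-name vmk1 --tagname NVMeTCP
   esxcli nvme fabric discover -a vmhba65 -i 10.0.0.10 -p 4420
   esxcli nvme connect -a vmhba65 -i 10.0.0.10 -p 4420 -s nqn.2016-06.io.spdk:cnode1

As an additional command-line check, the connected namespaces should also show
up as storage devices (the exact display names vary by ESXi release):

.. prompt:: bash #

   esxcli storage core device list | grep -i nvme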

doc/rbd/nvmeof-initiator-linux.rst

Lines changed: 83 additions & 0 deletions
@@ -0,0 +1,83 @@
==============================
NVMe/TCP Initiator for Linux
==============================

Prerequisites
=============

- Kernel 5.0 or later
- RHEL 9.2 or later
- Ubuntu 24.04 or later
- SLES 15 SP3 or later

Installation
============

1. Install ``nvme-cli``:

   .. prompt:: bash #

      yum install nvme-cli

2. Load the NVMe-oF module:

   .. prompt:: bash #

      modprobe nvme-fabrics

3. Verify that the NVMe/TCP target is reachable:

   .. prompt:: bash #

      nvme discover -t tcp -a GATEWAY_IP -s 4420

4. Connect to the NVMe/TCP target:

   .. prompt:: bash #

      nvme connect -t tcp -a GATEWAY_IP -n SUBSYSTEM_NQN
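
The connection created by ``nvme connect`` does not persist across reboots. If
you want it to be restored automatically, one option is to record the target in
``/etc/nvme/discovery.conf`` and let ``nvme connect-all`` replay it (a sketch
only; file locations and boot-time integration vary by distribution):

.. prompt:: bash #

   echo "--transport=tcp --traddr=GATEWAY_IP --trsvcid=4420" >> /etc/nvme/discovery.conf

.. prompt:: bash #

   nvme connect-all
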
Next steps
==========

Verify that the initiator has been set up correctly:

1. List the NVMe block devices:

   .. prompt:: bash #

      nvme list

2. Create a filesystem on the desired device:

   .. prompt:: bash #

      mkfs.ext4 NVME_NODE_PATH

3. Mount the filesystem:

   .. prompt:: bash #

      mkdir /mnt/nvmeof

   .. prompt:: bash #

      mount NVME_NODE_PATH /mnt/nvmeof

4. List the files in the mounted filesystem:

   .. prompt:: bash #

      ls /mnt/nvmeof

5. Create a text file in the ``/mnt/nvmeof`` directory:

   .. prompt:: bash #

      echo "Hello NVMe-oF" > /mnt/nvmeof/hello.text

6. Verify that the file can be accessed:

   .. prompt:: bash #

      cat /mnt/nvmeof/hello.text
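
When you have finished verifying the setup, you can optionally unmount the
filesystem and tear down the NVMe/TCP connection:

.. prompt:: bash #

   umount /mnt/nvmeof

.. prompt:: bash #

   nvme disconnect -n SUBSYSTEM_NQN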

doc/rbd/nvmeof-initiators.rst

Lines changed: 16 additions & 0 deletions
@@ -0,0 +1,16 @@
.. _configuring-the-nvmeof-initiators:

====================================
Configuring the NVMe-oF Initiators
====================================

- `NVMe/TCP Initiator for Linux <../nvmeof-initiator-linux>`_

- `NVMe/TCP Initiator for VMware ESX <../nvmeof-initiator-esx>`_

.. toctree::
   :maxdepth: 1
   :hidden:

   Linux <nvmeof-initiator-linux>
   VMware ESX <nvmeof-initiator-esx>

doc/rbd/nvmeof-overview.rst

Lines changed: 48 additions & 0 deletions
@@ -0,0 +1,48 @@
.. _ceph-nvmeof:

======================
Ceph NVMe-oF Gateway
======================

The NVMe-oF Gateway presents an NVMe-oF target that exports
RADOS Block Device (RBD) images as NVMe namespaces. The NVMe-oF protocol allows
clients (initiators) to send NVMe commands to storage devices (targets) over a
TCP/IP network, enabling clients without native Ceph client support to access
Ceph block storage.

Each NVMe-oF gateway consists of an `SPDK <https://spdk.io/>`_ NVMe-oF target
with ``bdev_rbd`` and a control daemon. Ceph’s NVMe-oF gateway can be used to
provision a fully integrated block-storage infrastructure with all the features
and benefits of a conventional Storage Area Network (SAN).

.. ditaa::

   Cluster Network (optional)
   +-------------------------------------------+
   | | | |
   +-------+ +-------+ +-------+ +-------+
   | | | | | | | |
   | OSD 1 | | OSD 2 | | OSD 3 | | OSD N |
   | {s}| | {s}| | {s}| | {s}|
   +-------+ +-------+ +-------+ +-------+
   | | | |
   +--------->| | +---------+ | |<----------+
   : | | | RBD | | | :
   | +----------------| Image |----------------+ |
   | Public Network | {d} | |
   | +---------+ |
   | |
   | +--------------------+ |
   | +--------------+ | NVMeoF Initiators | +--------------+ |
   | | NVMe‐oF GW | | +-----------+ | | NVMe‐oF GW | |
   +-->| RBD Module |<--+ | Various | +-->| RBD Module |<--+
   | | | | Operating | | | |
   +--------------+ | | Systems | | +--------------+
   | +-----------+ |
   +--------------------+

.. toctree::
   :maxdepth: 1

   Requirements <nvmeof-requirements>
   Configuring the NVMe-oF Target <nvmeof-target-configure>
   Configuring the NVMe-oF Initiators <nvmeof-initiators>

doc/rbd/nvmeof-requirements.rst

Lines changed: 14 additions & 0 deletions
@@ -0,0 +1,14 @@
============================
NVMe-oF Gateway Requirements
============================

We recommend that you provision at least two NVMe/TCP gateways on different
nodes to implement a highly available Ceph NVMe/TCP solution.
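
For example, the ``cephadm`` orchestrator can place gateway daemons on two
hosts with a single command (shown here as a sketch; ``NVME-OF_POOL_NAME``,
``host01``, and ``host02`` are placeholders, and the full workflow is described
in `Installing and Configuring NVMe-oF Targets <../nvmeof-target-configure>`_):

.. prompt:: bash #

   ceph orch apply nvmeof NVME-OF_POOL_NAME --placement="host01, host02"
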
We recommend at a minimum a single 10Gb Ethernet link in the Ceph public
network for the gateway. For hardware recommendations, see
:ref:`hardware-recommendations`.

.. note:: On the NVMe-oF gateway, the memory footprint is a function of the
   number of mapped RBD images and can grow to be large. Plan memory
   requirements accordingly, based on the number of RBD images to be mapped.
doc/rbd/nvmeof-target-configure.rst

Lines changed: 122 additions & 0 deletions
@@ -0,0 +1,122 @@
==========================================
Installing and Configuring NVMe-oF Targets
==========================================

Traditionally, block-level access to a Ceph storage cluster has been limited to
(1) QEMU and ``librbd`` (which is a key enabler for adoption within OpenStack
environments), and (2) the Linux kernel client. Starting with the Ceph Reef
release, block-level access has been expanded to offer standard NVMe/TCP
support, allowing wider platform usage and potentially opening new use cases.

Prerequisites
=============

- Red Hat Enterprise Linux/CentOS 8.0 (or newer); Linux kernel v4.16 (or newer)

- A working Ceph Reef or later storage cluster, deployed with ``cephadm``

- NVMe-oF gateways, which can either be colocated with OSD nodes or run on dedicated nodes

- Separate network subnets for NVMe-oF front-end traffic and Ceph back-end traffic

Explanation
===========

The Ceph NVMe-oF gateway is both an NVMe-oF target and a Ceph client. Think of
it as a "translator" between Ceph's RBD interface and the NVMe-oF protocol. The
Ceph NVMe-oF gateway can run on a standalone node or be colocated with other
daemons, for example on a Ceph Object Storage Daemon (OSD) node. When
colocating the Ceph NVMe-oF gateway with other daemons, ensure that sufficient
CPU and memory are available. The steps below explain how to install and
configure the Ceph NVMe/TCP gateway for basic operation.
Installation
============

Complete the following steps to install the Ceph NVMe-oF gateway:

#. Create a pool in which the gateway's configuration can be managed:

   .. prompt:: bash #

      ceph osd pool create NVME-OF_POOL_NAME

#. Enable RBD on the NVMe-oF pool:

   .. prompt:: bash #

      rbd pool init NVME-OF_POOL_NAME

#. Deploy the NVMe-oF gateway daemons on a specific set of nodes:

   .. prompt:: bash #

      ceph orch apply nvmeof NVME-OF_POOL_NAME --placement="host01, host02"
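
   Once the service has been applied, you can confirm that the gateway daemons
   are running (the same command is used again below to retrieve the gateway
   name):

   .. prompt:: bash #

      ceph orch ps | grep nvme
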
Configuration
=============

Download the ``nvmeof-cli`` container before first use:

.. prompt:: bash #

   podman pull quay.io/ceph/nvmeof-cli:latest
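
The steps below invoke this container repeatedly. If you plan to run many
commands interactively, you may optionally wrap the invocation in a shell alias
(a convenience sketch only; ``nvmeof-cli`` is a local alias name, not an
installed command, and ``GATEWAY_IP`` is a placeholder for the gateway's
address):

.. prompt:: bash #

   alias nvmeof-cli='podman run -it quay.io/ceph/nvmeof-cli:latest --server-address GATEWAY_IP --server-port 5500'

With the alias in place, ``nvmeof-cli subsystem list`` is equivalent to the
full ``podman run ... subsystem list`` form used in the steps below.
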
#. Create an NVMe subsystem:

   .. prompt:: bash #

      podman run -it quay.io/ceph/nvmeof-cli:latest --server-address GATEWAY_IP --server-port 5500 subsystem add --subsystem SUBSYSTEM_NQN

   The subsystem NQN is a user-defined string, for example
   ``nqn.2016-06.io.spdk:cnode1``.

#. Define the IP port on the gateway that will process the NVMe/TCP commands and I/O:

   a. On the install node, get the NVMe-oF gateway name:

      .. prompt:: bash #

         ceph orch ps | grep nvme

   b. Define the IP port for the gateway:

      .. prompt:: bash #

         podman run -it quay.io/ceph/nvmeof-cli:latest --server-address GATEWAY_IP --server-port 5500 listener add --subsystem SUBSYSTEM_NQN --gateway-name GATEWAY_NAME --traddr GATEWAY_IP --trsvcid 4420

#. Get the host NQN (NVMe Qualified Name) of each initiator host. On Linux hosts, run:

   .. prompt:: bash #

      cat /etc/nvme/hostnqn

   On VMware ESXi hosts, run:

   .. prompt:: bash #

      esxcli nvme info get

#. Allow the initiator host to connect to the newly created NVMe subsystem:

   .. prompt:: bash #

      podman run -it quay.io/ceph/nvmeof-cli:latest --server-address GATEWAY_IP --server-port 5500 host add --subsystem SUBSYSTEM_NQN --host "HOST_NQN1, HOST_NQN2"

#. List all subsystems configured in the gateway:

   .. prompt:: bash #

      podman run -it quay.io/ceph/nvmeof-cli:latest --server-address GATEWAY_IP --server-port 5500 subsystem list

#. Create a new NVMe namespace (this maps an RBD image; see the example after
   these steps if you still need to create the image):

   .. prompt:: bash #

      podman run -it quay.io/ceph/nvmeof-cli:latest --server-address GATEWAY_IP --server-port 5500 namespace add --subsystem SUBSYSTEM_NQN --rbd-pool POOL_NAME --rbd-image IMAGE_NAME

#. List all namespaces in the subsystem:

   .. prompt:: bash #

      podman run -it quay.io/ceph/nvmeof-cli:latest --server-address GATEWAY_IP --server-port 5500 namespace list --subsystem SUBSYSTEM_NQN
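
The ``namespace add`` step above refers to an RBD image by pool and name. If
the image does not exist yet, it can be created first with a plain ``rbd``
command (a sketch; the pool, image name, and size are illustrative):

.. prompt:: bash #

   rbd create POOL_NAME/IMAGE_NAME --size 10G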

doc/rbd/rbd-integrations.rst

Lines changed: 1 addition & 0 deletions
@@ -14,3 +14,4 @@
     CloudStack <rbd-cloudstack>
     LIO iSCSI Gateway <iscsi-overview>
     Windows <rbd-windows>
+    NVMe-oF Gateway <nvmeof-overview>
