 NVMe-oF Gateway Requirements
 ============================

-- At least 8 GB of RAM dedicated to each gateway instance
-- It is hightly recommended to dedicate at least four CPU threads / vcores to each
-  gateway. One can work but performance may be below expectations. It is
-  ideal to dedicate servers to NVMe-oF gateway service so that these
-  and other Ceph services do not degrade each other.
-- At minimum a 10 Gb/s network link to the Ceph public network. For best
-  latency and throughput we recommend 25 Gb/s or 100 Gb/s links.
-- Bonding of network links, with an appropriate xmit hash policy, is ideal
-  for high availability. Note that the throughput of a given NVMe-oF client
-  can be no higher than that of a single link within a bond. Thus, if four
-  10 Gb/s links are bonded together on gateway nodes, no one client will
-  realize more than 10 Gb/s throughput. Moreover, remember that Ceph
-  NVMe-oF gateways also communicate with backing OSDs over the public
-  network at the same time, which contends with traffic between clients
-  and gateways. Provision networking generously to avoid congestion and
-  saturation.
-- Provision at least two NVMe-oF gateways in a gateway group, on separate
-  Ceph cluster nodes, for a highly-availability Ceph NVMe/TCP solution.
+- At least 8 GB of RAM dedicated to each NVMe-oF gateway instance
+- We highly recommend dedicating at least four CPU threads or vcores to each
+  NVMe-oF gateway. A setup with only one CPU thread or vcore can work, but
+  performance may be below expectations. It is preferable to dedicate servers
+  to the NVMe-oF gateway service so that these and other Ceph services do not
+  degrade each other.
+- Provide at minimum a 10 Gb/s network link to the Ceph public network. For
+  best latency and throughput, we recommend 25 Gb/s or 100 Gb/s links.
+- Bonding of network links, with an appropriate xmit hash policy, is ideal for
+  high availability. Note that the throughput of a given NVMe-oF client can be
+  no higher than that of a single link within a bond. Thus, if four 10 Gb/s
+  links are bonded together on gateway nodes, no one client will realize more
+  than 10 Gb/s throughput. Remember that Ceph NVMe-oF gateways also communicate
+  with backing OSDs over the public network at the same time, which contends
+  with traffic between clients and gateways. Make sure to provision networking
+  resources generously to avoid congestion and saturation.
+- Provision at least two NVMe-oF gateways in a gateway group, on separate Ceph
+  cluster nodes, for a highly-available Ceph NVMe/TCP solution.
 - Ceph NVMe-oF gateway containers comprise multiple components that communicate
-  with each other. If the nodes running these containers require HTTP/HTTPS
-  proxy configuration to reach container registries or other external resources,
-  these settings may confound this internal communication. If you experience
-  gRPC or other errors when provisioning NVMe-oF gateways, you may need to
-  adjust your proxy configuration.
-
+  with each other. If the nodes that run these containers require HTTP/HTTPS
+  proxy configuration to reach container registries or other external
+  resources, these settings may confound this internal communication. If you
+  experience gRPC or other errors when provisioning NVMe-oF gateways, you may
+  need to adjust your proxy configuration.
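
The bonding recommendation above can be satisfied in several ways. The sketch
below is a minimal, non-persistent example using iproute2 to build an LACP
(802.3ad) bond with a ``layer3+4`` transmit hash policy; the interface names
and address are placeholders, LACP also requires matching switch-side
configuration, and most deployments would instead use their distribution's
persistent network configuration tooling.

.. code-block:: bash

   # Non-persistent sketch: LACP (802.3ad) bond with a layer3+4 xmit hash
   # policy. Interface names and the address below are placeholders.
   ip link add bond0 type bond mode 802.3ad xmit_hash_policy layer3+4
   ip link set eth0 down; ip link set eth0 master bond0
   ip link set eth1 down; ip link set eth1 master bond0
   ip link set bond0 up
   ip addr add 192.0.2.11/24 dev bond0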
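For the two-gateway high-availability recommendation above, a cephadm-managed
cluster can describe the gateway group with a service specification. The
following is only a sketch: the pool, group, and host names are placeholders,
and the exact fields accepted by the ``nvmeof`` service spec vary by Ceph
release, so verify them against the documentation for your version.

.. code-block:: yaml

   # Hypothetical nvmeof service spec: two gateways in one group on two hosts
   service_type: nvmeof
   service_id: rbd_pool.group1
   placement:
     hosts:
       - gw-node-1
       - gw-node-2
   spec:
     pool: rbd_pool
     group: group1

Such a spec would be applied with ``ceph orch apply -i <spec-file>``.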
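If the proxy issue in the final bullet applies, one common mitigation,
assuming the container engine and gateway services pick up proxy settings from
environment variables (which depends on your deployment), is to exempt
loopback and the Ceph public network from proxying via ``NO_PROXY``. The
values below are placeholders, and note that not every tool honors CIDR
notation in ``NO_PROXY``, so you may need to list addresses explicitly.

.. code-block:: bash

   # Placeholder proxy settings: keep gateway-internal gRPC and Ceph public
   # network traffic off the proxy while still reaching external registries
   export HTTP_PROXY=http://proxy.example.com:3128
   export HTTPS_PROXY=http://proxy.example.com:3128
   export NO_PROXY=localhost,127.0.0.1,192.0.2.0/24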