docs_user/modules/con_changes-to-cephFS-via-NFS.adoc (9 additions, 1 deletion)
Before you begin the adoption, review the following information to understand the changes to CephFS through NFS:
* If the {OpenStackShort} {rhos_prev_ver} deployment uses CephFS through NFS as a back end for {rhos_component_storage_file_first_ref}, you cannot directly import the `ceph-nfs` service on the {OpenStackShort} Controller nodes into {rhos_acro} {rhos_curr_ver}. In {rhos_acro} {rhos_curr_ver}, the {rhos_component_storage_file} supports only a clustered NFS service that is managed directly on the {Ceph} cluster. Adoption with the `ceph-nfs` service involves a data path disruption for existing NFS clients.
* On {OpenStackShort} {rhos_prev_ver}, Pacemaker manages the high availability of the `ceph-nfs` service. This service is assigned a Virtual IP (VIP) address that is also managed by Pacemaker. The VIP is typically created on an isolated `StorageNFS` network. The Controller nodes have ordering and colocation constraints established between this VIP, `ceph-nfs`, and the {rhos_component_storage_file_first_ref} share manager service. Before you adopt the {rhos_component_storage_file}, you must adjust the Pacemaker ordering and colocation constraints to separate the share manager service. This establishes `ceph-nfs` with its VIP as an isolated, standalone NFS service that you can decommission after completing the {rhos_acro} adoption. For an illustration of the constraint changes, see the sketch after this list.
* In {Ceph} {CephVernum}, you must deploy a native clustered Ceph NFS service on the {Ceph} cluster by using the Ceph Orchestrator before you adopt the {rhos_component_storage_file}. This NFS service eventually replaces the standalone NFS service from {OpenStackShort} {rhos_prev_ver} in your deployment. When the {rhos_component_storage_file} is adopted into the {rhos_acro} {rhos_curr_ver} environment, it establishes all the existing exports and client restrictions on the new clustered Ceph NFS service. Clients can continue to read and write data on existing NFS shares, and are not affected until the old standalone NFS service is decommissioned. After the old service is decommissioned, you can re-mount the same share from the new clustered Ceph NFS service during a scheduled downtime. For a brief deployment sketch, see the example after this list.
* To ensure that NFS users are not required to make any networking changes to their existing workloads, assign an IP address from the same isolated `StorageNFS` network to the clustered Ceph NFS service. NFS users only need to discover and re-mount their shares by using new export paths. When the adoption is complete, {rhos_acro} users can query the {rhos_component_storage_file} API to list the export locations on existing shares and identify the preferred paths to mount these shares. These preferred paths correspond to the new clustered Ceph NFS service, in contrast to the non-preferred export paths that continue to be displayed until the old isolated, standalone NFS service is decommissioned. For an example query, see the export location listing after this list.
* When you migrate your workloads from the old NFS service, ensure that exports are not consumed from both the old NFS service and the new clustered Ceph NFS service at the same time. Simultaneous access to both services is dangerous because it bypasses the protections for concurrent access that are provided by the NFS protocol. When you migrate a workload to use exports from the new NFS service, migrate the use of each export entirely, so that no part of the workload stays connected to the old NFS service.
* You can no longer control the old Pacemaker-managed `ceph-nfs` service through the {rhos_prev_long} {OpenStackPreviousInstaller} after the control plane adoption is complete. This means that there is no support for updating the NFS Ganesha software or changing its configuration. Although data is protected from server crashes or restarts, high availability and data recovery are limited, and these maintenance issues are no longer visible to the {rhos_component_storage_file}.
* Cloud administrators must ensure a reasonably short window to switch over all end-user workloads to the new NFS service.
* While the old `ceph-nfs` service supported only NFS version 4.1 and later, the new clustered NFS service supports NFS protocol versions 3 and 4.1 and later. Mixing protocol versions on the same export can result in unintended consequences. Mount a given share across all clients by using a consistent NFS protocol version, as shown in the mount example after this list.
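
The following commands are a minimal sketch of the Pacemaker constraint adjustment described above, not a definitive procedure. The resource names (`ceph-nfs`, its VIP resource, and the share manager resource) and the constraint IDs are placeholders that differ per deployment; identify the real names from the `pcs constraint list --full` output on your Controller nodes.

----
# On a Controller node, list all constraints with their IDs to find the
# ordering and colocation constraints that tie the share manager service
# to ceph-nfs and its VIP.
sudo pcs constraint list --full

# Remove those constraints by ID so that ceph-nfs and its VIP remain as an
# isolated, standalone NFS service. The IDs below are placeholders.
sudo pcs constraint remove <ordering-constraint-id>
sudo pcs constraint remove <colocation-constraint-id>
----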
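
As a brief illustration of the clustered NFS deployment, the following Ceph Orchestrator commands sketch one way to create such a service with an ingress VIP on the isolated `StorageNFS` network. The cluster name, placement label, and IP address are assumptions for this example, and exact flags can vary between Ceph releases; follow the procedure linked at the end of this module for the authoritative steps.

----
# Create a clustered NFS service that is managed by the Ceph Orchestrator.
# "cephfs" is an example cluster name; the placement label and the ingress
# virtual IP (taken from the StorageNFS network) are deployment specific.
ceph nfs cluster create cephfs "label:nfs" --ingress --virtual-ip 172.17.5.47

# Verify the new NFS cluster and the address it is reachable on.
ceph nfs cluster info cephfs
----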
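
As a hypothetical example of identifying the preferred export path, a user with an existing share (here named `share-01`) could run the following commands after the adoption:

----
# List all export locations of the share. The entry whose "Preferred"
# column is True points at the new clustered Ceph NFS service; the
# non-preferred entries belong to the old standalone service and
# disappear after it is decommissioned.
manila share-export-location-list share-01

# Optionally inspect a single export location in more detail.
manila share-export-location-show share-01 <export-location-id>
----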
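
The following client-side sketch shows how a user might re-mount a share from the new service with an explicit NFS protocol version, so that all clients of the same export stay on a consistent version. The VIP, export path, and mount point are placeholders.

----
# Unmount the export that is still served by the old standalone ceph-nfs
# service during the scheduled downtime.
sudo umount /mnt/share-01

# Re-mount the same share from the new clustered Ceph NFS service,
# pinning the NFS protocol version (here, NFSv4.1) on every client.
sudo mount -t nfs -o vers=4.1 <storage-nfs-vip>:<preferred-export-path> /mnt/share-01
----
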
For more information on setting up a clustered NFS service, see xref:creating-a-ceph-nfs-cluster_ceph-prerequisites[Creating an NFS Ganesha cluster].