
Commit 2a39c44

Merge pull request #366 from Madhu-1/migration-doc

doc: add migration doc for the csi-operator

2 parents 561cdaa + 321de85

File tree

1 file changed: +105 −0

docs/migration.md
# Ceph-CSI to Ceph-CSI-Operator Migration Guide

This guide provides two migration paths:

1. Migration from a YAML-based Ceph-CSI deployment
2. Migration from a Helm-based Ceph-CSI deployment

Ceph-CSI v3.16+ officially recommends Ceph-CSI-Operator as the supported deployment mechanism.
## Why migrate?

- The operator provides declarative, CRD-based management
- Automated reconciliation and healing
- Cleaner upgrades and version lifecycle management
- Matches Kubernetes best practices
> [!WARNING]
> Important Warning (Read Before Proceeding)

**After removing the existing Ceph-CSI components, new Pods cannot mount PVCs, and new PVCs or VolumeSnapshots cannot be created until the migration is complete.
Existing Pods using the RBD/CephFS kernel mounter will continue to work as long as they are not restarted.
Any Pod that restarts or gets rescheduled before the new CSI driver is running will fail to mount volumes.
Plan maintenance windows accordingly.**
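To gauge the impact before starting, you can list the PersistentVolumes served by the Ceph-CSI drivers. A minimal sketch, assuming the default driver names `rbd.csi.ceph.com` and `cephfs.csi.ceph.com`:

```bash
# List PVs provisioned by the Ceph-CSI drivers, with their bound PVCs
kubectl get pv -o json | jq -r '.items[]
  | select(.spec.csi.driver == "rbd.csi.ceph.com" or .spec.csi.driver == "cephfs.csi.ceph.com")
  | "\(.metadata.name)\t\(.spec.claimRef.namespace)/\(.spec.claimRef.name)"'
```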
> [!WARNING]
> The **Ceph-CSI-Operator does *not* automatically create StorageClasses or VolumeSnapshotClasses**.
>
> Legacy Ceph-CSI Helm charts provided automated creation of these objects, but the operator does not include this functionality.
## Common Preparation Steps (Applies to Both YAML & Helm Tracks)

### Back up the existing Ceph-CSI configuration, if any
```bash
mkdir -p backup/ceph-csi
kubectl get configmap -n ceph-csi -o yaml > backup/ceph-csi/configmap.yaml
kubectl get deployment,daemonset -n ceph-csi -o yaml > backup/ceph-csi/workloads.yaml
kubectl get clusterrole,clusterrolebinding,serviceaccount,role,rolebinding -n ceph-csi -o yaml > backup/ceph-csi/rbac.yaml
kubectl get csidriver -o yaml > backup/ceph-csi/csidriver.yaml
```
**Note:** Replace `ceph-csi` with the namespace where your Ceph-CSI resources are deployed.
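If you are unsure which namespace that is, a quick way to find it (a sketch; adjust the pattern to match your deployment names):

```bash
# Locate the namespace running the Ceph-CSI workloads
kubectl get deployment,daemonset -A | grep -i csi
```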
### Remove Existing Ceph-CSI Components (YAML-based deployments)

```bash
kubectl delete -f backup/ceph-csi/workloads.yaml
kubectl delete -f backup/ceph-csi/csidriver.yaml
kubectl delete -f backup/ceph-csi/rbac.yaml
kubectl delete -f backup/ceph-csi/configmap.yaml
```
Make sure the above YAML files contain only Ceph-CSI resources before issuing the delete; the cluster-scoped backups (CSIDrivers, ClusterRoles) may have captured objects belonging to other components.
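A dry run is a cheap way to review exactly what would be removed:

```bash
# Preview the deletions without applying them
kubectl delete -f backup/ceph-csi/rbac.yaml --dry-run=client
kubectl delete -f backup/ceph-csi/csidriver.yaml --dry-run=client
```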
### Remove Existing Ceph-CSI Helm Release (Helm-based deployments)

```bash
helm uninstall ceph-csi -n ceph-csi
```
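Replace `ceph-csi` with your actual release name and namespace (`helm list -A` shows every installed release). Before moving on, confirm the old components are gone:

```bash
# No Ceph-CSI workloads should remain in the old namespace
kubectl get deployment,daemonset -n ceph-csi
```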
### Install the Ceph-CSI-Operator

Follow the official [Installation Guide](installation.md) to deploy and configure the Ceph-CSI-Operator.

After installing the operator, create the required CRs (Driver, CephConnection, ClientProfile), and ensure their definitions match your previous Ceph-CSI configuration (monitors, pools, etc.).
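For instance, the monitor addresses that previously lived in the Ceph-CSI ConfigMap move into a CephConnection CR. A rough sketch with placeholder monitor addresses; check the installation guide for the full schema:

```yaml
apiVersion: csi.ceph.io/v1
kind: CephConnection
metadata:
  name: ceph-connection
  namespace: ceph-csi-operator-system
spec:
  monitors:
    - 10.0.0.1:6789
    - 10.0.0.2:6789
    - 10.0.0.3:6789
```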
> [!WARNING]
> Important Migration Note for `clusterID` Handling

In legacy deployments, `clusterID` is defined in:

- ConfigMap
- StorageClass
- VolumeSnapshotClass

In operator-based deployments, these must be represented through a ClientProfile CR.
### Requirement: ClientProfile CR

You must create a ClientProfile whose name matches the old `clusterID`, because StorageClasses and VolumeSnapshotClasses reference the ClientProfile name as their `clusterID`.

For example, if the `clusterID` was `ceph-csi`, the ClientProfile CR looks like this:
```yaml
apiVersion: csi.ceph.io/v1
kind: ClientProfile
metadata:
  name: ceph-csi
  namespace: ceph-csi-operator-system
spec:
  cephConnectionRef:
    name: ceph-connection
  ...
```
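Since the operator does not create StorageClasses (see the warning above), recreate yours by hand with `clusterID` pointing at the ClientProfile name. A minimal RBD sketch; the pool and secret names are placeholders, so copy the real values from your backed-up StorageClass:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: ceph-csi  # must match the ClientProfile name
  pool: replicapool    # placeholder: use your existing pool
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: ceph-csi-operator-system
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: ceph-csi-operator-system
  csi.storage.k8s.io/controller-expand-secret-name: csi-rbd-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: ceph-csi-operator-system
reclaimPolicy: Delete
allowVolumeExpansion: true
```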
Once all the Pods are up and running, the migration is complete, and the backup files can be deleted.
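A quick final check, assuming the operator runs in the default `ceph-csi-operator-system` namespace:

```bash
# Operator and driver Pods should all be Running
kubectl get pods -n ceph-csi-operator-system
# The new CSIDriver objects should be registered
kubectl get csidriver
```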
