This guide provides a simple example of how to use the Lustre CSI driver to import and connect to an existing Lustre instance that has been pre-provisioned by an administrator.
If you haven't already provisioned a Google Cloud Managed Lustre instance, please follow the instructions to create one. Make sure to specify the `gke-support-enabled` flag when creating the instance.
Before applying the PersistentVolume (PV) and PersistentVolumeClaim (PVC) manifest, update `./examples/pre-prov/preprov-pvc-pv.yaml` with the correct values:
- `volumeHandle`: Update with the correct project ID, zone, and Lustre instance name.
- `storage`: This value should match the size of the underlying Lustre instance.
- `volumeAttributes`:
  - `ip` must point to the Lustre instance IP.
  - `filesystem` must be the Lustre instance's filesystem name.
  - Alternatively, you can use `mountpoint`, which combines both (e.g., `10.108.80.4@tcp:/fs`). If `mountpoint` is provided, it will override `ip` and `filesystem`.
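To make the fields above concrete, here is a minimal sketch of what a pre-provisioned PV/PVC pair could look like. The driver name, `volumeHandle` format, and all placeholder values are illustrative assumptions — take the authoritative structure from the shipped `./examples/pre-prov/preprov-pvc-pv.yaml`:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: preprov-pv
spec:
  storageClassName: ""
  capacity:
    storage: 9000Gi                 # must match the size of the underlying Lustre instance
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: lustre.csi.storage.gke.io          # assumed driver name; check the shipped manifest
    volumeHandle: <project-id>/<zone>/<instance-name>   # placeholder; use the format from the example manifest
    volumeAttributes:
      ip: 10.108.80.4               # Lustre instance IP
      filesystem: fs                # Lustre filesystem name
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: preprov-pvc
spec:
  storageClassName: ""
  volumeName: preprov-pv            # bind directly to the pre-provisioned PV
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 9000Gi
```

Setting `storageClassName: ""` on both objects keeps dynamic provisioning out of the picture, so the PVC binds only to this statically created PV.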
Apply the example PV and PVC configuration:
```bash
kubectl apply -f ./examples/pre-prov/preprov-pvc-pv.yaml
kubectl get pvc
```

Expected output:

```
NAME          STATUS   VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS   AGE
preprov-pvc   Bound    preprov-pv   9000Gi     RWX                           76s
```

Deploy the example Pod:

```bash
kubectl apply -f ./examples/pre-prov/preprov-pod.yaml
```

It may take up to a few minutes for the Pod to reach the Running state:
```bash
kubectl get pods
```

Expected output:

```
NAME         READY   STATUS    RESTARTS   AGE
lustre-pod   1/1     Running   0          11s
```

Once you've completed your experiment, delete the Pod and PVC.
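For reference, the Pod applied above consumes the PVC like any other volume. The image, command, and mount path below are illustrative assumptions — the actual manifest is in `./examples/pre-prov/preprov-pod.yaml`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: lustre-pod
spec:
  containers:
    - name: app
      image: busybox                # illustrative image
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: lustre-volume
          mountPath: /data          # illustrative mount path
  volumes:
    - name: lustre-volume
      persistentVolumeClaim:
        claimName: preprov-pvc      # the PVC bound to the pre-provisioned PV
```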
Note: The PV was created with `persistentVolumeReclaimPolicy: "Retain"`, meaning that deleting the PVC will not remove the PV or the underlying Lustre instance.
```bash
kubectl delete pod lustre-pod
kubectl delete pvc preprov-pvc
```

After deleting the Pod and PVC, the PV should report a Released state:
```bash
kubectl get pv
```

Expected output:

```
NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM                 STORAGECLASS   REASON   AGE
preprov-pv   9000Gi     RWX            Retain           Released   default/preprov-pvc                           2m28s
```

To reuse the PV, remove the claim reference (`claimRef`):
```bash
kubectl patch pv preprov-pv --type json -p '[{"op": "remove", "path": "/spec/claimRef"}]'
```

The PV should now report an Available status, making it ready to be bound to a new PVC:
```bash
kubectl get pv
```

Expected output:

```
NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
preprov-pv   9000Gi     RWX            Retain           Available                                   19m
```

If the PV is no longer needed, delete it.
Note: Deleting the PV does not remove the underlying Lustre instance.
```bash
kubectl delete pv preprov-pv
```