You will need some form of persistent storage on your cluster. A storage class is a general definition that allows containers to create persistent volumes on demand. You only need one type of storage, but there are many options. CephFS is a great option that, crucially, allows multiple pods to write to the same volume at the same time, which is essential for fully scalable pods; Ceph RBD does not allow this. NFS is also a great option if you want to share data from another system. Choose whichever option you need: I use CephFS for my containers and NFS to share my media from my Unraid NAS to my Kubernetes cluster.
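The multi-writer capability mentioned above corresponds to the `ReadWriteMany` access mode. As a sketch of how a workload would request it (the claim name, namespace, and size are placeholders; the `cephfs` class name matches the one created later in this guide):

```yaml
# Hypothetical claim demonstrating ReadWriteMany, the access mode
# CephFS supports but Ceph RBD does not.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data        # placeholder name
  namespace: default
spec:
  accessModes:
    - ReadWriteMany        # multiple pods can mount this volume read-write at once
  storageClassName: cephfs # the storage class created below
  resources:
    requests:
      storage: 10Gi
```

An RBD-backed claim would use `ReadWriteOnce` instead, limiting the volume to pods on a single node.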
Create Namespace for CephFS

```shell
kubectl create ns cephfs
```

Get Ceph Admin Key from Ceph server
```shell
ceph auth get-key client.admin
```

Create a CephFS Secret
```shell
kubectl create secret generic ceph-secret-admin --from-literal=key="<client.admin key>" -n cephfs
```

Apply CephFS Provisioner
```shell
kubectl apply -f storage/Ceph-FS-Provisioner.yaml
```

Apply CephFS Storage Class
```shell
kubectl apply -f storage/Ceph-FS-StorageClass.yaml
```

Patch to make it the default class
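The contents of `Ceph-FS-StorageClass.yaml` depend on which provisioner you deployed. As a rough sketch only, a class for the external CephFS provisioner might look like the following; the provisioner name and monitor address are assumptions you should match to your own manifests:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cephfs
provisioner: ceph.com/cephfs          # assumed: the external cephfs-provisioner
parameters:
  monitors: 10.0.0.1:6789             # placeholder: your Ceph monitor address(es)
  adminId: admin
  adminSecretName: ceph-secret-admin  # the secret created in the step above
  adminSecretNamespace: cephfs
reclaimPolicy: Delete
```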
```shell
kubectl patch storageclass cephfs -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
```

Apply Ceph RBD Provisioner
```shell
kubectl create -f storage/Ceph-RBD-Provisioner.yaml -n kube-system
```

Get Ceph Admin Key from Ceph server
```shell
ceph auth get-key client.admin
```

Create Ceph RBD secret
```shell
kubectl create secret generic ceph-secret \
  --type="kubernetes.io/rbd" \
  --from-literal=key='<client.admin key>' \
  --namespace=kube-system
```

Create a separate pool for Ceph RBD
```shell
ceph --cluster ceph osd pool create kube 128 128
ceph --cluster ceph auth get-or-create client.kube mon 'allow r' osd 'allow rwx pool=kube'
```

Get Ceph Admin Key for the created pool
```shell
ceph --cluster ceph auth get-key client.kube
```

Create a Ceph RBD secret for the kube pool

```shell
kubectl create secret generic ceph-secret-kube \
  --type="kubernetes.io/rbd" \
  --from-literal=key='<client.kube key>' \
  --namespace=kube-system
```

Create Ceph RBD Storage Class
```shell
kubectl create -f storage/Ceph-RBD-StorageClass.yaml
```
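For reference, an RBD storage class typically ties the `kube` pool and the two secrets created above together. This is a sketch, not the exact contents of `Ceph-RBD-StorageClass.yaml`; the monitor address is a placeholder, and the `provisioner` value must match the provisioner you actually deployed (for example `ceph.com/rbd` for the external one):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd
provisioner: kubernetes.io/rbd        # assumed; adjust to your deployed provisioner
parameters:
  monitors: 10.0.0.1:6789             # placeholder: your Ceph monitor address(es)
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: kube-system
  pool: kube                          # the pool created above
  userId: kube
  userSecretName: ceph-secret-kube
  userSecretNamespace: kube-system
  imageFormat: "2"
  imageFeatures: layering
```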