The Infinia Container Storage Interface (CSI) driver provides a CSI interface that Container Orchestrators (COs) use to manage the lifecycle of volumes on an Infinia cluster over the NVMe-oF protocol.
Infinia Version | Infinia CSI Block driver version |
---|---|
>= 2.2 | >= v1.2 |
K8S Version | Infinia CSI Block driver version |
---|---|
Kubernetes >=1.22 | >= v1.0.1 |
All releases are available at https://github.com/DDNStorage/infinia-csi-driver/releases
Feature | Feature Status | CSI Driver Version | Kubernetes Version | Implemented |
---|---|---|---|---|
Static Provisioning | GA | >= v1.0.1 | >=1.22 | yes |
Dynamic Provisioning | GA | >= v1.0.1 | >=1.22 | yes |
RW mode | GA | >= v1.0.1 | >=1.22 | yes |
RO mode | GA | >= v1.0.1 | >=1.22 | yes |
Raw block device | GA | >= v1.0.1 | >=1.22 | yes |
StorageClass Secrets | Beta | >= v1.0.1 | >=1.22 | yes |
Expand volume | GA | >= v1.2.0 | >=1.22 | yes |
- The Kubernetes cluster must allow privileged pods; this flag must be set on both the API server and the kubelet:
--allow-privileged=true
- The following feature gates are required on both the API server and the kubelet:
--feature-gates=VolumeSnapshotDataSource=true,VolumePVCDataSource=true,ExpandInUsePersistentVolumes=true,ExpandCSIVolumes=true,ExpandPersistentVolumes=true,CSINodeInfo=true
The "GA after" column below lists the last Kubernetes release in which the feature gate can still be used.
Feature | GA after |
---|---|
VolumePVCDataSource | 1.21 |
VolumeSnapshotDataSource | 1.22 |
CSINodeInfo | 1.22 |
ExpandCSIVolumes | 1.26 |
ExpandInUsePersistentVolumes | 1.26 |
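On clusters old enough to still need these gates, they can be set through the kubelet configuration file and the API server command line. The sketch below assumes a kubeadm-style layout; file paths may differ on your distribution:

```yaml
# /var/lib/kubelet/config.yaml (kubelet configuration fragment; sketch)
featureGates:
  VolumeSnapshotDataSource: true
  VolumePVCDataSource: true
  ExpandCSIVolumes: true
  ExpandInUsePersistentVolumes: true
  ExpandPersistentVolumes: true
  CSINodeInfo: true

# /etc/kubernetes/manifests/kube-apiserver.yaml: add the matching flag
# to the kube-apiserver command line:
# - --feature-gates=VolumeSnapshotDataSource=true,VolumePVCDataSource=true,ExpandCSIVolumes=true,ExpandInUsePersistentVolumes=true,ExpandPersistentVolumes=true,CSINodeInfo=true
```

Restart the kubelet after editing its configuration; the API server static pod is restarted automatically when its manifest changes.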
- Mount propagation must be enabled: the Docker daemon for the cluster must allow shared mounts.
- Clone the driver repository:
git clone -b <version> https://github.com/DDNStorage/infinia-csi-driver.git
- Prepare the Kubernetes host(s): install the NVMe tools and enable the NVMe/TCP kernel module:
apt -y install linux-modules-extra-$(uname -r)
apt -y install nvme-cli
modprobe nvme-tcp
- Edit the deploy/kubernetes/red-csi-driver-block-config.yaml file. Driver configuration example:
accounts:
  clu1/red/csiAccount: # [required] config section key is the path to a service account: <cluster>/<tenant>/<service account name>
    apis:
      - https://<Infinia API IP or FQDN>:443 # [required] Infinia cluster REST API endpoint(s)
    password: 1234 # [required] Infinia cluster REST API password
  clu1/otherTenant/csiAccount:
    apis:
      - https://10.3.4.4:443 # [required] Infinia cluster REST API endpoint(s)
    password: 1234 # [required] Infinia cluster REST API password
Note: the full list of available configuration parameters is in the Defaults and params section.
- Create Kubernetes namespace:
kubectl create namespace red-block-csi
- Create Kubernetes secret from the file:
kubectl create secret generic red-csi-driver-block-config --from-file=deploy/kubernetes/red-csi-driver-block-config.yaml -n red-block-csi
- Register driver to Kubernetes:
kubectl apply -f deploy/kubernetes/red-csi-driver-block.yaml
- Installation is done
To install the Chart into your Kubernetes cluster, run the commands below from the root of the https://github.com/DDNStorage/infinia-csi-driver.git repository.
- Prepare the RED cluster configuration in the ./deploy/charts/red-csi-driver-block/values.yaml file. Specify the RED cluster API endpoint(s), user(s), and password(s) in the config section of the file using the following schema:
config:
  secretName: red-csi-driver-block-config
  accounts:
    clu1/red/csiAccount:
      apis:
        - https://<IP or FQDN>:443 # [required] RED REST API endpoint(s)
      password: <PASSWORD> # [required] RED REST API password
This creates a secret based on the configuration, in the same namespace as specified during helm install.
- Run the installation:
helm install --create-namespace --namespace red-block-csi red-csi-driver-block ./deploy/charts/red-csi-driver-block
- After the installation succeeds, you can get the status of the Chart:
helm status --namespace red-block-csi red-csi-driver-block
Default | Config | Parameter | Description |
---|---|---|---|
- | apis | - | List of Infinia API entrypoints |
- | password | - | Infinia API user password |
- | zone | - | Zone to match topology.kubernetes.io/zone |
- | insecureSkipVerify | - | TLS certificate check is skipped when true (default: 'true') |
[] | default_instance_ids | instances | Infinia cluster instance IDs to expose |
4420 | default_data_port | dataport | Volume expose port |
- | owneruid | owneruid | Custom user UID (numeric) |
- | groupuid | groupuid | Custom group UID (numeric) |
- | perms | perms | Custom permissions (octal) |
- | - | account | Configuration account path (cluster/tenant/serviceAccountName) |
- | - | service | Infinia service path (subtenant/serviceName) |
Note: all default parameters (the Default column) may be overwritten in the Infinia CSI driver configuration or by StorageClass- or PV-specific configuration parameters.
Note: the default_instance_ids CSI driver config parameter is []int; it provides the default instance allocation for volume expose.
Note: the instances parameter is a string with a ',' delimiter. It may change when the Distribution policy is implemented.
Note: owneruid and groupuid must be defined together, as numeric values in string representation.
Note: perms must be an octal value in string representation.
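As a sketch, the owneruid/groupuid/perms and instances parameters from the table above could be set per StorageClass like this (the name, account, and service values are illustrative; note the string representation throughout):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: red-block-csi-sc-custom-perms   # illustrative name
provisioner: block.csi.red.ddn.com
parameters:
  account: "clu1/red/csiAccount"   # service account path <cluster/tenant/serviceAccountName>
  service: "red/csiService"        # service path <subtenant/serviceName>
  owneruid: "1000"   # numeric UID as a string; must be set together with groupuid
  groupuid: "1000"   # numeric GID as a string
  perms: "0770"      # octal permissions as a string
  instances: "1,2"   # comma-delimited instance IDs
  dataport: "4420"   # volume expose port (default 4420)
```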
Storage class secrets can be used to override config values, or to avoid using the config file/secret at all. This is a convenient way to use multiple service accounts with the driver without having to change the config.
List of storage class secret parameters:
Parameter | Required | Description |
---|---|---|
apis | yes | Comma separated list of api endpoints for service account |
password | yes | Password for service account |
insecureSkipVerify | no | Defines whether self-signed certificates should be allowed. Default is true |
First, create the secret
kubectl create -n red-block-csi secret generic red-csi-multi-tenancy1 --from-literal=apis=https://10.10.1.11:443,https://10.10.1.13:443 --from-literal=password=12341234
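The same secret can also be declared as a manifest instead of using kubectl create (a sketch using the same illustrative endpoints and password):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: red-csi-multi-tenancy1
  namespace: red-block-csi
type: Opaque
stringData:
  apis: https://10.10.1.11:443,https://10.10.1.13:443  # comma-separated API endpoints
  password: "12341234"
  # insecureSkipVerify: "true"  # optional; allow self-signed certificates (default true)
```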
Now we can create a storage class that will use the secret
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: red-block-csi-driver-sc-with-secret
provisioner: block.csi.red.ddn.com
parameters:
  account: c1/red/csi-multi-tenancy
  service: red/csi-multi-tenancy
  csi.storage.k8s.io/provisioner-secret-name: red-csi-multi-tenancy1
  csi.storage.k8s.io/provisioner-secret-namespace: red-block-csi
  csi.storage.k8s.io/controller-expand-secret-name: red-csi-multi-tenancy1
  csi.storage.k8s.io/controller-expand-secret-namespace: red-block-csi
  csi.storage.k8s.io/node-stage-secret-name: red-csi-multi-tenancy1
  csi.storage.k8s.io/node-stage-secret-namespace: red-block-csi
  csi.storage.k8s.io/node-publish-secret-name: red-csi-multi-tenancy1
  csi.storage.k8s.io/node-publish-secret-namespace: red-block-csi
---
Apply the storage class manifest:
kubectl apply -f examples/kubernetes/sc-with-secret.yaml
Now we can create a PVC and a pod as usual
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: red-block-csi-driver-pvc-nginx-dynamic-mount-sc-secret
spec:
  storageClassName: red-block-csi-driver-sc-with-secret
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx-dynamic-mount-volume
spec:
  containers:
    - image: nginx
      imagePullPolicy: IfNotPresent
      name: nginx
      ports:
        - containerPort: 80
          protocol: TCP
      volumeMounts:
        - mountPath: /mountedDisk
          name: red-block-csi-driver-data
  volumes:
    - name: red-block-csi-driver-data
      persistentVolumeClaim:
        claimName: red-block-csi-driver-pvc-nginx-dynamic-mount-sc-secret
        readOnly: false
kubectl apply -f examples/kubernetes/pvc-pod-sc-with-secret.yaml
To update an already existing Infinia Block CSI driver configuration stored in a Kubernetes secret:
kubectl create secret generic red-csi-driver-block-config --save-config --dry-run=client --from-file=deploy/kubernetes/red-csi-driver-block-config.yaml -n red-block-csi -o yaml | kubectl apply -f -
Note: the driver checks the configuration for changes periodically (every 3 seconds), but secret-to-host propagation can take some additional time.
Note: do not delete a secret's configuration section while volumes related to that section still exist. Delete the volumes first.
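For example, a new service-account section could be appended to deploy/kubernetes/red-csi-driver-block-config.yaml before re-applying the secret (the second account below is illustrative):

```yaml
accounts:
  clu1/red/csiAccount:               # existing section; leave in place while its volumes exist
    apis:
      - https://<Infinia API IP or FQDN>:443
    password: 1234
  clu1/newTenant/csiAccount:         # newly added section (illustrative path and values)
    apis:
      - https://10.3.4.5:443
    password: 1234
```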
Configuring multiple controller volume replicas: edit deploy/kubernetes/red-csi-driver-block.yaml and change the following line in the controller service config:
kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: red-block-csi-controller
spec:
  serviceName: red-block-csi-controller-service
  replicas: 1 # Change this to 2 or more.
Infinia CSI driver's pods should be running after installation:
$ kubectl get pods
red-block-csi-controller-0 4/4 Running 0 23h
red-block-csi-controller-1 4/4 Running 0 23h
red-block-csi-node-6cmsj 2/2 Running 0 23h
red-block-csi-node-wcrgk 2/2 Running 0 23h
red-block-csi-node-xtmgv 2/2 Running 0 23h
Storage classes provide the capability to define parameters per StorageClass instead of using the config values (see Defaults and params).
This is useful for flexibility while using the same driver: for example, each StorageClass can create volumes with its own parameters.
A couple of possible use cases:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: red-block-csi-driver-static-target-tg
provisioner: block.csi.red.ddn.com
allowVolumeExpansion: true
parameters:
  account: "clu1/red/csiAccount"
  service: "red/csiService"
Where:
- account - exact Infinia service account path (<cluster/tenant/serviceAccountName>)
- service - exact Infinia service path (<subtenant/serviceName>)
Full list of default values and parameters
NOTE: accessModes: ReadWriteMany cannot be used with volumeMode: Filesystem.
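A multi-writer volume would therefore use raw block mode instead. A sketch of such a PVC, under the assumption that the driver allows ReadWriteMany for raw block volumes (the claim and StorageClass names are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: red-block-csi-pvc-raw-block       # illustrative name
spec:
  storageClassName: red-block-csi-driver-sc  # assumes an existing StorageClass for this driver
  accessModes:
    - ReadWriteMany     # RWX is not supported with volumeMode: Filesystem
  volumeMode: Block     # raw block device; consumed via volumeDevices in the pod spec
  resources:
    requests:
      storage: 1Gi
```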
For dynamic volume provisioning, the administrator needs to set up a StorageClass pointing to the driver.
In this case, Kubernetes generates the volume name automatically (for example, pvc-red-cfc67950-fe3c-11e8-a3ca-005056b857f8).
The default driver configuration may be overwritten in the parameters section:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: red-csi-driver-block-sc-nginx-dynamic
provisioner: block.csi.red.ddn.com
mountOptions: # list of options for `mount -o ...` command
  # - noatime
#- matchLabelExpressions: # use the following lines to configure topology by zones
#  - key: topology.kubernetes.io/zone
#    values:
#      - us-east
parameters:
  account: "clu1/red/csiAccount" # [REQUIRED] exact Infinia service account path (<cluster/tenant/serviceAccountName>)
  service: "red/csiService" # [REQUIRED] exact Infinia service path (<subtenant/serviceName>)
Name | Description | Example |
---|---|---|
account | Infinia CSI driver configuration key, as well as the path to the service account | clu1/red/csiAccount |
service | Exact Infinia service path (<subtenant/serviceName>) | red/csiService |
Run Nginx pod with dynamically provisioned volume:
kubectl apply -f examples/kubernetes/nginx-dynamic-volume.yaml
# to delete this pod:
kubectl delete -f examples/kubernetes/nginx-dynamic-volume.yaml
The driver can use already existing Infinia volumes that have exports (automatic export will be added soon). In this case, a StorageClass, PersistentVolume, and PersistentVolumeClaim should be configured.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: red-csi-driver-block-sc-nginx-persistent
provisioner: block.csi.red.ddn.com
mountOptions: # list of options for `mount -o ...` command
  # - noatime
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: red-csi-driver-block-pv-nginx-persistent
  labels:
    name: red-csi-driver-block-pv-nginx-persistent
spec:
  storageClassName: red-csi-driver-block-sc-nginx-persistent
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 1Gi
  csi:
    driver: block.csi.red.ddn.com
    volumeHandle: clu1/red/csiAccount:red/csiService/volumeName
    # parameters map
    volumeAttributes:
      #instances: '1,2,3'
      #mountOptions: # list of options for `mount` command
      #  - noatime
CSI Parameters:
Name | Description | Example |
---|---|---|
driver | Installed Infinia CSI block driver name | block.csi.red.ddn.com |
volumeHandle | CSI VolumeID [cluster/tenant/serviceAccountName:subtenant/serviceName/volumeName] | clu1/red/csiAccount:red/csiService/vol1 |
volumeAttributes | CSI driver parameters map (see Defaults and params) | |
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: red-csi-driver-block-pvc-nginx-persistent
spec:
  storageClassName: red-csi-driver-block-sc-nginx-persistent
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      # to create a 1-1 relationship for pod - persistent volume, use unique labels
      name: red-csi-driver-block-pv-nginx-persistent
Run the nginx server using the PersistentVolume.
Note: these Infinia objects MUST exist before static volume usage:
- service account: cluster/tenant/serviceAccount
- service: cluster/tenant/subtenant/serviceName
- volume: cluster/tenant/subtenant/dataset/volume
kubectl apply -f examples/kubernetes/nginx-persistent-volume.yaml
# to delete this pod:
kubectl delete -f examples/kubernetes/nginx-persistent-volume.yaml
The StorageClass must have allowVolumeExpansion: true
set:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: red-csi-driver-block-sc-nginx-dynamic
provisioner: block.csi.red.ddn.com
allowVolumeExpansion: true
parameters:
  account: "clu1/red/csiAccount" # [REQUIRED] exact Infinia service account path (<cluster/tenant/serviceAccountName>)
  service: "red/csiService" # [REQUIRED] exact Infinia service path (<subtenant/serviceName>)
Create a PVC using this StorageClass:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: red-csi-driver-block-pvc-nginx-dynamic-expand
spec:
  storageClassName: red-csi-driver-block-sc-nginx-dynamic
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 1Gi
Create a pod that uses this PVC:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-dynamic-expand-volume
spec:
  containers:
    - image: nginx
      imagePullPolicy: IfNotPresent
      name: nginx
      ports:
        - containerPort: 80
          protocol: TCP
      volumeMounts:
        - mountPath: /mountedDisk
          name: red-block-csi-driver-data
  volumes:
    - name: red-block-csi-driver-data
      persistentVolumeClaim:
        claimName: red-csi-driver-block-pvc-nginx-dynamic-expand
        readOnly: false
Delete pod before expansion:
kubectl delete pod nginx-dynamic-expand-volume
To expand the volume, edit the PVC to request more storage:
kubectl patch pvc red-csi-driver-block-pvc-nginx-dynamic-expand --type=json -p='[{"op": "replace", "path": "/spec/resources/requests/storage", "value": "5Gi"}]'
Now recreate the pod that uses this PVC.
Verify the expansion:
kubectl get pvc red-csi-driver-block-pvc-nginx-dynamic-expand
Verify the filesystem size inside the pod:
kubectl exec -it nginx-dynamic-expand-volume -- df -h /mountedDisk
Using the same files as for installation:
# delete driver
kubectl delete -f deploy/kubernetes/red-csi-driver-block.yaml
# delete secret
kubectl delete secret red-csi-driver-block-config -n red-block-csi
If you want to delete your Chart, use this command
helm uninstall -n red-block-csi red-csi-driver-block
If you want to upgrade your Chart to a different RED CSI driver release:
- Change the driver.tag value in the ./deploy/charts/red-csi-driver-block/values.yaml file
- Apply the command:
helm upgrade --namespace red-block-csi red-csi-driver-block ./deploy/charts/red-csi-driver-block
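For the first step, the relevant values.yaml fragment might look like this (the tag value is illustrative):

```yaml
# ./deploy/charts/red-csi-driver-block/values.yaml (fragment; sketch)
driver:
  tag: "v1.2.0"   # illustrative release tag; set to the desired driver version
```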
- Increase the Quota of the Dataset
Run the following command to update the dataset quota:
redcli dataset update <dataset> -t <tenant> -s <subtenant> -b <new_quota_size>
- <dataset> – Name of the dataset
- <tenant> – Tenant name
- <subtenant> – Subtenant name
- <new_quota_size> – New quota size (e.g., 100Gi)
- Remount the Volume to the Pod
To apply the changes, delete the existing pod and recreate it.
- Delete the existing pod:
kubectl delete pod <pod_name>
- <pod_name> – Name of the pod
Do NOT delete the Persistent Volume (PV) or Persistent Volume Claim (PVC) during pod deletion to prevent data loss. The pod can be safely deleted and recreated without affecting stored data.
- Recreate the pod using the YAML manifest:
kubectl apply -f <pod_manifest.yaml>
- <pod_manifest.yaml> – Path to the YAML file defining the pod
After these steps, the pod should be running with the updated storage quota.
- Show installed drivers:
kubectl get csidrivers
kubectl describe csidrivers
- Error:
MountVolume.MountDevice failed for volume "pvc-ns-<...>" : driver name block.csi.red.ddn.com not found in the list of registered CSI drivers
Make sure the kubelet is configured with --root-dir=/var/lib/kubelet, otherwise update the paths in the driver yaml file (all requirements).
- "VolumeSnapshotDataSource" feature gate is disabled:
vim /var/lib/kubelet/config.yaml
# featureGates:
#   VolumeSnapshotDataSource: true
vim /etc/kubernetes/manifests/kube-apiserver.yaml
# - --feature-gates=VolumeSnapshotDataSource=true
- Driver logs:
kubectl logs --all-containers $(kubectl get pods | grep red-block-csi-controller | awk '{print $1}') -f
kubectl logs --all-containers $(kubectl get pods | grep red-block-csi-node | awk '{print $1}') -f
- Show termination message in case driver failed to run:
kubectl get pod red-csi-block-controller-0 -o go-template="{{range .status.containerStatuses}}{{.lastState.terminated.message}}{{end}}"
- Configure Docker to trust insecure registries:
# add `{"insecure-registries":["10.3.199.92:5000"]}` to /etc/docker/daemon.json:
vim /etc/docker/daemon.json
service docker restart