---
sidebar_label: CSI Protocol
---

# CSI Protocol

:::warning
Ozone CSI support is still in the alpha phase, and buckets can be mounted only via a third-party S3-compatible Fuse implementation (such as Goofys).
Fuse over S3 can provide only limited performance compared to a native Fuse file system.
In the long term, Ozone may offer a custom way to mount buckets (via Fuse, NFS, or another mechanism) that provides a better user experience.
Until then, CSI is recommended only if you can live with this limitation and have tested your use case carefully.
:::

The `Container Storage Interface` (CSI) enables storage providers (SP) to develop a plugin once and have it work across a number of container orchestration (CO) systems such as Kubernetes or YARN.

For more information about CSI, see the [CSI spec](https://github.com/container-storage-interface/spec/blob/master/spec.md).

CSI defines a simple gRPC protocol with three services (Identity, Controller, Node). It specifies how the Container Orchestrator can request the creation of a new storage space or the mounting of the newly created storage, but it doesn't define how the storage is mounted.
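
The three services can be inspected with a generic gRPC client. This is only a sketch under stated assumptions: the `grpcurl` tool, the socket path `/var/lib/csi/csi.sock`, and the plugin having gRPC reflection enabled are all assumptions, not something the Ozone distribution guarantees.

```shell
# List the gRPC services a CSI plugin exposes over its unix domain socket.
# Assumptions: grpcurl is installed, the socket path matches your DaemonSet
# spec, and the plugin has gRPC server reflection enabled.
grpcurl -plaintext -unix /var/lib/csi/csi.sock list
# A CSI v1 plugin is expected to register:
#   csi.v1.Controller
#   csi.v1.Identity
#   csi.v1.Node
```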

By default, the Ozone CSI service uses an S3 Fuse driver ([goofys](https://github.com/kahing/goofys)) to mount the created Ozone bucket. Implementation of other mounting options, such as a dedicated NFS server or a native Fuse driver, is a work in progress.
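
Under the hood, the mount the node plugin performs is roughly equivalent to running goofys by hand against the S3 Gateway. A sketch, where the gateway endpoint, bucket name, and mount point are illustrative values, not names from the Ozone manifests:

```shell
# Mount an Ozone bucket through the S3 Gateway with goofys.
# Endpoint, bucket name, and mount point below are illustrative only.
goofys --endpoint http://s3g-0.s3g:9878 \
       bucket1 /var/lib/kubelet/ozone-mount-point
```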

Ozone CSI is an implementation of the CSI specification; it makes it possible to use Ozone as a storage volume for a container.

## Getting started

First of all, you need an Ozone cluster with an S3 Gateway, and its OM RPC port and S3 Gateway port must be visible to the CSI pod:
the CSI server accesses the OM to create or delete buckets, and it publishes volumes by creating mount points to the S3 Gateway
through goofys.

If you don't have an Ozone cluster on Kubernetes, see [Kubernetes](../../02-quick-start/01-installation/02-kubernetes.md) to create one. Use the resources from `kubernetes/examples/ozone`, where you can find all the required Kubernetes resources to run the cluster together with the dedicated Ozone CSI daemon (check `kubernetes/examples/ozone/csi`).
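
Before deploying the CSI resources, you can verify that the OM RPC port and the S3 Gateway port are reachable from inside the cluster. The service names below follow the `kubernetes/examples/ozone` resources and the default ports (OM RPC 9862, S3 Gateway 9878); adjust them if your manifests differ.

```shell
# Probe OM RPC (9862) and S3 Gateway HTTP (9878) from a throwaway pod.
# Service names (om-0.om, s3g-0.s3g) are assumptions based on the examples.
kubectl run nettest --rm -it --image=busybox --restart=Never -- \
  sh -c 'nc -z om-0.om 9862 && nc -z s3g-0.s3g 9878 && echo ports reachable'
```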
| 33 | + |
| 34 | +Now, create the CSI related resources by execute the follow command. |
| 35 | + |
| 36 | +```bash |
| 37 | +kubectl create -f /ozone/kubernetes/examples/ozone/csi |
| 38 | +``` |
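
Once the resources are created, you can check that the driver is registered and its pods are running. The pod names depend on the example manifests, so the `grep` pattern below is just a guess:

```shell
# The CSIDriver object should now be registered with the cluster.
kubectl get csidriver
# Watch the CSI controller/node pods come up (names depend on the manifests).
kubectl get pods | grep csi
```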

## Create pv-test and view the result

Create the pv-test related resources by executing the following command:

```bash
kubectl create -f /ozone/kubernetes/examples/ozone/pv-test
```
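
You can confirm that the PersistentVolumeClaim from the pv-test resources was bound; a `Bound` status means the CSI controller has provisioned the backing Ozone bucket:

```shell
# Check that the claim is Bound and a PersistentVolume was provisioned.
kubectl get pvc
kubectl get pv
```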

Attach to the `scm-0` pod and put a key into the `/s3v/pvc*` bucket:

```bash
kubectl exec -it scm-0 -- bash
[hadoop@scm-0 ~]$ ozone sh bucket list s3v
[ {
  "metadata" : { },
  "volumeName" : "s3v",
  "name" : "pvc-861e2d8b-2232-4cd1-b43c-c0c26697ab6b",
  "storageType" : "DISK",
  "versioning" : false,
  "creationTime" : "2020-06-11T08:19:47.469Z",
  "encryptionKeyName" : null
} ]
[hadoop@scm-0 ~]$ ozone sh key put /s3v/pvc-861e2d8b-2232-4cd1-b43c-c0c26697ab6b/A LICENSE.txt
```
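
Still inside `scm-0`, you can double-check that the key landed in the bucket. The bucket name comes from the PVC id above and will differ in your cluster:

```shell
# List the keys in the provisioned bucket; the key "A" put above
# should appear. Substitute your own pvc-* bucket name.
ozone sh key list /s3v/pvc-861e2d8b-2232-4cd1-b43c-c0c26697ab6b
```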

Now, let's forward the port of the `ozone-csi-test-webserver-7cbdc5d65c-h5mnn` pod to view the UI in a web browser:

```bash
kubectl port-forward ozone-csi-test-webserver-7cbdc5d65c-h5mnn 8000:8000
```
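
With the port-forward running, the test webserver can also be checked from the command line before opening the browser:

```shell
# Fetch the webserver's listing of the mounted bucket; the key "A"
# uploaded above should appear in the response.
curl -s http://localhost:8000/
```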

Finally, open `http://localhost:8000/` to see the result:

![pv-test-webui](05-csi.png)