Commit fdefd95
Merge pull request #34426 from lpettyjo/OSDOCS-2189
OSDOCS-2189: AWS EFS CSI Operator (TP)
2 parents 3783d01 + aba6b11 commit fdefd95

12 files changed: +438 −8 lines

_topic_map.yml

Lines changed: 2 additions & 0 deletions
@@ -1161,6 +1161,8 @@ Topics:
      File: persistent-storage-csi-migration
    - Name: AWS Elastic Block Store CSI Driver Operator
      File: persistent-storage-csi-ebs
    - Name: AWS Elastic File System CSI Driver Operator
      File: persistent-storage-csi-aws-efs
    - Name: Azure Disk CSI Driver Operator
      File: persistent-storage-csi-azure
    - Name: Azure Stack Hub CSI Driver Operator

modules/persistent-storage-csi-drivers-supported.adoc

Lines changed: 1 addition & 0 deletions
@@ -17,6 +17,7 @@ The following table describes the CSI drivers that are installed with {product-t

|CSI driver |CSI volume snapshots |CSI cloning |CSI resize

|AWS EBS | ✅ | - | ✅
|AWS EFS (Tech Preview) | - | - | -
|Google Cloud Platform (GCP) persistent disk (PD) (Tech Preview)| ✅ | - | ✅
|Microsoft Azure Disk (Tech Preview) | ✅ | ✅ | ✅
|Microsoft Azure Stack Hub | - | - | -
Lines changed: 71 additions & 0 deletions
// Module included in the following assemblies:
//
// * storage/container_storage_interface/persistent-storage-csi-aws-efs.adoc

[id="csi-dynamic-provisioning-aws-efs_{context}"]
= Dynamic provisioning for AWS EFS

The AWS EFS CSI driver supports a different form of dynamic provisioning than other CSI drivers. It provisions new PVs as subdirectories of a pre-existing EFS volume. The PVs are independent of each other, but they all share the same EFS volume. When that volume is deleted, all PVs provisioned from it are deleted too.
The EFS CSI driver creates an AWS access point for each such subdirectory. Due to AWS access point limits, you can only dynamically provision 120 PVs from a single `StorageClass`/EFS volume.

[IMPORTANT]
====
`PVC.spec.resources` is not enforced by EFS.

In the example below, you request 5 GiB of space. However, the created PV is limitless and can store any amount of data (such as petabytes). A broken application, or even a rogue application, can cause significant expenses when it stores too much data on the volume.

Monitoring EFS volume sizes in AWS is strongly recommended.
====

.Prerequisites

* xref:../../storage/container_storage_interface/persistent-storage-csi-aws-efs.adoc#efs-create-volume_persistent-storage-csi-aws-efs[Created AWS EFS volume(s).]

.Procedure

To enable dynamic provisioning:

. Create a `StorageClass` as follows:
+
[source,yaml]
----
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap <1>
  fileSystemId: fs-a5324911 <2>
  directoryPerms: "700" <3>
  gidRangeStart: "1000" <4>
  gidRangeEnd: "2000" <4>
  basePath: "/dynamic_provisioning" <5>
----
<1> `provisioningMode` must be `efs-ap` to enable dynamic provisioning.
<2> `fileSystemId` must be the ID of the EFS volume created manually above.
<3> `directoryPerms` is the default permission of the root directory of the volume. In this case, the volume is accessible only by the owner.
<4> `gidRangeStart` and `gidRangeEnd` set the range of POSIX group IDs (GIDs) that are used to set the GID of the AWS access point. If not specified, the default range is 50000-7000000. Each provisioned volume, and thus AWS access point, is assigned a unique GID from this range.
<5> `basePath` is the directory on the EFS volume that is used to create dynamically provisioned volumes. In this case, a PV is provisioned as "/dynamic_provisioning/<random uuid>" on the EFS volume. Only the subdirectory is mounted to pods that use the PV.
+
[NOTE]
====
A cluster admin can create several `StorageClasses`, each using a different EFS volume.
====

. Create a PVC (or StatefulSet or Template) as usual, referring to the `StorageClass` created above:
+
[source,yaml]
----
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test
spec:
  storageClassName: efs-sc
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
----
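
To sanity-check dynamic provisioning, you can apply both objects and watch the claim bind. This is a sketch, assuming the `StorageClass` and PVC above are saved as `sc.yaml` and `pvc.yaml`:

[source,terminal]
----
$ oc apply -f sc.yaml -f pvc.yaml

$ oc get pvc test

$ oc get pv
----

The PVC should reach the `Bound` phase shortly after a pod uses it; the resulting PV corresponds to a new access point and subdirectory on the EFS volume.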
Lines changed: 50 additions & 0 deletions
// Module included in the following assemblies:
//
// * storage/persistent_storage/persistent-storage-csi-aws-efs.adoc

[id="efs-create-volume_{context}"]
= Creating and configuring access to EFS volumes in AWS

This procedure explains how to create and configure EFS volumes in AWS so that you can use them in {product-title}.

.Prerequisites

* AWS account credentials

.Procedure

To create and configure access to an EFS volume in AWS:

. On the AWS console, open https://console.aws.amazon.com/efs.

. Click *Create file system*:
+
* Enter a name for the file system.

* For *Virtual Private Cloud (VPC)*, select your {product-title} cluster's virtual private cloud (VPC).

* Accept default settings for all other selections.

. Wait for the volume and mount targets to finish being fully created:

.. Go to https://console.aws.amazon.com/efs#/file-systems.

.. Click your volume, and on the *Network* tab wait for all mount targets to become available (approximately 1-2 minutes).

. On the *Network* tab, copy the security group ID. You will need it in the next step.

. Go to https://console.aws.amazon.com/ec2/v2/home#SecurityGroups, and find the security group used by the EFS volume.

. On the *Inbound rules* tab, click *Edit inbound rules*, and then add a new rule with the following settings to allow {product-title} nodes to access EFS volumes:
+
* *Type*: NFS

* *Protocol*: TCP

* *Port range*: 2049

* *Source*: Custom/IP address range of your nodes (for example, "10.0.0.0/16")
+
This step allows {product-title} to use NFS ports from the cluster.

. Save the rule.
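
The same setup can be sketched with the AWS CLI instead of the console. The security group ID and CIDR below are placeholders, and unlike the console, the CLI does not create mount targets automatically (use `aws efs create-mount-target` for each subnet):

[source,terminal]
----
$ aws efs create-file-system --tags Key=Name,Value=my-efs-volume

$ aws efs describe-mount-targets --file-system-id fs-a5324911

$ aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 2049 \
    --cidr 10.0.0.0/16
----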
Lines changed: 14 additions & 0 deletions
// Module included in the following assemblies:
//
// * storage/persistent_storage/persistent-storage-csi-aws-efs.adoc

[id="efs-security_{context}"]
= AWS EFS security

The following information is important for AWS EFS security.

When using access points, for example, by using dynamic provisioning as described earlier, Amazon automatically replaces GIDs on files with the GID of the access point. In addition, EFS considers the user ID, group ID, and secondary group IDs of the access point when evaluating file system permissions. EFS ignores the NFS client's IDs. For more information about access points, see https://docs.aws.amazon.com/efs/latest/ug/efs-access-points.html.

As a consequence, EFS volumes silently ignore FSGroup; {product-title} is not able to replace the GIDs of files on the volume with FSGroup. Any pod that can access a mounted EFS access point can access any file on it.

Unrelated to this, encryption in transit is enabled by default. For more information, see https://docs.aws.amazon.com/efs/latest/ug/encryption-in-transit.html.
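
To illustrate, the following hypothetical pod spec sets `securityContext.fsGroup`; on an EFS-backed volume this setting has no effect, because the access point GID applies instead. The pod and claim names are illustrative:

[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: efs-app
spec:
  securityContext:
    fsGroup: 5555 # silently ignored on EFS volumes; the access point GID is used
  containers:
  - name: app
    image: registry.access.redhat.com/ubi8/ubi
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: test
----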
Lines changed: 40 additions & 0 deletions
// Module included in the following assemblies:
//
// * storage/persistent_storage/persistent-storage-csi-aws-efs.adoc

[id="efs-create-static-pv_{context}"]
= Creating static PVs with AWS EFS

It is possible to use an AWS EFS volume as a single PV without any dynamic provisioning. The whole volume is mounted to pods.

.Prerequisites

* xref:../../storage/container_storage_interface/persistent-storage-csi-aws-efs.adoc#efs-create-volume_persistent-storage-csi-aws-efs[Created AWS EFS volume(s).]

.Procedure

* Create the PV using the following YAML file:
+
[source,yaml]
----
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity: <1>
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-ae66151a <2>
    volumeAttributes:
      encryptInTransit: "false" <3>
----
<1> `spec.capacity` does not have any meaning and is ignored by the CSI driver. It is used only when binding to a PVC. Applications can store any amount of data to the volume.
<2> `volumeHandle` must be the same ID as the EFS volume you created in AWS. If you are providing your own access point, `volumeHandle` must be `<EFS volume ID>::<access point ID>`. For example: `fs-6e633ada::fsap-081a1d293f0004630`.
<3> If desired, you can disable encryption in transit. Encryption is enabled by default.
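
A PVC can bind to this static PV by name. Setting an empty `storageClassName` prevents a default storage class from provisioning a new volume instead; this is a minimal sketch, and the claim name is illustrative:

[source,yaml]
----
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
spec:
  storageClassName: ""
  volumeName: efs-pv
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
----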
Lines changed: 52 additions & 0 deletions
// Module included in the following assemblies:
//
// * storage/persistent_storage/persistent-storage-csi-aws-efs.adoc

[id="efs-troubleshooting_{context}"]
= AWS EFS troubleshooting

The following information provides guidance on how to troubleshoot issues with AWS EFS:

* The AWS EFS Operator and CSI driver run in the `openshift-cluster-csi-drivers` namespace.

* To initiate gathering of logs of the AWS EFS Operator and CSI driver, run the following command:
+
[source,terminal]
----
$ oc adm must-gather
[must-gather ] OUT Using must-gather plug-in image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:125f183d13601537ff15b3239df95d47f0a604da2847b561151fedd699f5e3a5
[must-gather ] OUT namespace/openshift-must-gather-xm4wq created
[must-gather ] OUT clusterrolebinding.rbac.authorization.k8s.io/must-gather-2bd8x created
[must-gather ] OUT pod for plug-in image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:125f183d13601537ff15b3239df95d47f0a604da2847b561151fedd699f5e3a5 created
----

* To show AWS EFS Operator errors, view the `ClusterCSIDriver` status:
+
[source,terminal]
----
$ oc get clustercsidriver efs.csi.aws.com -o yaml
----

* If a volume cannot be mounted to a pod, as shown in the output of the following command:
+
[source,terminal]
----
$ oc describe pod
...
  Type     Reason       Age   From               Message
  ----     ------       ----  ----               -------
  Normal   Scheduled    2m13s default-scheduler  Successfully assigned default/efs-app to ip-10-0-135-94.ec2.internal
  Warning  FailedMount  13s   kubelet            MountVolume.SetUp failed for volume "pvc-d7c097e6-67ec-4fae-b968-7e7056796449" : rpc error: code = DeadlineExceeded desc = context deadline exceeded <1>
  Warning  FailedMount  10s   kubelet            Unable to attach or mount volumes: unmounted volumes=[persistent-storage], unattached volumes=[persistent-storage kube-api-access-9j477]: timed out waiting for the condition
----
<1> Warning message indicating that the volume is not mounted.
+
This error is frequently caused by AWS dropping packets between an {product-title} node and AWS EFS.
+
Check that the following are correct (see xref:../../storage/container_storage_interface/persistent-storage-csi-aws-efs.adoc#efs-create-volume_persistent-storage-csi-aws-efs[Creating and configuring access to EFS volumes in AWS]):
+
--
* AWS firewall and security groups

* Networking: port number and IP addresses
--
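
When mount failures persist, the CSI driver pod logs are often more specific than pod events. This is a sketch; the exact pod names vary by cluster and version, so list them first and substitute the controller pod name:

[source,terminal]
----
$ oc get pods -n openshift-cluster-csi-drivers

$ oc logs -n openshift-cluster-csi-drivers <aws-efs-csi-driver-controller-pod> --all-containers
----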
Lines changed: 21 additions & 0 deletions
// Module included in the following assemblies:
//
// * storage/container_storage_interface/persistent-storage-csi-aws-efs.adoc

[id="persistent-storage-csi-olm-driver-uninstall_{context}"]
= Uninstalling the {FeatureName} CSI driver

.Prerequisites

* Access to the {product-title} web console.

.Procedure

To uninstall the {FeatureName} CSI driver:

. Log in to the web console.

. Stop all applications that use {FeatureName} persistent volumes (PVs).

. Click *Administration* -> *CustomResourceDefinitions* -> *ClusterCSIDriver*.

. On the *Instances* tab, for *{provisioner}*, on the far left side, click the drop-down menu, and then click *Delete ClusterCSIDriver*.

. When prompted, click *Delete*.
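
Assuming you prefer the CLI over the web console, deleting the same `ClusterCSIDriver` object removes the driver:

[source,terminal]
----
$ oc delete clustercsidriver efs.csi.aws.com
----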
Lines changed: 68 additions & 0 deletions
// Module included in the following assemblies:
//
// * storage/container_storage_interface/persistent-storage-csi-aws-efs.adoc

[id="persistent-storage-csi-olm-operator-install_{context}"]
= Installing the {FeatureName} CSI Driver Operator

The {FeatureName} CSI Driver Operator is not installed in {product-title} by default. Use the following procedure to install and configure the {FeatureName} CSI Driver Operator in your cluster.

.Prerequisites

* Access to the {product-title} web console.

.Procedure

To install the {FeatureName} CSI Driver Operator from the web console:

. Log in to the web console.

. Install the {FeatureName} CSI Operator:

.. Click *Operators* -> *OperatorHub*.

.. Locate the {FeatureName} CSI Operator by typing *{FeatureName} CSI* in the filter box.

.. Click the *{FeatureName} CSI Driver Operator* button.
+
[IMPORTANT]
====
Be sure to select the *AWS EFS CSI Driver Operator* and not the *AWS EFS Operator*. The *AWS EFS Operator* is a community Operator and is not supported by Red Hat.
====

.. On the *{FeatureName} CSI Driver Operator* page, click *Install*.

.. On the *Install Operator* page, ensure that:
+
* *All namespaces on the cluster (default)* is selected.
* *Installed Namespace* is set to *openshift-cluster-csi-drivers*.

.. Click *Install*.
+
After the installation finishes, the {FeatureName} CSI Operator is listed in the *Installed Operators* section of the web console.

. Install the {FeatureName} CSI driver:

.. Click *Administration* -> *CustomResourceDefinitions* -> *ClusterCSIDriver*.

.. On the *Instances* tab, click *Create ClusterCSIDriver*.

.. Use the following YAML file:
+
[source,yaml]
----
apiVersion: operator.openshift.io/v1
kind: ClusterCSIDriver
metadata:
  name: efs.csi.aws.com
spec:
  managementState: Managed
----

.. Click *Create*.

.. Wait for the following conditions to change to a "true" status:
+
* `AWSEFSDriverCredentialsRequestControllerAvailable`

* `AWSEFSDriverNodeServiceControllerAvailable`

* `AWSEFSDriverControllerServiceControllerAvailable`
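
One way to watch those conditions from the CLI is to print them from the `ClusterCSIDriver` status; this is a sketch, assuming the conditions are published under `.status.conditions`:

[source,terminal]
----
$ oc get clustercsidriver efs.csi.aws.com \
    -o jsonpath='{range .status.conditions[*]}{.type}{"="}{.status}{"\n"}{end}'
----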
Lines changed: 54 additions & 0 deletions
// Module included in the following assemblies:
//
// * storage/container_storage_interface/persistent-storage-csi-aws-efs.adoc

[id="persistent-storage-csi-olm-operator-uninstall_{context}"]
= Uninstalling the {FeatureName} CSI Driver Operator

All EFS PVs are inaccessible after uninstalling the AWS EFS CSI Driver Operator.

.Prerequisites

* Access to the {product-title} web console.

.Procedure

To uninstall the {FeatureName} CSI Driver Operator from the web console:

. Log in to the web console.

. Stop all applications that use {FeatureName} PVs.

. Delete all {FeatureName} PVs:

.. Click *Storage* -> *PersistentVolumeClaims*.

.. Select each PVC that is in use by the {FeatureName} CSI Driver Operator, click the drop-down menu on the far right of the PVC, and then click *Delete PersistentVolumeClaims*.

. Uninstall the {FeatureName} CSI driver:
+
[NOTE]
====
Before you can uninstall the Operator, you must remove the CSI driver first.
====

.. Click *Administration* -> *CustomResourceDefinitions* -> *ClusterCSIDriver*.

.. On the *Instances* tab, for *{provisioner}*, on the far left side, click the drop-down menu, and then click *Delete ClusterCSIDriver*.

.. When prompted, click *Delete*.

. Uninstall the {FeatureName} CSI Operator:

.. Click *Operators* -> *Installed Operators*.

.. On the *Installed Operators* page, scroll or type {FeatureName} CSI into the *Search by name* box to find the Operator, and then click it.

.. In the upper right of the *Installed Operators > Operator details* page, click *Actions* -> *Uninstall Operator*.

.. When prompted on the *Uninstall Operator* window, click the *Uninstall* button to remove the Operator from the namespace. Any applications deployed by the Operator on the cluster need to be cleaned up manually.
+
After uninstalling, the {FeatureName} CSI Driver Operator is no longer listed in the *Installed Operators* section of the web console.

[NOTE]
====
Before you can destroy a cluster (`openshift-install destroy cluster`), you must delete the EFS volume in AWS. An {product-title} cluster cannot be destroyed when there is an EFS volume that uses the cluster's VPC. Amazon does not allow deletion of such a VPC.
====
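
Before destroying the cluster, one way to confirm that no EFS-backed PVs remain is to filter PVs by CSI driver; this is a sketch using a jsonpath filter expression:

[source,terminal]
----
$ oc get pv -o jsonpath='{range .items[?(@.spec.csi.driver=="efs.csi.aws.com")]}{.metadata.name}{"\n"}{end}'
----

An empty result means no PVs are still backed by the EFS CSI driver; the EFS volume itself must still be deleted in AWS.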
