
Commit 9e654f4

Merge pull request #32 from gabemontero/readme-deploy-yaml
BUILD-261: readme restructure
2 parents acd4a23 + 08db344

File tree

6 files changed: +499 additions, -453 deletions


README.md

Lines changed: 35 additions & 453 deletions
Large diffs are not rendered by default.

docs/content-update-details.md

Lines changed: 40 additions & 0 deletions
# Details around pushing Secret and ConfigMap updates to provisioned Volumes

### Excluded OCP namespaces

The current list of namespaces excluded from the controller's watches:

- kube-system
- openshift-machine-api
- openshift-kube-apiserver
- openshift-kube-apiserver-operator
- openshift-kube-scheduler
- openshift-kube-controller-manager
- openshift-kube-controller-manager-operator
- openshift-kube-scheduler-operator
- openshift-console-operator
- openshift-controller-manager
- openshift-controller-manager-operator
- openshift-cloud-credential-operator
- openshift-authentication-operator
- openshift-service-ca
- openshift-kube-storage-version-migrator-operator
- openshift-config-operator
- openshift-etcd-operator
- openshift-apiserver-operator
- openshift-cluster-csi-drivers
- openshift-cluster-storage-operator
- openshift-cluster-version
- openshift-image-registry
- openshift-machine-config-operator
- openshift-sdn
- openshift-service-ca-operator

The list is not yet configurable, but it will most likely become so as the project's lifecycle progresses.

Also under consideration: allowing update processing to be disabled, or switching the system default to not
process updates while still allowing users to opt in to them.

Lastly, the current ability to switch which Secret or ConfigMap a `Share` references, or even to switch a reference
between a ConfigMap and a Secret (and vice versa), is still being evaluated and may be removed during these
early stages of the driver's lifecycle.
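
As a quick illustration of the update flow described above, the sketch below updates a backing Secret and then checks the mounted copy inside a consuming Pod. The Secret, namespace, Pod, and key names are placeholders, and the file layout under the mount path follows the examples in the FAQ rather than anything prescribed here.

```bash
# Update a key in the Secret backing an existing Share
# (all names here are illustrative placeholders).
oc create secret generic my-secret -n my-namespace \
  --from-literal=api-key=new-value \
  --dry-run=client -o yaml | oc apply -f -

# Once the controller observes the change, the file projected into the Pod's
# volumeMount should reflect the new value -- no Pod restart required.
oc rsh my-csi-app cat /data/api-key
```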

docs/csi.md

Lines changed: 12 additions & 0 deletions
# Current status with respect to the Kubernetes CSIVolumeSource API

Let's take each part of the [CSIVolumeSource](https://github.com/kubernetes/api/blob/71efbb18d63cd30604981514ac623a6be1d413bb/core/v1/types.go#L1743-L1771):

- for the `Driver` string field, it needs to be ["csi-driver-projected-resource.openshift.io"](https://github.com/openshift/csi-driver-projected-resource/blob/1fcc354faa31f624086265ea2228661a0fc2e7b1/pkg/client/client.go#L28).
- for the `VolumeAttributes` map, this driver currently adds the "share" key (which maps to the `Share` instance your `Pod` wants to use) in addition to the
elements of the `Pod` the kubelet stores when contacting the driver to provision the `Volume`. See [this list](https://github.com/openshift/csi-driver-projected-resource/blob/c3f1c454f92203f4b406dabe8dd460782cac1d03/pkg/hostpath/nodeserver.go#L37-L42); a minimal `Pod` sketch using these fields follows this list.
- the `ReadOnly` field is ignored, as this driver's controller actively updates the `Volume` as the underlying `Secret` or `ConfigMap` changes, or as
the `Share` or the RBAC related to the `Share` changes. **NOTE:** we are looking at providing `ReadOnly` volume support in future updates.
- the `FSType` field is ignored. This driver by design only supports `tmpfs`, with a different mount performed for each `Volume`, in order to defer all SELinux concerns to the kubelet.
- the `NodePublishSecretRef` field is ignored. The CSI `NodePublishVolume` and `NodeUnpublishVolume` flows gate the permission evaluation required for the `Volume`
by performing `SubjectAccessReviews` against the referenced `Share` instance, using the `serviceAccount` of the `Pod` as the subject.
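
Putting the `Driver` and `VolumeAttributes` notes above together, a minimal `Pod` sketch that mounts a `Share` inline might look like the following; the Pod name, image, mount path, and the `my-share` name are illustrative placeholders rather than values taken from this repository.

```bash
oc apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: my-csi-app
spec:
  containers:
    - name: app
      image: registry.access.redhat.com/ubi8/ubi-minimal
      command: ["sleep", "1000000"]
      volumeMounts:
        - name: my-csi-volume
          mountPath: /data
  volumes:
    - name: my-csi-volume
      csi:
        # Driver must match the registered CSIDriver name noted above.
        driver: csi-driver-projected-resource.openshift.io
        volumeAttributes:
          # "share" names the Share instance this Pod wants mounted.
          share: my-share
EOF
```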

docs/faq.md

Lines changed: 97 additions & 0 deletions
# Frequently Asked Questions

## What happens if the Share does not exist when you create a Pod that references it?

You'll see an event like:

```bash
$ oc get events
0s          Warning   FailedMount   pod/my-csi-app   MountVolume.SetUp failed for volume "my-csi-volume" : rpc error: code = InvalidArgument desc = the csi driver volumeAttribute 'share' reference had an error: share.projectedresource.storage.openshift.io "my-share" not found
$
```

And your Pod will never reach the Running state.

However, if the kubelet is still in a retry cycle trying to launch a Pod with a `Share` reference, and the `Share`'s non-existence is the only thing preventing a mount, the mount should succeed once the `Share` comes into existence.
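
One way to confirm the cause is to check for the `Share` directly; the resource name below comes from the CRD created at install time, and `my-share` is the name from the event above.

```bash
# If this returns NotFound, create the Share (see the CRD in
# deploy/0000_10_projectedresource.crd.yaml) and let the kubelet retry the mount.
oc get shares.projectedresource.storage.openshift.io my-share
```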
## What happens if the Share is removed after the pod starts?

The data will be removed from the location specified by `volumeMount` in the `Pod`. Instead of

```bash
$ oc rsh my-csi-app
sh-4.4# ls -lR /data
ls -lR /data
total 312
-rw-r--r--. 1 root root   3243 Jan 29 17:59 4653723971430838710-key.pem
-rw-r--r--. 1 root root 311312 Jan 29 17:59 4653723971430838710.pem

```

you'll get

```bash
$ oc rsh my-csi-app
sh-4.4# ls -lR /data
ls -lR /data
/data:
total 0
sh-4.4#

```

## What happens if the ClusterRole or ClusterRoleBinding are not present when your newly created Pod tries to access an existing Share?

```bash
$ oc get events
LAST SEEN   TYPE      REASON        OBJECT           MESSAGE
6s          Normal    Scheduled     pod/my-csi-app   Successfully assigned my-csi-app-namespace/my-csi-app to ip-10-0-136-162.us-west-2.compute.internal
2s          Warning   FailedMount   pod/my-csi-app   MountVolume.SetUp failed for volume "my-csi-volume" : rpc error: code = PermissionDenied desc = subjectaccessreviews share my-share podNamespace my-csi-app-namespace podName my-csi-app podSA default returned forbidden
$
```

And your Pod will never reach the Running state.
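
For illustration, a minimal sketch of RBAC that could satisfy that SubjectAccessReview is shown below. Treat it as an assumption rather than the driver's documented requirement: the role and binding names are placeholders, and the exact verb(s) the driver checks should be confirmed against its SubjectAccessReview code.

```bash
oc apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: access-my-share
rules:
  - apiGroups: ["projectedresource.storage.openshift.io"]
    resources: ["shares"]
    resourceNames: ["my-share"]
    # Assumed verbs; confirm which verb(s) the driver's SubjectAccessReview uses.
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: access-my-share
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: access-my-share
subjects:
  # The Pod's serviceAccount is the SubjectAccessReview subject, per the
  # event above (podSA "default" in namespace "my-csi-app-namespace").
  - kind: ServiceAccount
    name: default
    namespace: my-csi-app-namespace
EOF
```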
## What happens if the Pod successfully mounts a Share, and later the permissions to access the Share are removed?

The data will be removed from the `Pod`'s `volumeMount` location.

Instead of

```bash
$ oc rsh my-csi-app
sh-4.4# ls -lR /data
ls -lR /data
/data:
total 312
-rw-r--r--. 1 root root   3243 Jan 29 17:59 4653723971430838710-key.pem
-rw-r--r--. 1 root root 311312 Jan 29 17:59 4653723971430838710.pem
sh-4.4#

```

you'll get

```bash
$ oc rsh my-csi-app
sh-4.4# ls -lR /data
ls -lR /data
/data:
total 0
sh-4.4#
```

Do note that if your Pod copied the data to other locations, the Projected Resource driver cannot do anything about those copies. A big motivator for allowing
some customization of the directory and file structure off of the `volumeMount` of the `Pod` is to help reduce the *need* to copy
files. Hopefully you can mount that data directly at its final, needed destination.

Also note that the Projected Resource driver does not try to reverse engineer which RoleBinding or ClusterRoleBinding allows your Pod to access the Share.
The Kubernetes and OpenShift libraries for this are not currently structured to be openly consumed by other components, nor did we entertain taking
snapshots of that code to serve such a purpose. So instead of listening for RoleBinding or Role changes, on the Projected Resource controller's re-list interval
(which is configurable via a start-up argument on the command invoked from our DaemonSet, and whose default is 10 minutes), the controller re-executes
Subject Access Review requests for each Pod's reference to each `Share` and removes content if permission was removed. But as noted
in the potential feature list up top, we'll continue to periodically revisit whether there is a maintainable way of monitoring permission changes
in real time.

Conversely, if the kubelet is still in a retry cycle trying to launch a Pod with a `Share` reference, and now-resolved permission issues were the only thing preventing
a mount, the mount should then succeed. Of course, since kubelet retry (rather than controller re-list) is the polling mechanism in that case, and it is more frequent, the change in results is more immediate.
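
Relatedly, if you want to check outside the driver whether a Pod's `serviceAccount` would still pass that Subject Access Review, a sketch like the following can help; the verb, namespace, and names are illustrative and should mirror what the driver actually checks.

```bash
# Impersonate the Pod's serviceAccount and ask whether it can still "get" the Share.
oc auth can-i get shares.projectedresource.storage.openshift.io/my-share \
  --as=system:serviceaccount:my-csi-app-namespace:default
```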

docs/install.md

Lines changed: 144 additions & 0 deletions
# Installing the Projected Resource CSI driver

## Before you begin

1. You must have an OpenShift cluster running 4.8 or later.

1. Grant `cluster-admin` permissions to the current user.

## Installing from a local clone of this repository (only developer preview level support)

1. Run the following command:

```bash
# change directories into your clone of this repository, then
./deploy/deploy.sh
```

You should see output similar to the following, showing the creation or modification of the various
Kubernetes resources:

```shell
deploying hostpath components
./deploy/0000_10_projectedresource.crd.yaml
oc apply -f ./deploy/0000_10_projectedresource.crd.yaml
customresourcedefinition.apiextensions.k8s.io/shares.projectedresource.storage.openshift.io created
./deploy/00-namespace.yaml
oc apply -f ./deploy/00-namespace.yaml
namespace/csi-driver-projected-resource created
./deploy/01-service-account.yaml
oc apply -f ./deploy/01-service-account.yaml
serviceaccount/csi-driver-projected-resource-plugin created
./deploy/02-cluster-role.yaml
oc apply -f ./deploy/02-cluster-role.yaml
clusterrole.rbac.authorization.k8s.io/projected-resource-secret-configmap-share-watch-sar-create created
./deploy/03-cluster-role-binding.yaml
oc apply -f ./deploy/03-cluster-role-binding.yaml
clusterrolebinding.rbac.authorization.k8s.io/projected-resource-privileged unchanged
clusterrolebinding.rbac.authorization.k8s.io/projected-resource-secret-configmap-share-watch-sar-create unchanged
./deploy/csi-hostpath-driverinfo.yaml
oc apply -f ./deploy/csi-hostpath-driverinfo.yaml
csidriver.storage.k8s.io/csi-driver-projected-resource.openshift.io created
./deploy/csi-hostpath-plugin.yaml
oc apply -f ./deploy/csi-hostpath-plugin.yaml
service/csi-hostpathplugin created
daemonset.apps/csi-hostpathplugin created
16:21:25 waiting for hostpath deployment to complete, attempt #0
```

## Installing from the master branch of this repository (only developer preview level support)

1. Run the following commands:

```bash
oc apply -f https://raw.githubusercontent.com/openshift/csi-driver-projected-resource/master/deploy/00-namespace.yaml
oc apply -f https://raw.githubusercontent.com/openshift/csi-driver-projected-resource/master/deploy/0000_10_projectedresource.crd.yaml
oc apply -f https://raw.githubusercontent.com/openshift/csi-driver-projected-resource/master/deploy/01-service-account.yaml
oc apply -f https://raw.githubusercontent.com/openshift/csi-driver-projected-resource/master/deploy/02-cluster-role.yaml
oc apply -f https://raw.githubusercontent.com/openshift/csi-driver-projected-resource/master/deploy/03-cluster-role-binding.yaml
oc apply -f https://raw.githubusercontent.com/openshift/csi-driver-projected-resource/master/deploy/csi-hostpath-driverinfo.yaml
oc apply -f https://raw.githubusercontent.com/openshift/csi-driver-projected-resource/master/deploy/csi-hostpath-plugin.yaml
```

You should see output similar to the following, showing the creation or modification of the various
Kubernetes resources:

```shell
namespace/csi-driver-projected-resource created
customresourcedefinition.apiextensions.k8s.io/shares.projectedresource.storage.openshift.io created
serviceaccount/csi-driver-projected-resource-plugin created
clusterrole.rbac.authorization.k8s.io/projected-resource-secret-configmap-share-watch-sar-create created
clusterrolebinding.rbac.authorization.k8s.io/projected-resource-privileged created
clusterrolebinding.rbac.authorization.k8s.io/projected-resource-secret-configmap-share-watch-sar-create created
csidriver.storage.k8s.io/csi-driver-projected-resource.openshift.io created
service/csi-hostpathplugin created
daemonset.apps/csi-hostpathplugin created
```

## Installing from a release specific branch of this repository (only developer preview level support)

1. Run the following commands:

```bash
oc apply -f https://raw.githubusercontent.com/openshift/csi-driver-projected-resource/release-4.8/deploy/00-namespace.yaml
oc apply -f https://raw.githubusercontent.com/openshift/csi-driver-projected-resource/release-4.8/deploy/0000_10_projectedresource.crd.yaml
oc apply -f https://raw.githubusercontent.com/openshift/csi-driver-projected-resource/release-4.8/deploy/01-service-account.yaml
oc apply -f https://raw.githubusercontent.com/openshift/csi-driver-projected-resource/release-4.8/deploy/02-cluster-role.yaml
oc apply -f https://raw.githubusercontent.com/openshift/csi-driver-projected-resource/release-4.8/deploy/03-cluster-role-binding.yaml
oc apply -f https://raw.githubusercontent.com/openshift/csi-driver-projected-resource/release-4.8/deploy/csi-hostpath-driverinfo.yaml
oc apply -f https://raw.githubusercontent.com/openshift/csi-driver-projected-resource/release-4.8/deploy/csi-hostpath-plugin.yaml
```

You should see output similar to the following, showing the creation or modification of the various
Kubernetes resources:

```shell
namespace/csi-driver-projected-resource created
customresourcedefinition.apiextensions.k8s.io/shares.projectedresource.storage.openshift.io created
serviceaccount/csi-driver-projected-resource-plugin created
clusterrole.rbac.authorization.k8s.io/projected-resource-secret-configmap-share-watch-sar-create created
clusterrolebinding.rbac.authorization.k8s.io/projected-resource-privileged created
clusterrolebinding.rbac.authorization.k8s.io/projected-resource-secret-configmap-share-watch-sar-create created
csidriver.storage.k8s.io/csi-driver-projected-resource.openshift.io created
service/csi-hostpathplugin created
daemonset.apps/csi-hostpathplugin created
```

## Installing from the release page (only developer preview level support)

1. Run the following command:

```bash
oc apply -f https://github.com/openshift/csi-driver-projected-resource/releases/download/v0.1.0/release.yaml
```

You should see output similar to the following, showing the creation or modification of the various
Kubernetes resources:

```shell
namespace/csi-driver-projected-resource created
customresourcedefinition.apiextensions.k8s.io/shares.projectedresource.storage.openshift.io created
serviceaccount/csi-driver-projected-resource-plugin created
clusterrole.rbac.authorization.k8s.io/projected-resource-secret-configmap-share-watch-sar-create created
clusterrolebinding.rbac.authorization.k8s.io/projected-resource-privileged created
clusterrolebinding.rbac.authorization.k8s.io/projected-resource-secret-configmap-share-watch-sar-create created
csidriver.storage.k8s.io/csi-driver-projected-resource.openshift.io created
service/csi-hostpathplugin created
daemonset.apps/csi-hostpathplugin created
```

## Validate the installation

First, let's validate the deployment. Ensure all expected pods are running for the driver plugin, which in a
3-node OCP cluster will look something like:

```shell
$ oc get pods -n csi-driver-projected-resource
NAME                         READY   STATUS    RESTARTS   AGE
csi-hostpathplugin-c7bbk     2/2     Running   0          23m
csi-hostpathplugin-m4smv     2/2     Running   0          23m
csi-hostpathplugin-x9xjw     2/2     Running   0          23m
```
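
As an optional additional check, you can confirm that the `CSIDriver` object and the DaemonSet backing those pods are present; both names come from the install output shown earlier.

```bash
# CSIDriver object registered by csi-hostpath-driverinfo.yaml
oc get csidriver csi-driver-projected-resource.openshift.io

# DaemonSet that runs the plugin pods listed above
oc get daemonset csi-hostpathplugin -n csi-driver-projected-resource
```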
