This repository was archived by the owner on May 24, 2020. It is now read-only.

Commit 6b4a9a3

Add helm deploy instructions

Signed-off-by: John Strunk <jstrunk@redhat.com>

1 parent 68f4aec

1 file changed: +178 -0 lines

helm/README.md

# Deploying GCS with Helm

## Download Helm

Helm can be obtained from the [Helm
releases](https://github.com/helm/helm/releases) page.
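
For example, unpacking a Linux amd64 client tarball from that page and putting
the binary on your `PATH` could look like this (the version number and paths
are only illustrative; adjust them to match your download):

```bash
# Illustrative only: use the version and platform of the tarball you downloaded.
tar -xzf helm-v2.10.0-linux-amd64.tar.gz
sudo mv linux-amd64/helm /usr/local/bin/helm
helm version --client
```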

## Install Helm & Tiller

Once you have downloaded Helm, you need to install it. The Helm client is
installed locally, and Tiller runs within your Kubernetes cluster.

Assuming RBAC is enabled on your cluster, you need to create a service
account for Helm to use. The provided `helm-sa.yaml` creates a service account
in the `kube-system` namespace called "tiller" and gives it cluster admin
permissions. This allows Tiller to deploy charts anywhere in the cluster.

**Note: These instructions do not set up TLS security for Helm, so it should
not be considered a secure configuration. Patches welcome.**
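
For reference, `helm-sa.yaml` amounts to roughly the following (the file
shipped in this repository is authoritative; this inline sketch only shows its
shape):

```bash
# Approximate equivalent of `kubectl apply -f helm-sa.yaml`: a "tiller"
# ServiceAccount in kube-system bound to the cluster-admin ClusterRole.
kubectl apply -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
EOF
```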

Install the SA:

```bash
$ kubectl apply -f helm-sa.yaml
serviceaccount "tiller" created
clusterrolebinding.rbac.authorization.k8s.io "tiller" created
```

Install Tiller & initialize local Helm state:

```bash
$ helm --kubeconfig=../deploy/kubeconfig init --service-account tiller
$HELM_HOME has been configured at /home/jstrunk/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!
```

Verify it is installed:

```bash
$ helm --kubeconfig=../deploy/kubeconfig version
Client: &version.Version{SemVer:"v2.10.0", GitCommit:"9ad53aac42165a5fadc6c87be0dea6b115f93090", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.10.0", GitCommit:"9ad53aac42165a5fadc6c87be0dea6b115f93090", GitTreeState:"clean"}
```
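
If the `Server` line is missing or the command times out, Tiller is probably
still starting; `helm init` deploys it as the `tiller-deploy` deployment in
`kube-system`, so you can wait for it explicitly:

```bash
# Wait for the Tiller deployment created by `helm init` to finish rolling out.
kubectl -n kube-system rollout status deploy/tiller-deploy
```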

## Configure GCS for your cluster

There isn't much that is currently configurable; the options that do exist are
in `gluster-container-storage/values.yaml`.
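
If you do need to change one of those settings, the usual Helm override
mechanisms apply. The copied file name below is arbitrary, and the valid keys
are whatever `values.yaml` defines; pass the edited file to the `helm install`
command in the next section with `-f my-values.yaml`, or override individual
keys with `--set`.

```bash
# Start from the chart's defaults and edit a copy to taste.
cp gluster-container-storage/values.yaml my-values.yaml
$EDITOR my-values.yaml
```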

## Deploy GCS

Download chart dependencies (etcd-operator):

```bash
$ helm dependency update gluster-container-storage
Hang tight while we grab the latest from your chart repositories...
...Unable to get an update from the "local" chart repository (http://127.0.0.1:8879/charts):
Get http://127.0.0.1:8879/charts/index.yaml: dial tcp 127.0.0.1:8879: connect: connection refused
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈Happy Helming!
Saving 1 charts
Downloading etcd-operator from repo https://kubernetes-charts.storage.googleapis.com
Deleting outdated charts
```

The error about the "local" chart repository is harmless; it only means no
local chart server is running on your machine.
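
To confirm the dependency really was fetched (purely a convenience check), you
can list the chart's declared dependencies and look in its `charts/`
directory:

```bash
# Show the chart's declared dependencies and whether each has been downloaded.
helm dependency list gluster-container-storage
# The fetched etcd-operator archive ends up in the chart's charts/ subdirectory.
ls gluster-container-storage/charts/
```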

Install GCS chart:

```bash
$ helm --kubeconfig=../deploy/kubeconfig install --namespace gcs gluster-container-storage
NAME: kindred-cricket
LAST DEPLOYED: Mon Oct 1 16:12:31 2018
NAMESPACE: gcs
STATUS: DEPLOYED

RESOURCES:
==> v1beta2/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
kindred-cricket-etcd-operator-etcd-operator 1 1 1 0 3s

==> v1beta2/EtcdCluster
NAME AGE
etcd 3s

==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
csi-nodeplugin-glusterfsplugin-ggrg7 0/2 ContainerCreating 0 3s
csi-nodeplugin-glusterfsplugin-gjx9g 0/2 ContainerCreating 0 3s
csi-nodeplugin-glusterfsplugin-qv4ph 0/2 ContainerCreating 0 3s
glusterd2-cluster-8zhzn 0/1 ContainerCreating 0 3s
glusterd2-cluster-fghw6 0/1 ContainerCreating 0 3s
glusterd2-cluster-p6d6v 0/1 ContainerCreating 0 3s
kindred-cricket-etcd-operator-etcd-operator-959d989c9-jrnjw 0/1 ContainerCreating 0 2s
csi-provisioner-glusterfsplugin-0 0/2 ContainerCreating 0 2s
csi-attacher-glusterfsplugin-0 0/2 ContainerCreating 0 2s

==> v1/ServiceAccount
NAME SECRETS AGE
kindred-cricket-etcd-operator-etcd-operator 1 3s
csi-attacher 1 3s
csi-provisioner 1 3s
csi-nodeplugin 1 3s

==> v1beta1/ClusterRole
NAME AGE
kindred-cricket-etcd-operator-etcd-operator 3s

==> v1/ClusterRole
external-provisioner-runner 3s
csi-nodeplugin 3s
external-attacher-runner 3s

==> v1/DaemonSet
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
csi-nodeplugin-glusterfsplugin 3 3 0 3 0 <none> 3s
glusterd2-cluster 3 3 0 3 0 <none> 3s

==> v1/StatefulSet
NAME DESIRED CURRENT AGE
csi-provisioner-glusterfsplugin 1 1 3s
csi-attacher-glusterfsplugin 1 1 3s

==> v1/StorageClass
NAME PROVISIONER AGE
glusterfs-csi (default) org.gluster.glusterfs 3s

==> v1beta1/ClusterRoleBinding
NAME AGE
kindred-cricket-etcd-operator-etcd-operator 3s

==> v1/ClusterRoleBinding
csi-attacher-role 3s
csi-nodeplugin 3s
csi-provisioner-role 3s

==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
gluster-mgmt ClusterIP 10.103.3.25 <none> 24007/TCP 3s
```
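
The release name (`kindred-cricket` above) is auto-generated by Helm, so yours
will differ; you can look it up again and re-display this status later:

```bash
# List the releases Tiller knows about, then show the status of one of them.
helm --kubeconfig=../deploy/kubeconfig list
helm --kubeconfig=../deploy/kubeconfig status kindred-cricket
```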

It will take a few minutes for the pods to start, so check back later...

```bash
$ kubectl -n gcs get po
NAME READY STATUS RESTARTS AGE
csi-attacher-glusterfsplugin-0 2/2 Running 0 2m
csi-nodeplugin-glusterfsplugin-ggrg7 2/2 Running 0 2m
csi-nodeplugin-glusterfsplugin-gjx9g 2/2 Running 0 2m
csi-nodeplugin-glusterfsplugin-qv4ph 2/2 Running 0 2m
csi-provisioner-glusterfsplugin-0 2/2 Running 0 2m
etcd-cnstrrvxk8 1/1 Running 0 1m
etcd-t6t5fcpqw5 1/1 Running 0 2m
etcd-xhv4gkrhxx 1/1 Running 0 2m
glusterd2-cluster-8zhzn 1/1 Running 0 2m
glusterd2-cluster-fghw6 1/1 Running 0 2m
glusterd2-cluster-p6d6v 1/1 Running 0 2m
kindred-cricket-etcd-operator-etcd-operator-959d989c9-jrnjw 1/1 Running 0 2m
```
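
Instead of re-running `get po` by hand, you can also block until everything
reports Ready (optional; requires a kubectl new enough to have `kubectl
wait`):

```bash
# Wait up to 5 minutes for every pod in the gcs namespace to become Ready.
kubectl -n gcs wait --for=condition=Ready pod --all --timeout=300s
```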

Verify that GD2 has formed a healthy cluster:

```bash
$ kubectl -n gcs exec glusterd2-cluster-8zhzn glustercli peer list
+--------------------------------------+-------------------------+------------------+------------------+--------+-----+
| ID                                   | NAME                    | CLIENT ADDRESSES | PEER ADDRESSES   | ONLINE | PID |
+--------------------------------------+-------------------------+------------------+------------------+--------+-----+
| 14e9b539-d27c-48b4-872a-143445c2c775 | glusterd2-cluster-fghw6 | 127.0.0.1:24007  | 10.44.0.9:24008  | yes    | 21  |
|                                      |                         | 10.44.0.9:24007  |                  |        |     |
| 5cb6bec2-e7ce-4e21-bbea-3727ffc694f7 | glusterd2-cluster-8zhzn | 127.0.0.1:24007  | 10.42.0.11:24008 | yes    | 21  |
|                                      |                         | 10.42.0.11:24007 |                  |        |     |
| 73121ee1-21c2-4741-a297-4eb9b532a44a | glusterd2-cluster-p6d6v | 127.0.0.1:24007  | 10.36.0.8:24008  | yes    | 21  |
|                                      |                         | 10.36.0.8:24007  |                  |        |     |
+--------------------------------------+-------------------------+------------------+------------------+--------+-----+
```
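
As an optional smoke test beyond what these instructions cover, the
`glusterfs-csi` StorageClass installed above is the cluster default, so a
plain PVC should be provisioned by GCS; the claim name and size below are
arbitrary:

```bash
# Create a small test claim against the default (glusterfs-csi) StorageClass
# and check that it reaches the Bound state.
kubectl -n gcs apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gcs-test-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
EOF
kubectl -n gcs get pvc gcs-test-pvc
```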
