This repository was archived by the owner on Oct 31, 2019. It is now read-only.

Commit b5b9d09

Add an example app deployment with LB and volume.

1 parent f88f3a5

File tree: 3 files changed, +146 −2 lines

README.md (5 additions & 1 deletion)

@@ -143,9 +143,13 @@ KubeDNS is running at https://129.146.22.175:443/api/v1/proxy/namespaces/kube-sy
 kubernetes-dashboard is running at https://129.146.22.175:443/ui
 ```
 
+### Deploy a simple load-balanced application with shared volumes
+
+Check out the [example application deployment](./docs/example-deployments.md) for a walkthrough of deploying a simple application that leverages both the Cloud Controller Manager and Flexvolume Driver plugins.
+
 ### Scale, upgrade, or delete the cluster
 
-Check out the [example operations](./docs/examples.md) for details on how to use Terraform to scale, upgrade, replace, or delete your cluster.
+Check out the [example cluster operations](./docs/examples.md) for details on how to use Terraform to scale, upgrade, replace, or delete your cluster.
 
 ## Known issues and limitations

docs/example-deployments.md (140 additions, new file)
# Example Application Deployment

The following example walks through running a simple Nginx web server that leverages both the Cloud Controller Manager and Flexvolume Driver plugins through Kubernetes Services, Persistent Volumes, and Persistent Volume Claims.

### Create a dynamic OCI Block Volume using a Kubernetes PersistentVolumeClaim

We'll start by creating a [PersistentVolumeClaim](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) (PVC). The cluster is integrated with the OCI [Flexvolume Driver](https://github.com/oracle/oci-flexvolume-driver). As a result, creating a PVC results in a block storage volume being dynamically created in your tenancy.

Note that the `matchLabels` should contain the Availability Domain (AD) you want to provision a volume in, which should match the zone of at least one of your worker nodes:

```bash
$ kubectl describe nodes | grep zone
failure-domain.beta.kubernetes.io/zone=US-ASHBURN-AD-1
failure-domain.beta.kubernetes.io/zone=US-ASHBURN-AD-2
```
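The AD names can be pulled out of that label output with standard shell tools; a minimal sketch, using the sample output above as a stand-in for a live cluster (in practice you would pipe `kubectl describe nodes | grep zone` instead of the here-string):

```shell
# Sample `kubectl describe nodes | grep zone` output from above, used as a
# stand-in so the snippet runs without a cluster.
zones='failure-domain.beta.kubernetes.io/zone=US-ASHBURN-AD-1
failure-domain.beta.kubernetes.io/zone=US-ASHBURN-AD-2'

# Keep only the AD name after the `=` and de-duplicate.
echo "$zones" | cut -d= -f2 | sort -u
```

Any AD printed here is a valid value for the PVC's `oci-availability-domain` matchLabel below.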

```bash
$ cat nginx-pvc.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nginx-volume
spec:
  storageClassName: "oci"
  selector:
    matchLabels:
      oci-availability-domain: "US-ASHBURN-AD-1"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
```

To add the PersistentVolumeClaim, run the following:

```bash
$ kubectl apply -f nginx-pvc.yaml
```

After applying the PVC, you should see a block storage volume available in your OCI tenancy.

```bash
$ kubectl get pv,pvc
```

### Create a Kubernetes Deployment that references the PVC

Now that you have a PVC, you can create a Kubernetes Deployment that will consume the storage:

```bash
$ cat nginx-deployment.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nginx-storage
          mountPath: "/usr/share/nginx/html"
      volumes:
      - name: nginx-storage
        persistentVolumeClaim:
          claimName: nginx-volume
```

To run the deployment, run the following:

```bash
$ kubectl apply -f nginx-deployment.yaml
```

After applying the change, your pods should be scheduled on nodes running in the same AD as your volume, and all of them have access to the shared volume:

```
$ kubectl get pods -o wide
NAME       READY     STATUS    RESTARTS   AGE       IP           NODE
nginx-r1   1/1       Running   0          35s       10.99.46.4   k8s-worker-ad1-0.k8sworkerad1.k8soci.oraclevcn.com
nginx-r2   1/1       Running   0          35s       10.99.46.5   k8s-worker-ad1-0.k8sworkerad1.k8soci.oraclevcn.com
```

```
$ kubectl exec nginx-r1 -- touch /usr/share/nginx/html/test
```

```
$ kubectl exec nginx-r2 -- ls /usr/share/nginx/html
test
lost+found
```

### Expose the app using the Cloud Controller Manager

The cluster is integrated with the OCI [Cloud Controller Manager](https://github.com/oracle/oci-cloud-controller-manager) (CCM). As a result, creating a service with `--type=LoadBalancer` will expose the pods to the Internet using an OCI Load Balancer.

```bash
$ kubectl expose deployment nginx --port=80 --type=LoadBalancer
```
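Equivalently, instead of `kubectl expose`, you could apply a Service manifest yourself; a sketch (the `selector` assumes the `name: nginx` pod label from the deployment above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer
  selector:
    name: nginx
  ports:
  - port: 80
    targetPort: 80
```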

List the service to get the external IP address (the OCI Load Balancer) of your exposed service. Note that the IP will be listed as `<pending>` while the load balancer is being provisioned:

```bash
$ kubectl get service nginx
```

Access the Nginx service:

```
open http://<EXTERNAL-IP>:80
```

### Clean up

Clean up the container, OCI Load Balancer, and Block Volume by deleting the deployment, service, and persistent volume claim:

```bash
$ kubectl delete service nginx
```

```bash
$ kubectl delete -f nginx-deployment.yaml
```

```bash
$ kubectl delete -f nginx-pvc.yaml
```

docs/examples.md (1 addition & 1 deletion)

@@ -1,4 +1,4 @@
-# Example Operations
+# Example Installer Operations
 
 ## Deploying a new cluster using terraform apply
 