Commit fc3bc31

Updated Labs
1 parent 566e732 commit fc3bc31

18 files changed: +482 -503 lines changed

docs/labs/index.md

Lines changed: 2 additions & 2 deletions
@@ -25,7 +25,7 @@
 | ***Solutions*** | | |
 | Lab Solutions | Solutions for the Kubernetes Labs | [Solutions](kubernetes/lab-solutions.md) |
 
-## Continuous Integration
+## DevOps
 
 | Task | Description | Link |
 | --------------------------------| ------------------ |:----------- |
@@ -36,7 +36,7 @@
 | IBM Cloud DevOps | Using IBM Cloud ToolChain with Tekton | [Tekton on IBM Cloud](devops/ibm-toolchain/index.md) |
 | Jenkins Lab | Using Jenkins to test new versions of applications. | [Jenkins](devops/jenkins/index.md) |
 
-## Continuous Deployment
+## GitOps
 
 | Task | Description | Link |
 | --------------------------------| ------------------ |:----------- |
Lines changed: 41 additions & 37 deletions
@@ -1,42 +1,46 @@
 ---
-title: Kubernetes Lab 10 - Persistent Volumes
+title: Kubernetes Lab 10 - Network Policies
 ---
 
 ## Problem
 
-The Death Star plans can't be lost no matter what happens, so we need to make sure we protect them at all costs.
-
-In order to do that, you will need to do the following:
-
-Create a `PersistentVolume`:
-
-- The PersistentVolume should be named `postgresql-pv`.
-- The volume needs a capacity of `1Gi`.
-- Use a storageClassName of `localdisk`.
-- Use the accessMode `ReadWriteOnce`.
-- Store the data locally on the node using a `hostPath` volume at the location `/mnt/data`.
-
-Create a `PersistentVolumeClaim`:
-
-- The PersistentVolumeClaim should be named `postgresql-pv-claim`.
-- Set a resource request on the claim for `500Mi` of storage.
-- Use the same storageClassName and accessModes as the PersistentVolume so that this claim can bind to the PersistentVolume.
-
-Create a `Postgresql` Pod configured to use the `PersistentVolumeClaim`:
-
-- The Pod should be named `postgresql-pod`.
-- Use the image `bitnami/postgresql`.
-- Expose the containerPort `5432`.
-- Set an *environment variable* called `MYSQL_ROOT_PASSWORD` with the value `password`.
-- Add the `PersistentVolumeClaim` as a volume and mount it to the container at the path `/bitnami/postgresql/`.
+Set up minikube:
+
+```
+minikube start --network-plugin=cni
+kubectl apply -f https://docs.projectcalico.org/v3.9/manifests/calico.yaml
+kubectl -n kube-system set env daemonset/calico-node FELIX_IGNORELOOSERPF=true
+kubectl -n kube-system get pods | grep calico-node
+```
+
+Create the secured pod:
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: network-policy-secure-pod
+  labels:
+    app: secure-app
+spec:
+  containers:
+  - name: nginx
+    image: bitnami/nginx
+    ports:
+    - containerPort: 8080
+```
+
+Create the client pod:
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: network-policy-client-pod
+spec:
+  containers:
+  - name: busybox
+    image: radial/busyboxplus:curl
+    command: ["/bin/sh", "-c", "while true; do sleep 3600; done"]
+```
+
+Create a policy that allows only client pods with the label `allow-access: "true"` to access the secure pod.
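Before writing the policy, it helps to confirm that the client pod can reach the secure pod at all. A minimal check with standard kubectl, assuming the pod names and port from the lab above (the 2-second curl timeout is an illustrative choice):

```
# Grab the secure pod's IP address
SECURE_IP=$(kubectl get pod network-policy-secure-pod -o jsonpath='{.status.podIP}')

# With no NetworkPolicy in place, this request should succeed
kubectl exec network-policy-client-pod -- curl -m 2 http://$SECURE_IP:8080
```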
Lines changed: 14 additions & 49 deletions
@@ -1,59 +1,24 @@
 ---
-title: Kubernetes Lab 10 - Persistent Volumes
+title: Kubernetes Lab 9 - Network Policies
 ---
 
 ## Solution
 
-```yaml
-apiVersion: v1
-kind: PersistentVolume
-metadata:
-  name: postgresql-pv
-spec:
-  storageClassName: localdisk
-  capacity:
-    storage: 1Gi
-  accessModes:
-    - ReadWriteOnce
-  hostPath:
-    path: "/mnt/data"
-```
-
-```yaml
-apiVersion: v1
-kind: PersistentVolumeClaim
-metadata:
-  name: postgresql-pv-claim
-spec:
-  storageClassName: localdisk
-  accessModes:
-    - ReadWriteOnce
-  resources:
-    requests:
-      storage: 500Mi
-```
 
 ```yaml
-apiVersion: v1
-kind: Pod
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
 metadata:
-  name: postgresql-pod
+  name: my-network-policy
 spec:
-  containers:
-  - name: postgresql
-    image: bitnami/postgresql
-    ports:
-    - containerPort: 5432
-    env:
-    - name: MYSQL_ROOT_PASSWORD
-      value: password
-    volumeMounts:
-    - name: sql-storage
-      mountPath: /bitnami/postgresql/
-  volumes:
-  - name: sql-storage
-    persistentVolumeClaim:
-      claimName: postgresql-pv-claim
+  podSelector:
+    matchLabels:
+      app: secure-app
+  policyTypes:
+  - Ingress
+  ingress:
+  - from:
+    - podSelector:
+        matchLabels:
+          allow-access: "true"
 ```
-
-verify via `ls /mnt/data` on node
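To see the policy take effect, the same check can be repeated from the client pod, first without and then with the label the policy selects on. A sketch, assuming the pod names from the lab (the curl timeout is illustrative):

```
SECURE_IP=$(kubectl get pod network-policy-secure-pod -o jsonpath='{.status.podIP}')

# With the policy applied and no label, this request should time out
kubectl exec network-policy-client-pod -- curl -m 2 http://$SECURE_IP:8080

# Label the client so it matches the policy's ingress podSelector, then retry
kubectl label pod network-policy-client-pod allow-access=true
kubectl exec network-policy-client-pod -- curl -m 2 http://$SECURE_IP:8080
```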

docs/labs/kubernetes/lab2/index.md

Lines changed: 21 additions & 19 deletions
@@ -1,29 +1,31 @@
 ---
-title: Kubernetes Lab 2 - Pod Configuration
+title: Kubernetes Lab 2 - Probes
 ---
 
-## Problem
+### Container Health Issues
 
-- Create a pod definition named `yoda-service-pod.yml`, and then create a pod in the cluster using this definition to make sure it works.
+The first issue is caused by application instances entering an unhealthy state and responding to user requests with error messages. Unfortunately, this state does not cause the container to stop, so the Kubernetes cluster is not able to detect it and restart the container. Luckily, the application has an internal endpoint that can be used to detect whether or not it is healthy. This endpoint is `/healthz` on port `8080`.
 
-The specifications are as follows:
+- Your first task will be to *create a probe* to check this endpoint periodically.
+- If the endpoint returns an **error** or **fails** to respond, the probe will detect this and the cluster will restart the container.
 
-- The current image for the container is `bitnami/nginx`. You do not need a custom command or args.
-- There is some configuration data the container will need:
-  - `yoda.baby.power=100000000`
-  - `yoda.strength=10`
-- It will expect to find this data in a file at `/etc/yoda-service/yoda.cfg`. Store the configuration data in a ConfigMap called `yoda-service-config` and provide it to the container as a mounted volume.
-- The container should expect to use `64Mi` of memory and `250m` CPU (use resource requests).
-- The container should be limited to `128Mi` of memory and `500m` CPU (use resource limits).
-- The container needs access to a database password in order to authenticate with a backend database server. The password is `0penSh1ftRul3s!`. It should be stored as a Kubernetes secret called `yoda-db-password` and passed to the container as an *environment variable* called `DB_PASSWORD`.
-- The container will need to access the Kubernetes API using the ServiceAccount `yoda-svc`. Create the service account if it doesn't already exist, and configure the pod to use it.
+### Container Startup Issues
 
-## Verification
+Another issue is caused by new pods when they are starting up. The application takes a few seconds after startup before it is ready to service requests. As a result, some users are getting error messages during this brief window.
 
-To verify your setup is complete, check `/etc/yoda-service` for the `yoda.cfg` file and use the `cat` command to check its contents.
+- To fix this, you will need to *create another probe*. To detect whether the application is `ready`, the probe should simply make a request to the *`/ready`* endpoint on port *`8080`*. If this request succeeds, then the application is ready.
 
+- Also set an `initial delay` of `5 seconds` for the probes.
+
+Here is the Pod yaml file. **Add** the probes, then **create** the pod in the cluster to test it.
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: energy-shield-service
+spec:
+  containers:
+  - name: energy-shield
+    image: ibmcase/energy-shield:1
 ```
-kubectl exec -it yoda-service /bin/bash
-cd /etc/yoda-service
-cat yoda.cfg
-```
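One way to test the finished pod is to create it and watch its status while the probes run. A sketch, assuming the manifest is saved as `energy-shield.yaml` (the filename is illustrative):

```
kubectl apply -f energy-shield.yaml

# READY stays 0/1 until the readiness probe succeeds; liveness probe
# failures show up as increments in the RESTARTS column
kubectl get pod energy-shield-service -w
```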
Lines changed: 14 additions & 54 deletions
@@ -1,65 +1,25 @@
 ---
-title: Kubernetes Lab 2 - Pod Configuration
+title: Kubernetes Lab 4 - Probes
 ---
 
-## Solution
-
-```yaml
-apiVersion: v1
-kind: ConfigMap
-metadata:
-  name: yoda-service-config
-data:
-  yoda.cfg: |-
-    yoda.baby.power=100000000
-    yoda.strength=10
-```
-
-```yaml
-apiVersion: v1
-kind: ServiceAccount
-metadata:
-  name: yoda-svc
-
-```
-
-```yaml
-apiVersion: v1
-kind: Secret
-metadata:
-  name: yoda-db-password
-stringData:
-  password: 0penSh1ftRul3s!
-```
+## Solution
 
 ```yaml
 apiVersion: v1
 kind: Pod
 metadata:
-  name: yoda-service
+  name: energy-shield-service
 spec:
-  serviceAccountName: yoda-svc
   containers:
-  - name: yoda-service
-    image: bitnami/nginx
-    volumeMounts:
-    - name: config-volume
-      mountPath: /etc/yoda-service
-    env:
-    - name: DB_PASSWORD
-      valueFrom:
-        secretKeyRef:
-          name: yoda-db-password
-          key: password
-    resources:
-      requests:
-        memory: "64Mi"
-        cpu: "250m"
-      limits:
-        memory: "128Mi"
-        cpu: "500m"
-  volumes:
-  - name: config-volume
-    configMap:
-      name: yoda-service-config
+  - name: energy-shield
+    image: ibmcase/energy-shield:1
+    livenessProbe:
+      httpGet:
+        path: /healthz
+        port: 8080
+    readinessProbe:
+      httpGet:
+        path: /ready
+        port: 8080
+      initialDelaySeconds: 5
 ```
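Once the pod is running, the probe wiring can be confirmed with standard kubectl; nothing here is specific to the lab:

```
# The Liveness/Readiness lines in the container section show the probe
# paths, ports, and the readiness probe's 5-second initial delay
kubectl describe pod energy-shield-service

# Probe failures are also recorded as events on the pod
kubectl get events --field-selector involvedObject.name=energy-shield-service
```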
