01-path-basics/101-start-here/readme.adoc (7 additions, 19 deletions)
@@ -20,7 +20,7 @@ Click on the "Deploy to AWS" button and follow the CloudFormation prompts to beg
 [NOTE]
 AWS Cloud9 is currently available in 5 regions, and EKS is currently available in 2 regions (us-east-1 and us-west-2).
-Please choose the region closest to you. If you choose a region for Cloud9 that does not support EKS, you will need to change the `AWS_DEFAULT_REGION` environment variable later.
+Please choose the region closest to you. If you choose a region for Cloud9 that does not support EKS, you need to create VPC resources and change environment variables. This configuration has not been tested.
 |===
@@ -29,25 +29,17 @@ Please choose the region closest to you. If you choose a region for Cloud9 that
 To open the Cloud9 IDE environment, click on the "Outputs" tab in CloudFormation Console and click on the "Cloud9IDE" URL.
+Accept the default stack name and click *Next*. You can add tags such as Key=Name, Value=k8s-workshop, and click *Next*. Make sure
+to check *I acknowledge that AWS CloudFormation might create IAM resources with custom names* and click *Create*.
+
+CloudFormation creates nested stacks and builds several resources that are required for this workshop. Wait until all the resources are created. Once the status for *k8s-workshop* changes to *CREATE_COMPLETE*,
+you can open the Cloud9 IDE. To open the Cloud9 IDE environment, click on the "Outputs" tab in CloudFormation Console and click on the "Cloud9IDE" URL.
@@ -82,12 +74,8 @@ To install the script, run this command in the "bash" terminal tab of the Cloud9
 image:cloud9-run-script.png[Running the script in Cloud9 Terminal]
-If you deployed your Cloud9 IDE in any region not supported by EKS, you will need to manually set the `AWS_DEFAULT_REGION` environment variable to a region supported by EKS:
-At this point you can restart the Cloud9 IDE terminal session to ensure that the kubectl completion is enabled. Once a new terminal window is opened, type `kubectl get nodes`. You do not have to run the command. It is normal for this command to fail with an error message if you run it. You have not yet created the Kubernetes cluster. We are merely testing to make sure the `kubectl` tool is installed on the command line correctly and can autocomplete.
+At this point you can restart the Cloud9 IDE terminal session to ensure that the kubectl completion is enabled. Once a new terminal window is opened, type `kubectl ver` and press `Tab` to autocomplete, then press `Enter`. This ensures that the `kubectl` tool is installed on the command line correctly and can autocomplete.
 [NOTE]
 All shell commands _(starting with "$")_ throughout the rest of the workshop should be run in this tab. You may want to resize it upwards to make it larger.
01-path-basics/102-your-first-cluster/readme.adoc (68 additions, 0 deletions)
@@ -68,6 +68,54 @@ echo $EKS_SECURITY_GROUPS
 ```
 If any of those environment variables are blank, please re-run the "Build Script" section of the link:../101-start-here[Cloud9 Environment Setup].
+
+If you receive an *UnsupportedAvailabilityZoneException* error during EKS cluster creation, your account is using an AZ that is currently resource constrained. This occurs mostly in the N. Virginia region (us-east-1).
+
+```
+An error occurred (UnsupportedAvailabilityZoneException) when calling the CreateCluster operation: Cannot create cluster 'k8s-workshop' because us-east-1c,
+the targeted availability zone, does not currently have sufficient capacity to support the cluster. Retry and choose from these availability zones: us-east-1a, us-east-1b, us-east-1d
+```
+
+If you receive this error, you need to remove the constrained AZ (us-east-1c in this example) from the *`EKS_SUBNET_IDS`* environment variable. Follow these steps to update your environment variable.
+
+Save the EKS-recommended AZs referred to in your CLI output in an environment variable.
+Note: you only need two AZs defined to create an EKS cluster.
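The concrete update commands are elided from this diff. As a rough sketch of the idea only (the subnet IDs and the constrained subnet below are placeholders, not values from the workshop), dropping one subnet from the comma-separated list could look like:

```shell
# Sketch with placeholder values: drop the subnet that lives in the
# constrained AZ (here we pretend subnet-cccc3333 is in us-east-1c) from the
# comma-separated EKS_SUBNET_IDS list. On a real account you would first map
# subnet IDs to AZs with "aws ec2 describe-subnets".
EKS_SUBNET_IDS="subnet-aaaa1111,subnet-bbbb2222,subnet-cccc3333"
CONSTRAINED_SUBNET="subnet-cccc3333"

# Split on commas, filter out the constrained subnet, and re-join.
EKS_SUBNET_IDS=$(echo "$EKS_SUBNET_IDS" | tr ',' '\n' \
  | grep -v "^${CONSTRAINED_SUBNET}$" | paste -sd, -)

echo "$EKS_SUBNET_IDS"
```

This keeps the variable in the same comma-separated shape the later `create-cluster` step expects.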
In order to access the cluster locally, use a configuration file (sometimes referred to as a `kubeconfig` file). This configuration file can be created automatically.
@@ -80,6 +128,12 @@ Once the cluster has moved to the `ACTIVE` state, download and run the `create-k
 This will create a configuration file at `$HOME/.kube/config` and update the necessary environment variable for default access.
+
+You can test your kubectl configuration using `kubectl get service`:
+
+    $ kubectl get service
+    NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
+    kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   8m
+
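As an aside (a sketch, not part of the workshop script): `kubectl` reads `$HOME/.kube/config` by default, and the `KUBECONFIG` environment variable overrides that path, so you can confirm which file will be used before running any cluster commands:

```shell
# Sketch: resolve the kubeconfig path kubectl will use. KUBECONFIG, if set,
# takes precedence over the default $HOME/.kube/config (default shown here).
unset KUBECONFIG
KUBECONFIG_PATH="${KUBECONFIG:-$HOME/.kube/config}"
echo "kubectl will read: $KUBECONFIG_PATH"
```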
=== Create the worker nodes
Now that your EKS master nodes are created, you can launch and configure your worker nodes.
@@ -115,6 +169,20 @@ To enable worker nodes to join your cluster, download and run the `aws-auth-cm.s
Watch the status of your nodes and wait for them to reach the `Ready` status.
01-path-basics/103-kubernetes-concepts/readme.adoc (18 additions, 35 deletions)
@@ -223,10 +223,10 @@ CPU can be requested in _cpu units_. 1 cpu unit is equivalent 1 AWS vCPU. It can
 ===== Default memory and CPU
-By default, a container in a pod is allocated no memory request/limit and 100m CPU request and no limit. This can be verified using the previously started pod:
+By default, a container in a pod is not allocated any requests or limits. This can be verified using the previously started pod:
-    $ kubectl get pod/nginx-pod -o jsonpath={.spec.containers[].resources}
-    map[requests:map[cpu:100m]]
+    $ kubectl get pod/nginx-pod -o jsonpath={.spec.containers[].resources}
+    map[]
 ===== Assign memory and CPU
@@ -311,6 +311,10 @@ Watch the status of the Pod:
 `OOMKilled` shows that the container was terminated because it ran out of memory.
+
+To correct this, we'll need to re-create the pod with higher memory limits.
+
+Although it may be instinctive to simply adjust the memory limit in the existing pod definition and re-apply it, Kubernetes does not currently support changing resource limits on running pods, so we'll need to first delete the existing pod, then recreate it.
+
 In `pod-resources2.yaml`, confirm that the value of `spec.containers[].resources.limits.memory` is `300Mi`. Delete the existing Pod, and create a new one:

    $ kubectl delete -f pod-resources1.yaml
@@ -331,7 +335,7 @@ Get more details about the resources allocated to the Pod:
 === Quality of service
-Kubernetes opportunistically scavenge the difference between request and limit if they are not used by the Containers. This allows Kubernetes to oversubscribe nodes, which increases utilization, while at the same time maintaining resource guarantees for the containers that need guarantees.
+Kubernetes opportunistically scavenges the difference between request and limit if they are not used by the Containers. This allows Kubernetes to oversubscribe nodes, which increases utilization, while at the same time maintaining resource guarantees for the containers that need guarantees.
 Kubernetes assigns one of the QoS classes to the Pod:
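The scavenging behavior above is what the QoS classes formalize. As a simplified single-container, single-resource sketch (not the full rules, which consider every container and both CPU and memory), the classification logic looks like:

```shell
# Simplified sketch of QoS classification for one container and one resource.
# Real Kubernetes applies this across all containers, for both CPU and memory.
qos_class() {
  request="$1"; limit="$2"
  if [ -n "$request" ] && [ "$request" = "$limit" ]; then
    echo "Guaranteed"     # request equals limit for everything
  elif [ -n "$request" ] || [ -n "$limit" ]; then
    echo "Burstable"      # at least one request or limit is set
  else
    echo "BestEffort"     # nothing set at all
  fi
}

qos_class 100m 100m   # -> Guaranteed
qos_class 100m 200m   # -> Burstable
qos_class "" ""       # -> BestEffort
```

On a live cluster the assigned class is visible in the pod status (`.status.qosClass`).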
@@ -920,7 +924,7 @@ As new nodes are added to the cluster, pods are started on them. As nodes are re
 === Create a DaemonSet
-The folowing is an example DaemonSet that runs a Prometheus container. Let's begin with the template:
+The following is an example DaemonSet that runs a Prometheus container. Let's begin with the template:

    $ cat daemonset.yaml
    apiVersion: extensions/v1beta1
@@ -1099,12 +1103,6 @@ Now, watch the job status again:
 The output shows that the job was successfully executed.
-The completed pod is not shown in the `kubectl get pods` command. Instead it can be shown by passing an additional option as shown below:
-
-    $ kubectl get pods --show-all
-    NAME         READY     STATUS      RESTARTS   AGE
-    wait-lk49x   0/1       Completed   0          1m
-
 To delete the job, you can run this command

    $ kubectl delete -f job.yaml
@@ -1184,18 +1182,7 @@ In another terminal window, watch the status of pods created:
     wait-ngrgl   0/1       Completed   0          21s
     wait-6l22s   0/1       Completed   0          21s
-After all the pods have completed, `kubectl get pods` will not show the list of completed pods. The command to show the list of pods is shown below:
-
-    $ kubectl get pods -a
-    NAME         READY     STATUS      RESTARTS   AGE
-    wait-6l22s   0/1       Completed   0          1m
-    wait-f7kgb   0/1       Completed   0          2m
-    wait-jbdp7   0/1       Completed   0          2m
-    wait-ngrgl   0/1       Completed   0          1m
-    wait-r5v8n   0/1       Completed   0          2m
-    wait-smp4t   0/1       Completed   0          2m
-
-Similarly, `kubectl get jobs` shows the status of the job after it has completed:
+`kubectl get jobs` shows the status of the job after it has completed:

    $ kubectl get jobs
    NAME      DESIRED   SUCCESSFUL   AGE
@@ -1209,17 +1196,13 @@ Deleting a job deletes all the pods as well. Delete the job as:
 === Prerequisites
-For Kubernetes cluster versions < 1.8, Cron Job can be created with API version `batch/v2alpha1`. You can check the cluster version using this command,
-Notice that the server version is at v1.7.4. In this case, you need to explicitly enable API version `batch/v2alpha1` in Kubernetes cluster and perform a rolling-update. These steps are explained in link:../cluster-install#turn-on-an-api-version-for-your-cluster[Turn on an API version for your cluster].
+For Kubernetes cluster versions < 1.8, Cron Job can be created with API version `batch/v2alpha1`. You need to explicitly enable API version `batch/v2alpha1` in Kubernetes cluster and perform a rolling-update.
-NOTE: Once you switch API versions, you need to perform rolling-update of the cluster which generally takes 30 - 45 mins to complete for 3 master nodes and 5 worker nodes cluster.
+If you use *Amazon EKS* for provisioning your Kubernetes cluster, your version should be >= v1.10 and you can proceed without any changes. You can check the cluster version using this command,
-If you have cluster version >= 1.8, `batch/v2alpha1` API is deprecated for this version but you can switch to `batch/v1beta1` to create Cron Jobs
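The version comparison the paragraphs above describe can be sketched as follows; `SERVER_VERSION` is a placeholder for the Server Version line that `kubectl version --short` would report on a live cluster:

```shell
# Sketch: decide which CronJob API version applies, given the server version.
# SERVER_VERSION is a placeholder; on a live cluster read it from
# "kubectl version --short".
SERVER_VERSION="v1.10.3"

MINOR=$(echo "$SERVER_VERSION" | cut -d. -f2)

API="batch/v2alpha1"            # < 1.8: must be enabled explicitly
[ "$MINOR" -ge 8 ] && API="batch/v1beta1"   # >= 1.8: v2alpha1 deprecated

echo "use $API for CronJobs"
```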
03-path-application-development/303-app-update/readme.adoc (3 additions, 1 deletion)
@@ -16,7 +16,9 @@ For our usecase, the application initially uses the image `arungupta/app-upgrade
 In order to perform exercises in this chapter, you’ll need to deploy configurations to a Kubernetes cluster. To create an EKS-based Kubernetes cluster, use the link:../../01-path-basics/102-your-first-cluster#create-a-kubernetes-cluster-with-eks[AWS CLI] (recommended). If you wish to create a Kubernetes cluster without EKS, you can instead use link:../../01-path-basics/102-your-first-cluster#alternative-create-a-kubernetes-cluster-with-kops[kops].
-All configuration files for this chapter are in the `app-udpate` directory. Make sure you change to that directory before giving any commands in this chapter.
+All configuration files for this chapter are in the `app-update` directory. Make sure you change to that directory before giving any commands in this chapter. If you are working in Cloud9, run:
+
+    cd ~/environment/aws-workshop-for-kubernetes/03-path-application-development/303-app-update/