
Commit b84847e

Merge pull request #2 from aws-samples/master

Merge from upstream

2 parents: 5565d75 + 4c2bc87

File tree: 37 files changed, +1115 −86 lines

.gitignore

Lines changed: 4 additions & 1 deletion

@@ -3,4 +3,7 @@
 **/target
 **/.idea
 **/dependency-reduced-pom.xml
-
+**/journal.json
+**/chaostoolkit.log
+03-path-application-development/306-app-management-with-helm/sample/*.tgz
+03-path-application-development/309-deploying-a-chart-repository/sample/*.tgz
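
These entries rely on git's recursive `**` glob and two per-directory `*.tgz` patterns. If in doubt about which rule catches a file, `git check-ignore -v` reports the matching line; a small sketch (the packaged-chart filename here is hypothetical):

    # Ask git which .gitignore rule would ignore this packaged Helm chart.
    $ git check-ignore -v 03-path-application-development/306-app-management-with-helm/sample/mychart-0.1.0.tgz
    .gitignore:8:03-path-application-development/306-app-management-with-helm/sample/*.tgz	03-path-application-development/306-app-management-with-helm/sample/mychart-0.1.0.tgz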

01-path-basics/101-start-here/readme.adoc

Lines changed: 1 addition & 1 deletion

@@ -77,7 +77,7 @@ image:cloud9-run-script.png[Running the script in Cloud9 Terminal]
 [NOTE]
 All shell commands _(starting with "$")_ throughout the rest of the workshop should be run in this tab. You may want to resize it upwards to make it larger.
 
-At this point you can restart the Cloud9 IDE terminal session to ensure that the kublectl completion is enabled. Once a new terminal window is opened, type $ kubectl get nodes.
+At this point you can restart the Cloud9 IDE terminal session to ensure that kubectl completion is enabled. Once a new terminal window is opened, type `kubectl get nodes`. You do not have to run the command, and it is normal for it to fail with an error message if you do: you have not yet created the Kubernetes cluster. We are merely verifying that the `kubectl` tool is installed correctly and can autocomplete.
 
 One last step is required so that the Cloud9 IDE uses the assigned IAM Instance profile. Open the "AWS Cloud9" menu, go to "Preferences", go to "AWS Settings", and disable "AWS managed temporary credentials" as depicted in the diagram here:
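
The note above assumes kubectl completion has already been wired into the shell. One common way to enable it in a bash terminal such as Cloud9's is sketched below; these may not be the exact commands the workshop's setup script runs:

    # Load kubectl's bash completion into the current shell...
    $ source <(kubectl completion bash)
    # ...and persist it for future terminal sessions.
    $ echo "source <(kubectl completion bash)" >> ~/.bashrc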

01-path-basics/102-your-first-cluster/readme.adoc

Lines changed: 3 additions & 2 deletions

@@ -8,12 +8,12 @@ This section will walk you through how to install a Kubernetes cluster on AWS us
 
 https://github.com/kubernetes/kops[kops, window="_blank"], short for Kubernetes Operations, is a set of tools for installing, operating, and deleting Kubernetes clusters. kops can also perform rolling upgrades from older versions of Kubernetes to newer ones, and manage the cluster add-ons.
 
-If you have not set up the link:../101-start-here[Cloud9 Devlopment Environment, window="_blank"] yet, please do so before continuing.
+If you have not set up the link:../101-start-here[Cloud9 Development Environment, window="_blank"] yet, please do so before continuing.
 
 == Create a Kubernetes Cluster with kops
 
 kops can be used to create a highly available cluster, with multiple master and worker nodes spread across multiple availability zones.
-The master and worker nodes within the cluster can use either DNS or the https://github.com/weaveworks/mesh[Weave Mesh, window="_blank"] *gossip* protocol for name resolution. For this workshop, we will use the gossip protocol. A gossip-based cluster is easier and quicker to setup, and does not does not require a domain, subdomain, or Route53 hosted zone to be registered. Instructions for creating a DNS-based cluster are provided as an appendix at the bottom of this page.
+The master and worker nodes within the cluster can use either DNS or the https://github.com/weaveworks/mesh[Weave Mesh, window="_blank"] *gossip* protocol for name resolution. For this workshop, we will use the gossip protocol. A gossip-based cluster is easier and quicker to set up, and does not require a domain, subdomain, or Route53 hosted zone to be registered. Instructions for creating a DNS-based cluster are provided as an appendix at the bottom of this page.
 
 To create a cluster using the gossip protocol, simply use a cluster name with a suffix of `.k8s.local`. In the following steps, we will use `example.cluster.k8s.local` as a sample gossip cluster name. You may choose a different name as long as it ends with `.k8s.local`.

@@ -248,6 +248,7 @@ Save the changes and exit the editor. Kubernetes cluster needs to re-read the co
 
 NOTE: This process can easily take 30-45 minutes. Its recommended to leave the cluster without any updates during that time.
 
+    $ kops update cluster --yes
     $ kops rolling-update cluster --yes
     Using cluster from kubectl context: example.cluster.k8s.local
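
For context, a gossip-based cluster like the one this page describes is typically created with a single kops command. A hedged sketch using the page's example cluster name (the zones and node count are illustrative, and `KOPS_STATE_STORE` must already point at your S3 state bucket):

    # Create a gossip cluster; the .k8s.local suffix is what enables gossip.
    $ kops create cluster example.cluster.k8s.local \
        --zones us-east-1a,us-east-1b,us-east-1c \
        --node-count 3 \
        --yes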

01-path-basics/103-kubernetes-concepts/readme.adoc

Lines changed: 9 additions & 7 deletions

@@ -158,7 +158,7 @@ Again, the exact output may vary but your results should look similar to these.
 
 Logs from the pod can be obtained (a fresh nginx does not have logs - check again later once you have accessed the service):
 
-    $ kubectl logs <pod-name>
+    $ kubectl logs <pod-name> --namespace <namespace-name>
 
 === Execute a shell on the running pod

@@ -195,7 +195,7 @@ Each resource in Kubernetes can be defined using a configuration file. For examp
   name: nginx-pod
 spec:
   containers:
-  - name: nginx
+  - name: nginx
     image: nginx:latest
     ports:
     - containerPort: 80

@@ -369,7 +369,7 @@ Kubernetes assigns one of the QoS classes to the Pod:
 
 QoS class is used by Kubernetes for scheduling and evicting Pods.
 
-When every Container in a Pod is given a memory and CPU limit, and optionally non-zero request, and they exactly match, then a Pod is scheduled with `Guaranteed` QoS. This is the higest priority.
+When every Container in a Pod is given a memory and CPU limit, and optionally non-zero request, and they exactly match, then a Pod is scheduled with `Guaranteed` QoS. This is the highest priority.
 
 A Pod is given `Burstable` QoS class if the Pod does not meet the `Guaranteed` QoS and at least one Container has a memory or CPU request. This is intermediate priority.

@@ -861,7 +861,7 @@ Run the following command to create the service:
 Get more details about the service:
 
 ```
-$ kubectl get svc
+$ kubectl get service
 NAME           TYPE           CLUSTER-IP       EXTERNAL-IP        PORT(S)        AGE
 echo-service   LoadBalancer   100.66.161.199   ad0b47976b7fe...   80:30125/TCP   40s
 kubernetes     ClusterIP      100.64.0.1       <none>             443/TCP        1h

@@ -1245,7 +1245,7 @@ A Cron Job is a job that runs on a given schedule, written in Cron format. There
 Here is the job specification:
 
     $ cat cronjob.yaml
-    apiVersion: batch/v2alpha1
+    apiVersion: batch/v1beta1
     kind: CronJob
     metadata:
      name: hello

@@ -1399,7 +1399,7 @@ No resource limits.
 +
 . Create a Deployment in this new Namespace using a configuration file:
 +
-    $ deployment-namespace.yaml
+    $ cat deployment-namespace.yaml
     apiVersion: extensions/v1beta1
     kind: Deployment
     metadata:

@@ -1446,7 +1446,7 @@ Alternatively, a namespace can be created using `kubectl` as well.
 
 . Create a Deployment:
 
-    $ kubectl -n dev2 apply -f templates/deployment.yaml
+    $ kubectl -n dev2 apply -f deployment.yaml
     deployment "nginx-deployment-ns" created
 
 . Get Deployments in the newly created Namespace:

@@ -1634,6 +1634,8 @@ Note, how CPU and memory resources have incremented values.
 
 https://github.com/kubernetes/kubernetes/issues/55433[kubernetes#55433] provide more details on how an explicit CPU resource is not needed to create a Pod with ResourceQuota.
 
+    $ kubectl delete quota/quota
+    $ kubectl delete quota/quota2
 
 You are now ready to continue on with the workshop!
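
One detail worth adding to the QoS discussion in this file: the class Kubernetes actually assigned to a pod can be read straight from its status. A minimal check (`<pod-name>` is a placeholder, as elsewhere on this page):

    # Prints Guaranteed, Burstable, or BestEffort.
    $ kubectl get pod <pod-name> -o jsonpath='{.status.qosClass}'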

01-path-basics/103-kubernetes-concepts/templates/cronjob.yaml

Lines changed: 1 addition & 1 deletion

@@ -1,4 +1,4 @@
-apiVersion: batch/v2alpha1
+apiVersion: batch/v1beta1
 kind: CronJob
 metadata:
   name: hello
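
For reference, a complete minimal manifest on the `batch/v1beta1` API this commit migrates to might look like the sketch below; the one-minute schedule and busybox container are illustrative assumptions, not necessarily this repository's exact template:

    $ cat cronjob.yaml
    apiVersion: batch/v1beta1      # the API group/version this commit switches to
    kind: CronJob
    metadata:
      name: hello
    spec:
      schedule: "*/1 * * * *"      # assumed: run every minute
      jobTemplate:
        spec:
          template:
            spec:
              containers:
              - name: hello
                image: busybox
                args: ["/bin/sh", "-c", "date; echo Hello from Kubernetes"]
              restartPolicy: OnFailure
    $ kubectl apply -f cronjob.yaml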

02-path-working-with-clusters/201-cluster-monitoring/readme.adoc

Lines changed: 9 additions & 5 deletions

@@ -47,9 +47,13 @@ If you are using v1.8 or above, deploy the Dashboard using the following command
 
 Dashboard can be seen using the following command:
 
-    kubectl proxy
+    kubectl proxy --address 0.0.0.0 --accept-hosts '.*' --port 8080
 
-Now, Dashboard is accessible at http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/.
+Now, Dashboard is accessible via `Preview`, `Preview Running Application` at:
+
+    https://ENVIRONMENT_ID.vfs.cloud9.REGION_ID.amazonaws.com/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
+
+Here `ENVIRONMENT_ID` is your Cloud9 IDE environment id (you can see it in the built-in browser address bar) and `REGION_ID` is the AWS region id (e.g. us-east-1).

@@ -122,7 +126,7 @@ Click on `Nodes` to see a textual representation about the nodes running in the
 
 image::monitoring-nodes-before.png[]
 
-Install a Java application as explained in link:../helm[Deploying applications using Kubernetes Helm charts].
+Install a Java application as explained in link:../../03-path-application-development/306-app-management-with-helm[Deploying applications using Kubernetes Helm charts].
 
 Click on `Pods`, again to see a textual representation about the pods running in the cluster:

@@ -152,9 +156,9 @@ Execute this command to install Heapster, InfluxDB and Grafana:
 deployment "monitoring-influxdb" created
 service "monitoring-influxdb" created
 
-Heapster is now aggregating metrics from the cAdvisor instances running on each node. This data is stored in an InfluxDB instance running in the cluster. Grafana dashboard, accessible at http://localhost:8001/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy/?orgId=1, now shows the information about the cluster.
+Heapster is now aggregating metrics from the cAdvisor instances running on each node. This data is stored in an InfluxDB instance running in the cluster. The Grafana dashboard, accessible at https://ENVIRONMENT_ID.vfs.cloud9.REGION_ID.amazonaws.com/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy/?orgId=1, now shows the information about the cluster.
 
-NOTE: Grafana dashboard will not be available if Kubernetes proxy is not running. If proxy is not running, it can be started with the command `kubectl proxy`.
+NOTE: The Grafana dashboard will not be available if the Kubernetes proxy is not running. If it is not running, it can be started with the command `kubectl proxy --address 0.0.0.0 --accept-hosts '.*' --port 8080`.
 
 === Grafana dashboard
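
On the bearer-token authentication this file mentions: a common way at the time to obtain a token was to read it from a service-account secret. A hedged sketch; the secret name below is hypothetical and will differ in your cluster:

    # Find a service-account token secret in kube-system...
    $ kubectl -n kube-system get secrets | grep default-token
    # ...and print its contents, including the bearer token, e.g.:
    $ kubectl -n kube-system describe secret default-token-abcde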

02-path-working-with-clusters/202-service-mesh/readme.adoc

Lines changed: 5 additions & 1 deletion

@@ -158,6 +158,10 @@ If you haven't already deployed the "`hello-world`" application, deploy it now.
     kubectl apply -f https://raw.githubusercontent.com/linkerd/linkerd-examples/master/\
     k8s-daemonset/k8s/hello-world.yml
 
+Delete the previous linkerd DaemonSet, as we're going to update the ConfigMap and install a new one:
+
+    $ kubectl delete ds/l5d
+
 Deploy the linkerd ingress so we can access the application externally.
 
     $ kubectl apply -f https://raw.githubusercontent.com/linkerd/linkerd-examples/master/\

@@ -239,7 +243,7 @@ you'll need to download Istio. Istio can also automatically inject the sidecar;
 https://istio.io/docs/setup/kubernetes/quick-start.html[Istio quick start]
 
     curl -L https://git.io/getLatestIstio | sh -
-    cd istio-0.2.10
+    cd istio-*
     export PATH=$PWD/bin:$PATH
 
 You should now be able to run the `istioctl` CLI
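
A quick, hedged way to confirm that, once the `export PATH` step above has run:

    # Should print the istioctl client version.
    $ istioctl version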

02-path-working-with-clusters/203-cluster-upgrades/readme.adoc

Lines changed: 1 addition & 1 deletion

@@ -14,7 +14,7 @@ with 1.6.10 version and perform an automatic rolling upgrade to 1.7.2 using kops
 
 === Create cluster
 
-Review steps to create link:../prereqs.adoc[prereqs] nad make sure `KOPS_STATE_STORE` and `AWS_AVAILABILITY_ZONES` environment variables are set. More details about creating a cluster are at link:../cluster-install[Create Kubernetes cluster using kops].
+Review steps to create link:../prereqs.adoc[prereqs] and make sure `KOPS_STATE_STORE` and `AWS_AVAILABILITY_ZONES` environment variables are set. More details about creating a cluster are at link:../cluster-install[Create Kubernetes cluster using kops].
 
 In this chapter, we'll create a Kubernetes 1.6.10 version cluster as shown:
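
For orientation, a kops version upgrade like the 1.6.10-to-1.7.2 one this chapter performs typically follows a three-step flow. This is a hedged outline of the usual commands, using the workshop's example cluster name, not the chapter's verbatim steps:

    # Bump the cluster spec to the new Kubernetes version...
    $ kops upgrade cluster example.cluster.k8s.local --yes
    # ...push the updated spec out to the cloud resources...
    $ kops update cluster example.cluster.k8s.local --yes
    # ...and replace the running instances one at a time.
    $ kops rolling-update cluster example.cluster.k8s.local --yes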

02-path-working-with-clusters/204-cluster-logging-with-EFK/readme.adoc

Lines changed: 1 addition & 1 deletion

@@ -18,7 +18,7 @@ In terms of architecture, Fluentd is deployed as a DaemonSet with the CloudWatch
 
 This chapter uses a cluster with 3 master nodes and 5 worker nodes as described here: link:../cluster-install#multi-master-multi-node-multi-az-gossip-based-cluster[multi-master, multi-node gossip based cluster].
 
-All configuration files for this chapter are in the `cluster-logging` directory. Make sure you change to that directory before giving any commands in this chapter.
+All configuration files for this chapter are in the `02-path-working-with-clusters/204-cluster-logging-with-EFK` directory. Make sure you change to that directory before giving any commands in this chapter.
 
 == Provision an Amazon Elasticsearch cluster
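
In practice that means changing into the chapter directory first; a sketch using the `templates/fluentd-ds.yaml` manifest that appears in the next file of this commit:

    $ cd 02-path-working-with-clusters/204-cluster-logging-with-EFK
    # Deploy the Fluentd DaemonSet that ships logs to CloudWatch.
    $ kubectl apply -f templates/fluentd-ds.yaml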

02-path-working-with-clusters/204-cluster-logging-with-EFK/templates/fluentd-ds.yaml

Lines changed: 1 addition & 1 deletion

@@ -19,7 +19,7 @@ spec:
     spec:
       containers:
       - name: fluentd
-        image: quay.io/coreos/fluentd-kubernetes:v0.12-cloudwatch
+        image: quay.io/coreos/fluentd-kubernetes:v0.12.33-cloudwatch
         imagePullPolicy: Always
         command: ["fluentd", "-c", "/fluentd/etc/fluentd.conf", "-p", "/fluentd/plugins"]
         env:
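
Pinning the tag from the floating `v0.12-cloudwatch` to the exact `v0.12.33-cloudwatch` makes the deployment reproducible. One can confirm what the rolled-out DaemonSet is actually running, since `kubectl get ds -o wide` lists container images; a sketch (the DaemonSet name `fluentd` is an assumption based on the container name in this manifest):

    # The IMAGES column should show quay.io/coreos/fluentd-kubernetes:v0.12.33-cloudwatch.
    $ kubectl get daemonset fluentd -o wide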
