
Commit 11d6dc6

Merge pull request #177 from Lemoncode/eks-review ("Eks review")

2 parents: b1df2cd + 76fe4c2

11 files changed: +165 −118 lines

04-cloud/01-eks/01-create-aws-user/readme.md

Lines changed: 9 additions & 9 deletions

````diff
@@ -17,15 +17,15 @@ In order to create a user with administrator permissions we must first…
 ## Creating a group
 
 ```bash
-$ aws iam create-group --group-name <group-name>
+aws iam create-group --group-name <group-name>
 ```
 
 > Constraints: The name can consist of letters, digits, and the following characters: plus (+), equal (=), comma (,), period (.), at (@), underscore (_), and hyphen (-). The name is not case sensitive and can be a maximum of 128 characters in length.
 
 To verify that our operation succeeded:
 
 ```bash
-$ aws iam list-groups
+aws iam list-groups
 ```
 
 The response includes the `Amazon Resource Name` (ARN) for the new group. The `ARN` is a standard that Amazon uses to identify resources.
@@ -35,13 +35,13 @@ The response includes the `Amazon Resource Name` (ARN) for the new group…
 Using the following command we attach the administrator policy to the recently created group:
 
 ```bash
-$ aws iam attach-group-policy --group-name <group-name> --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
+aws iam attach-group-policy --group-name <group-name> --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
 ```
 
 To verify that the policy has been attached correctly to the group:
 
 ```bash
-$ aws iam list-attached-group-policies --group-name <group-name>
+aws iam list-attached-group-policies --group-name <group-name>
 ```
 
 The response gives us the list of policies attached to the group. If we want to inspect the contents of a particular policy we can use `aws iam get-policy`.
@@ -51,13 +51,13 @@ The response gives us the list of policies attached to the group…
 ### 1. Create a user
 
 ```bash
-$ aws iam create-user --user-name eksAdmin
+aws iam create-user --user-name eksAdmin
 ```
 
 ### 2. Adding the user to a group
 
 ```bash
-$ aws iam add-user-to-group --group-name <group-name> --user-name <user-name>
+aws iam add-user-to-group --group-name <group-name> --user-name <user-name>
 ```
 
@@ -70,13 +70,13 @@ https://My_AWS_Account_ID.signin.aws.amazon.com/console/
 ```
 
 ```bash
-$ aws iam create-login-profile --generate-cli-skeleton > create-login-profile.json
+aws iam create-login-profile --generate-cli-skeleton > create-login-profile.json
 ```
 
 This generates a `template` that we can now use to initialize the user:
 
 ```bash
-$ aws iam create-login-profile --cli-input-json file://create-login-profile.json
+aws iam create-login-profile --cli-input-json file://create-login-profile.json
 ```
 
 This gives us the following output:
@@ -102,7 +102,7 @@ google https://xxxxxxxxxxxx.signin.aws.amazon.com/console/
 With this `key` our new user will have programmatic access from the `AWS CLI`:
 
 ```bash
-$ aws iam create-access-key --user-name <user-name>
+aws iam create-access-key --user-name <user-name>
 ```
 
 With the previous output we can configure our default user using `aws configure`.
````
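Side note on this file's commands: `aws iam list-groups` returns JSON, so the new group's ARN can be pulled out with `jq`. A minimal sketch against a canned sample response — the group name `eks-admins`, the account id, and the variable names are invented for illustration:

```shell
# Hypothetical sample of `aws iam list-groups` output; in a real session you
# would pipe the live command into the same jq filter instead.
GROUPS_JSON='{"Groups":[{"GroupName":"eks-admins","Arn":"arn:aws:iam::123456789012:group/eks-admins"}]}'

# Select the group by name and keep only its ARN
GROUP_ARN=$(printf '%s' "$GROUPS_JSON" | jq -r '.Groups[] | select(.GroupName=="eks-admins") | .Arn')
echo "Group ARN: $GROUP_ARN"
```

Against a live account the same filter works as `aws iam list-groups | jq -r '…'`.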

04-cloud/01-eks/02-launching-cluster-eks/readme.md

Lines changed: 2 additions & 2 deletions

````diff
@@ -27,7 +27,7 @@ chmod 400 EksKeyPair.pem
 With this new private key we can go ahead and generate a public one; that's the key that will be uploaded to the node (EC2 instance). If we provide this key, and we have the private one, we can connect to the remote instance.
 
 ```bash
-$ ssh-keygen -y -f EksKeyPair.pem > eks_key.pub
+ssh-keygen -y -f EksKeyPair.pem > eks_key.pub
 ```
 
 ## Create definition YAML
@@ -132,7 +132,7 @@ eksctl create cluster -f demos.yml
 Now we can test that our cluster is up and running.
 
 ```bash
-$ kubectl get nodes
+kubectl get nodes
 ```
 
 > `eksctl` has edited `~/.kube/config` to make `kubectl` point to the newly created cluster.
````
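Beyond eyeballing `kubectl get nodes`, the node list can also be checked programmatically. A sketch against a canned two-node sample — the node names and versions are made up; on a real cluster you would pipe `kubectl get nodes --no-headers` instead:

```shell
# Hypothetical `kubectl get nodes --no-headers` output for a two-node cluster
NODES_OUTPUT='ip-192-168-12-34.eu-west-3.compute.internal   Ready   <none>   5m   v1.18.9
ip-192-168-56-78.eu-west-3.compute.internal   Ready   <none>   5m   v1.18.9'

# Count nodes whose STATUS column (field 2) reads Ready
READY_COUNT=$(printf '%s\n' "$NODES_OUTPUT" | awk '$2 == "Ready" { n++ } END { print n + 0 }')
echo "Ready nodes: $READY_COUNT"
```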

04-cloud/01-eks/03-deploy-k8s-dashboard/readme.md

Lines changed: 13 additions & 4 deletions

````diff
@@ -8,9 +8,15 @@ We can deploy the dashboard with the following command:
 
 ```bash
 export DASHBOARD_VERSION="v2.0.0"
+```
+
+```bash
+kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/${DASHBOARD_VERSION}/aio/deploy/recommended.yaml
+```
 
-$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/${DASHBOARD_VERSION}/aio/deploy/recommended.yaml
+We get the following output:
 
+```
 namespace/kubernetes-dashboard created
 serviceaccount/kubernetes-dashboard created
 service/kubernetes-dashboard created
@@ -30,7 +36,10 @@ deployment.apps/dashboard-metrics-scraper created
 If we have a look into our services we can find:
 
 ```bash
-kubectl get services --all-namespaces
+kubectl get services --all-namespaces
+```
+
+```
 NAMESPACE     NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)         AGE
 default       kubernetes   ClusterIP   10.100.0.1    <none>        443/TCP         3h15m
 kube-system   kube-dns     ClusterIP   10.100.0.10   <none>        53/UDP,53/TCP   3h15m
@@ -43,7 +52,7 @@ kubernetes-dashboard kubernetes-dashboard ClusterIP 10.100.9.10 <n
 Since this is deployed to our private cluster, we need to access it via a proxy. `kube-proxy` is available to proxy our requests to the dashboard service. In your workspace, run the following command:
 
 ```bash
-$ kubectl proxy --port=8080 --address=0.0.0.0 --disable-filter=true &
+kubectl proxy --port=8080 --address=0.0.0.0 --disable-filter=true &
 ```
 
 > When running from a local environment it is enough to do `kubectl proxy --port=8080` (or any other port that we want to use)
@@ -61,7 +70,7 @@ google localhost:8080/api/v1/namespaces/kubernetes-dashboard/services/https:kube
 To access the dashboard we have to provide a `token`; we can achieve this by running the following:
 
 ```bash
-$ aws eks get-token --cluster-name lc-cluster | jq -r '.status.token'
+aws eks get-token --cluster-name lc-cluster | jq -r '.status.token'
 ```
 
 Copy the output of this command, click the radio button next to Token, and paste the output into the text field below.
````
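The `jq` filter used for the token can be tried offline. A sketch with a canned, shortened `aws eks get-token`-style payload — the token value is a placeholder, not a real credential:

```shell
# Hypothetical shape of `aws eks get-token` output (token shortened); the real
# command is `aws eks get-token --cluster-name lc-cluster`.
TOKEN_JSON='{"kind":"ExecCredential","status":{"token":"k8s-aws-v1.EXAMPLE"}}'

# Same jq filter as in the readme, applied to the canned output
TOKEN=$(printf '%s' "$TOKEN_JSON" | jq -r '.status.token')
echo "$TOKEN"
```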

04-cloud/01-eks/04-deploy-solution/readme.md

Lines changed: 10 additions & 10 deletions

````diff
@@ -245,9 +245,9 @@ aws iam get-role --role-name "AWSServiceRoleForElasticLoadBalancing" || aws iam
 Let's bring up the Frontend:
 
 ```bash
-$ cd lc-front
-$ kubectl apply -f kubernetes/deployment.yaml
-$ kubectl apply -f kubernetes/service.yaml
+cd lc-front
+kubectl apply -f kubernetes/deployment.yaml
+kubectl apply -f kubernetes/service.yaml
 ```
 
 We can check the progress by looking at the deployment status:
@@ -267,21 +267,21 @@ Be aware the ALB is L7 on the OSI model, and only supports http and https; some kind of co…
 Now that we have a running service that is `type: LoadBalancer` we need to find the ELB's address.
 
 ```bash
-$ kubectl get service lc-front
+kubectl get service lc-front
 ```
 
 or
 
 ```bash
-$ kubectl get service lc-front -o wide
+kubectl get service lc-front -o wide
 ```
 
 If we want to use the data programmatically, we can also output it via JSON:
 
 ```bash
-$ ELB=$(kubectl get service lc-front -o json | jq -r '.status.loadBalancer.ingress[].hostname')
+ELB=$(kubectl get service lc-front -o json | jq -r '.status.loadBalancer.ingress[].hostname')
 
-$ curl -m3 -v $ELB
+curl -m3 -v $ELB
 ```
 
 > NOTE: It will take several minutes for the ELB to become healthy and start passing traffic to the frontend pods.
@@ -291,14 +291,14 @@ $ curl -m3 -v $ELB
 When we launched our services, we only launched one container of each. We can confirm this by viewing the running pods:
 
 ```bash
-$ kubectl get deployments
+kubectl get deployments
 ```
 
 Now let's scale up the backend services:
 
 ```bash
-$ kubectl scale deployment lc-age-service --replicas=3
-$ kubectl scale deployment lc-name-service --replicas=3
+kubectl scale deployment lc-age-service --replicas=3
+kubectl scale deployment lc-name-service --replicas=3
 ```
 
 Confirm by looking at the deployments again.
````
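The `jq` extraction of the ELB hostname shown in this file can be exercised against a canned service document. A sketch — the hostname below is invented for illustration; on a live cluster the input would come from `kubectl get service lc-front -o json`:

```shell
# Hypothetical fragment of `kubectl get service -o json` output for a
# LoadBalancer service, reduced to the status block the filter reads
SERVICE_JSON='{"status":{"loadBalancer":{"ingress":[{"hostname":"a1b2c3-123456.eu-west-3.elb.amazonaws.com"}]}}}'

# Same jq path as in the readme
ELB=$(printf '%s' "$SERVICE_JSON" | jq -r '.status.loadBalancer.ingress[].hostname')
echo "http://$ELB"
```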

04-cloud/01-eks/05-helm/01-install-helm.md

Lines changed: 3 additions & 3 deletions

````diff
@@ -17,13 +17,13 @@
 Before we can get started configuring Helm, we'll need to first install the command line tools.
 
 ```bash
-$ curl -sSL https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
+curl -sSL https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
 ```
 
 We can verify the installation by:
 
 ```bash
-$ helm version --short
+helm version --short
 ```
 
 Let's configure our first Chart repository. Chart repositories are similar to APT or yum repositories that you might be familiar with on Linux, or Taps for Homebrew on macOS.
@@ -37,5 +37,5 @@ helm repo add stable https://charts.helm.sh/stable
 Once this is installed, we will be able to list the charts we can install:
 
 ```bash
-$ helm search repo stable
+helm search repo stable
 ```
````

04-cloud/01-eks/05-helm/02-deploy-nginx-with-helm.md

Lines changed: 27 additions & 15 deletions

````diff
@@ -26,13 +26,13 @@ Now that our repository Chart list has been updated, we can [search for Charts](…
 To list all Charts:
 
 ```bash
-$ helm search repo
+helm search repo
 ```
 
 You can see from the output that it dumped the list of all Charts we have added. In some cases that may be useful, but an even more useful search would involve a keyword argument. So next, we'll search just for nginx:
 
 ```bash
-$ helm search repo nginx
+helm search repo nginx
 ```
 
 This results in:
@@ -57,7 +57,7 @@ After a quick web search, we discover that there is a Chart for the nginx standa…
 
 
 ```bash
-$ helm repo add bitnami https://charts.bitnami.com/bitnami
+helm repo add bitnami https://charts.bitnami.com/bitnami
 ```
 
 Once that completes, we can search all Bitnami Charts:
@@ -70,7 +70,7 @@ helm search repo bitnami
 Search once again for nginx:
 
 ```bash
-$ helm search repo nginx
+helm search repo nginx
 ```
 
 Now we are seeing more nginx options, across both repositories:
@@ -84,7 +84,7 @@ stable/nginx-ingress 1.41.3 v0.34.1 DEPRECAT
 Or even search the Bitnami repo, just for nginx:
 
 ```bash
-$ helm search repo bitnami/nginx
+helm search repo bitnami/nginx
 
 ```
 
@@ -97,13 +97,13 @@ A Helm Chart can be installed multiple times inside a Kubernetes cluster. This i…
 For this reason, you must supply a unique name for the installation, or ask Helm to generate a name for you.
 
 ```bash
-$ helm install mywebserver bitnami/nginx --dry-run
+helm install mywebserver bitnami/nginx --dry-run
 ```
 
 Now to really install nginx on our cluster, we can run:
 
 ```bash
-$ helm install mywebserver bitnami/nginx
+helm install mywebserver bitnami/nginx
 ```
 
 The output is similar to this:
@@ -137,8 +137,12 @@ To access NGINX from outside the cluster, follow the steps below:
 In order to review the underlying Kubernetes services, pods and deployments, run:
 
 ```bash
-$ kubectl get svc,po,deploy
+kubectl get svc,po,deploy
+```
+
+We get something similar to this:
 
+```
 NAME                        TYPE           CLUSTER-IP     EXTERNAL-IP                                                              PORT(S)        AGE
 service/kubernetes          ClusterIP      10.100.0.1     <none>                                                                   443/TCP        136m
 service/mywebserver-nginx   LoadBalancer   10.100.9.232   a7130a0207757453594c4cb5bdf072e5-381544302.eu-west-3.elb.amazonaws.com   80:31519/TCP   2m38s
@@ -155,28 +159,33 @@ The first object shown in this output is a Deployment. A Deployment object manag…
 You can inspect this Deployment object in more detail by running the following command:
 
 ```bash
-$ kubectl describe deployment mywebserver
-
+kubectl describe deployment mywebserver
 ```
 
 The next object created by the Chart is a Pod. A Pod is a group of one or more containers.
 
 To verify the Pod object was successfully deployed, we can run the following command:
 
 ```bash
-$ kubectl get pods -l app.kubernetes.io/name=nginx
+kubectl get pods -l app.kubernetes.io/name=nginx
+```
 
+We can check that the container inside the pod is running:
+
+```
 NAME                                 READY   STATUS    RESTARTS   AGE
 mywebserver-nginx-857766d4fd-9tdwf   1/1     Running   0          4m48s
-
 ```
+
 The third object that this Chart creates for us is a Service. A Service enables us to contact this nginx web server from the Internet, via an Elastic Load Balancer (ELB).
 
 To get the complete URL of this Service, run:
 
 ```bash
-$ kubectl get service mywebserver-nginx -o wide
+kubectl get service mywebserver-nginx -o wide
+```
 
+```
 NAME                TYPE           CLUSTER-IP     EXTERNAL-IP                                                              PORT(S)        AGE     SELECTOR
 mywebserver-nginx   LoadBalancer   10.100.9.232   a7130a0207757453594c4cb5bdf072e5-381544302.eu-west-3.elb.amazonaws.com   80:31519/TCP   6m22s   app.kubernetes.io/instance=mywebserver,app.kubernetes.io/name=nginx
 ```
@@ -190,7 +199,10 @@ To remove all the objects that the Helm Chart creates we can use [helm uninstall]…
 Before we uninstall our application, we can verify what we have running via the [helm list](https://helm.sh/docs/helm/helm_list/) command:
 
 ```bash
-$ helm list
+helm list
+```
+
+```
 WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /home/lemoncode/.kube/config
 WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /home/lemoncode/.kube/config
 NAME          NAMESPACE   REVISION   UPDATED   STATUS   CHART   APP VERSION
@@ -200,7 +212,7 @@ mywebserver default 1 2020-12-21 15:45:05.835403883 +0…
 To uninstall:
 
 ```bash
-$ helm uninstall mywebserver
+helm uninstall mywebserver
 
 ```
 
````
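The `helm list` output shown in this file is a plain table, so a release's status can be checked with `awk`. A sketch against a simplified, canned table — the real output also carries an UPDATED column and warnings, and a live check would capture `helm list` itself:

```shell
# Simplified, hypothetical `helm list` table for a single release
HELM_LIST='NAME         NAMESPACE   REVISION   STATUS     CHART         APP VERSION
mywebserver  default     1          deployed   nginx-8.2.3   1.19.6'

# Pull the STATUS column (field 4 in this simplified table) for mywebserver
RELEASE_STATUS=$(printf '%s\n' "$HELM_LIST" | awk '$1 == "mywebserver" { print $4 }')
echo "$RELEASE_STATUS"
```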

04-cloud/01-eks/06-autoscalling-our-applications/00-install-kube-ops-view.md

Lines changed: 18 additions & 3 deletions
Original file line numberDiff line numberDiff line change
@@ -13,7 +13,6 @@ helm install kube-ops-view \
1313
stable/kube-ops-view \
1414
--set service.type=LoadBalancer \
1515
--set rbac.create=True
16-
1716
```
1817

1918
The execution above installs kube-ops-view exposing it through a Service using the LoadBalancer type. A successful execution of the command will display the set of resources created and will prompt some advice asking you to use `kubectl proxy` and a local URL for the service. Given we are using the type LoadBalancer for our service, we can disregard this; Instead we will point our browser to the external load balancer.
@@ -24,6 +23,9 @@ To check the chart was installed successfully:
2423

2524
```bash
2625
helm list
26+
```
27+
28+
```
2729
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /home/lemoncode/.kube/config
2830
WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /home/lemoncode/.kube/config
2931
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
@@ -34,9 +36,22 @@ With this we can explore kube-ops-view output by checking the details about the
3436

3537
```bash
3638
kubectl get svc kube-ops-view | tail -n 1 | awk '{ print "Kube-ops-view URL = http://"$4 }'
37-
3839
```
3940

4041
This will display a line similar to Kube-ops-view URL = http://<URL_PREFIX_ELB>.amazonaws.com Opening the URL in your browser will provide the current state of our cluster.
4142

42-
> Reference: https://kubernetes-operational-view.readthedocs.io/en/latest/
43+
> Reference: https://kubernetes-operational-view.readthedocs.io/en/latest/
44+
45+
## Alternative installation
46+
47+
```bash
48+
helm repo add christianknell https://christianknell.github.io/helm-charts
49+
helm repo update
50+
```
51+
52+
```bash
53+
helm install kube-ops-view \
54+
christianknell/kube-ops-view \
55+
--set service.type=LoadBalancer
56+
```
57+
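The `tail`/`awk` one-liner added in this file can be tried against a canned `kubectl get svc` table. A sketch — the service row and ELB hostname below are invented for illustration; on a real cluster the live `kubectl get svc kube-ops-view` output would be piped in instead:

```shell
# Hypothetical `kubectl get svc kube-ops-view` output (header plus one row)
SVC_OUTPUT='NAME            TYPE           CLUSTER-IP    EXTERNAL-IP                          PORT(S)        AGE
kube-ops-view   LoadBalancer   10.100.1.23   abc123.eu-west-3.elb.amazonaws.com   80:31000/TCP   2m'

# Same pipeline as the readme: take the last line, print field 4 as a URL
URL=$(printf '%s\n' "$SVC_OUTPUT" | tail -n 1 | awk '{ print "Kube-ops-view URL = http://"$4 }')
echo "$URL"
```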
