== Prerequisites

* Running Kubernetes v1.10+ in order to be able to register the External Metrics Provider resource against the API Server.
* Having the Aggregation layer enabled. Refer to the https://kubernetes.io/docs/tasks/access-kubernetes-api/configure-aggregation-layer/[Kubernetes aggregation layer] configuration documentation to learn how to enable it. If you are using EKS 2.0, this is automatically enabled for you. A quick check for both prerequisites is sketched after this list.
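
To quickly verify both points on your cluster, you can check the server version and list the registered APIService objects (a minimal sketch; the exact output depends on your setup):

```
# Confirm the API server runs Kubernetes v1.10 or later
kubectl version --short

# Aggregated APIs are exposed as APIService objects once the aggregation layer is enabled
kubectl get apiservices
```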

This section of the workshop should be done after 207-cluster-monitoring-with-datadog, so you can benefit from the applications already in place to generate load and simulate autoscaling.

== Walkthrough

Autoscaling over External Metrics does not require the Node Agent to be running; you only need the metrics to be available in your Datadog account.
Nevertheless, for this walkthrough, we autoscale an NGINX Deployment based on NGINX metrics collected by a Node Agent.

Before proceeding, please make sure you went through section 207 of this workshop.
This entails that you have Node Agents running with the Autodiscovery process enabled and functional.

In order to autoscale in Kubernetes, you need to register a Custom Metrics Server; the Datadog Cluster Agent implements this feature.

Start by creating the appropriate RBAC rules, allowing the Cluster Agent to watch and parse Horizontal Pod Autoscalers as well as cluster-level metadata. Applying them creates the following resources:

```
clusterrole.rbac.authorization.k8s.io "dca" created
clusterrolebinding.rbac.authorization.k8s.io "dca" created
serviceaccount "dca" created
```
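
The exact rules ship with the workshop's manifests; purely as an illustration, a minimal manifest producing those three resources could look like the sketch below (the namespace and the exact resource lists are assumptions, not the workshop's file):

```
# Hypothetical minimal RBAC for the Datadog Cluster Agent ("dca")
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: dca
rules:
- apiGroups: ["autoscaling"]
  resources: ["horizontalpodautoscalers"]
  verbs: ["list", "watch"]
- apiGroups: [""]
  resources: ["nodes", "pods", "services", "endpoints", "configmaps"]
  verbs: ["get", "list", "watch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: dca
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: dca
subjects:
- kind: ServiceAccount
  name: dca
  namespace: default
---
kind: ServiceAccount
apiVersion: v1
metadata:
  name: dca
  namespace: default
```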

Add your <API_KEY> and <APP_KEY> in the link:../305-app-scaling-custom-metrics/templates/cluster-agent/cluster-agent.yaml[Deployment manifest of the Datadog Cluster Agent].
Then enable the HPA processing by setting the `DD_EXTERNAL_METRICS_PROVIDER_ENABLED` variable to true.
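
For reference, the relevant part of the Cluster Agent container spec would look roughly like this (a sketch only; the linked manifest is the source of truth, and the image tag is an assumption):

```
      containers:
      - name: cluster-agent
        image: datadog/cluster-agent:latest
        env:
        - name: DD_API_KEY
          value: "<API_KEY>"
        - name: DD_APP_KEY
          value: "<APP_KEY>"
        # Enables the External Metrics Provider / HPA processing in the Cluster Agent
        - name: DD_EXTERNAL_METRICS_PROVIDER_ENABLED
          value: "true"
```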

Then deploy the Datadog Cluster Agent and its services.
Note that the first service is used for the communication between the Node Agents and the Datadog Cluster Agent, while the second is used by Kubernetes to register the External Metrics Provider.
Once the Datadog Cluster Agent is up and running, register it as an External Metrics Provider via the service exposing port 443.

Apply the following RBAC rules:
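
The full rules are provided in the workshop's manifests. As a rough sketch of what this registration involves, the key object is an APIService that points Kubernetes at the Cluster Agent service exposing port 443 (the service name and namespace below are assumptions):

```
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1beta1.external.metrics.k8s.io
spec:
  group: external.metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100
  service:
    # Assumed name/namespace of the Cluster Agent metrics service on port 443
    name: datadog-custom-metrics-server
    namespace: default
```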

Now it is time to create a Horizontal Pod Autoscaler manifest. If you take a look at the link:../305-app-scaling-custom-metrics/templates/cluster-agent/hpa-manifest.yaml[hpa-manifest.yaml file], you should see the following (a sketch of such a manifest appears after this list):

* The HPA is configured to autoscale the Deployment called nginx
* The maximum number of replicas created is 5 and the minimum is 1
* The metric used is `nginx.net.request_per_s` and the scope is `kube_container_name: nginx`. Note that this metric format corresponds to the Datadog one.
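
A manifest matching those points could look roughly like the following sketch, using the autoscaling/v2beta1 External metric type (the HPA name and target value are illustrative; refer to the linked file for the workshop's exact manifest):

```
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: nginxext
spec:
  minReplicas: 1
  maxReplicas: 5
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  metrics:
  - type: External
    external:
      # Datadog metric name and scope, as described above
      metricName: nginx.net.request_per_s
      metricSelector:
        matchLabels:
          kube_container_name: nginx
      targetAverageValue: 9
```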

Every 30 seconds (this can be configured), Kubernetes queries the Datadog Cluster Agent to get the value of this metric and autoscales proportionally if necessary. For advanced use cases, it is possible to have several metrics in the same HPA; as explained in the https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-multiple-metrics[Kubernetes horizontal pod autoscale documentation], the largest of the proposed values will be the one chosen.
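
To see the same value Kubernetes gets, you can query the External Metrics API directly through the API server (a sketch; it assumes the HPA lives in the default namespace):

```
kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1/namespaces/default/nginx.net.request_per_s?labelSelector=kube_container_name%3Dnginx"
```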

We will be relying on the nginx deployment used in section 207 of this workshop.
Make sure that everything is still running:

```
kubectl get deploy,po -lapp=nginx
```

Then, apply the HPA manifest.
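
For example, assuming you are in the 305-app-scaling-custom-metrics directory:

```
kubectl apply -f templates/cluster-agent/hpa-manifest.yaml
```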

At this point, the setup is ready to be stressed. As a result of the stress, Kubernetes will autoscale the NGINX pods.

TODO
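
One simple way to generate traffic, assuming the NGINX Deployment is exposed by a Service named nginx on port 80, is to hit it in a loop from a temporary pod:

```
kubectl run load-generator --image=busybox --restart=Never -it --rm -- \
  /bin/sh -c "while true; do wget -q -O- http://nginx; done"
```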

Looking into your Datadog application, you should be able to correlate the requests per second on your NGINX boxes with the autoscaling event and the creation of new replicas.