
Commit f1c6db0

Merge pull request #2350 from pareenaverma/content_review
Kedify tech review
2 parents e9bf8da + cde4f10

File tree

5 files changed: +73 −69 lines

assets/contributors.csv

Lines changed: 1 addition & 0 deletions
@@ -103,3 +103,4 @@ Rui Chang,,,,,
 Alejandro Martinez Vicente,Arm,,,,
 Mohamad Najem,Arm,,,,
 Zenon Zhilong Xiu,Arm,,zenon-zhilong-xiu-491bb398,,
+Zbynek Roubalik,Kedify,,,,

content/learning-paths/servers-and-cloud-computing/kedify-http-autoscaling/_index.md

Lines changed: 2 additions & 4 deletions
@@ -7,8 +7,7 @@ cascade:
 
 minutes_to_complete: 45
 
-who_is_this_for: >
-  Developers and SREs running HTTP-based workloads on Kubernetes who want to enable intelligent, event-driven autoscaling.
+who_is_this_for: This is an introductory topic for developers running HTTP-based workloads on Kubernetes who want to enable event-driven autoscaling.
 
 learning_objectives:
 - Install Kedify (KEDA build, HTTP Scaler, and Kedify Agent) via Helm
@@ -18,14 +17,13 @@ learning_objectives:
 prerequisites:
 - A running Kubernetes cluster (local or cloud)
 - kubectl and helm installed locally
-- Access to the Kedify Service dashboard (https://dashboard.kedify.io/) to obtain Organization ID and API Key log in or create an account if you don’t have one
+- Access to the Kedify Service dashboard (https://dashboard.kedify.io/) to obtain an Organization ID and API Key. You can log in or create an account if you don’t have one.
 
 author: Zbynek Roubalik
 
 ### Tags
 skilllevels: Introductory
 subjects: Containers and Virtualization
-cloud_service_providers: Any
 armips:
 - Neoverse
 operatingsystems:

content/learning-paths/servers-and-cloud-computing/kedify-http-autoscaling/http-scaling.md

Lines changed: 48 additions & 46 deletions
@@ -4,49 +4,48 @@ weight: 4
 layout: "learningpathall"
 ---
 
-Use this section to get a quick, hands-on feel for Kedify HTTP autoscaling. We’ll deploy a small web service, expose it through a standard Kubernetes Ingress, and rely on Kedify’s autowiring to route traffic via its proxy so requests are measured and drive scaling.
+In this section, you’ll gain hands-on experience with Kedify HTTP autoscaling. You will deploy a small web service, expose it through a standard Kubernetes Ingress, and rely on Kedify’s autowiring to route traffic via its proxy so requests are measured and drive scaling.
 
-Scale a real HTTP app exposed through Kubernetes Ingress using Kedify’s [kedify-http](https://docs.kedify.io/scalers/http-scaler/) scaler. You will deploy a simple app, enable autoscaling with a [ScaledObject](https://keda.sh/docs/latest/concepts/scaling-deployments/), generate load, and observe the system scale out and back in (including scale-to-zero when idle).
+You will scale a real HTTP app exposed through Kubernetes Ingress using Kedify’s [kedify-http](https://docs.kedify.io/scalers/http-scaler/) scaler. You will deploy a simple application, enable autoscaling with a [ScaledObject](https://keda.sh/docs/latest/concepts/scaling-deployments/), generate load, and observe the system scale out and back in (including scale-to-zero when idle).
 
 ## How it works
 
 With ingress autowiring enabled, Kedify automatically routes traffic through its proxy before it reaches your Service/Deployment:
 
-```
+```output
 Ingress → kedify-proxy → Service → Deployment
 ```
 
 The [Kedify Proxy](https://docs.kedify.io/scalers/http-scaler/#kedify-proxy) gathers request metrics used by the scaler to make decisions.
 
-## What you’ll deploy
-
-- Deployment & Service: an HTTP server with a small response delay to simulate work
-- Ingress: public entry using host `application.keda`
-- ScaledObject: Kedify HTTP scaler with `trafficAutowire: ingress`
+## Deployment overview
+
+* Deployment & Service: An HTTP server with a small response delay to simulate work
+* Ingress: Public entry point configured using host `application.keda`
+* ScaledObject: A Kedify HTTP scaler using `trafficAutowire: ingress`
 
-## Step 0 — Set up Ingress IP environment variable
+## Step 1 — Configure the Ingress IP environment variable
 
-Before testing the application, ensure you have the `INGRESS_IP` environment variable set with your ingress controller's external IP or hostname.
+Before testing the application, make sure the `INGRESS_IP` environment variable is set to your ingress controller’s external IP address or hostname.
 
 If you followed the [Install Ingress Controller](../install-ingress/) guide, you should already have this set. If not, or if you're using an existing ingress controller, run this command:
 
 ```bash
 export INGRESS_IP=$(kubectl get service ingress-nginx-controller --namespace=ingress-nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}{.status.loadBalancer.ingress[0].hostname}')
 echo "Ingress IP/Hostname: $INGRESS_IP"
 ```
-You should now have the correct IP address or hostname stored in the `$INGRESS_IP` environment variable. If the command doesn't print any value, please repeat it after some time.
+This will store the correct IP or hostname in the `$INGRESS_IP` environment variable. If no value is returned, wait a short while and try again.
 
 {{% notice Note %}}
-If your ingress controller service has a different name or namespace, adjust the command accordingly. For example, some installations use `nginx-ingress-controller` or place it in a different namespace.
+If your ingress controller service uses a different name or namespace, update the command accordingly. For example, some installations use `nginx-ingress-controller` or place it in a different namespace.
{{% /notice %}}
 
-## Step 1 — Create the application and Ingress
+## Step 2 — Deploy the application and configure Ingress
 
-Let's start with deploying an application that responds to an incoming HTTP server and is exposed via Ingress. You can check the source code of the application on [GitHub](https://github.com/kedify/examples/tree/main/samples/http-server).
+Now you will deploy a simple HTTP server and expose it using an Ingress resource. The source code for this application is available on [GitHub](https://github.com/kedify/examples/tree/main/samples/http-server).
 
 #### Deploy the application
 
-Run the following command to deploy our application:
+Run the following command to deploy your application:
 
 ```bash
 cat <<'EOF' | kubectl apply -f -
@@ -123,37 +122,37 @@ Notes:
 
 #### Verify the application is running correctly
 
-Let's check that we have 1 replica of the application deployed and ready:
+Check that you have 1 replica of the application deployed and ready:
 
 ```bash
 kubectl get deployment application
 ```
 
-In the output we should see 1 replica ready:
-```
+In the output you should see 1 replica ready:
+```output
 NAME          READY   UP-TO-DATE   AVAILABLE   AGE
 application   1/1     1            1           3m44s
 ```
 
 #### Test the application
-Hit the app to confirm the app is ready and routing works:
+Once the application and Ingress are deployed, verify that everything is working correctly by sending a request to the exposed endpoint. Run the following command:
 
 ```bash
 curl -I -H "Host: application.keda" http://$INGRESS_IP
 ```
 
-You should see similar output:
-```
+If the routing is set up properly, you should see a response similar to:
+```output
 HTTP/1.1 200 OK
 Date: Thu, 11 Sep 2025 14:11:24 GMT
 Content-Type: text/html
 Content-Length: 301
 Connection: keep-alive
 ```
 
-## Step 2 — Enable autoscaling with Kedify
+## Step 3 — Enable autoscaling with Kedify
 
-The application is currectly running, Now we will enable autoscaling on this app, we will scale from 0 to 10 replicas. No request shall be lost at any moment. To do that, please run the following command to deploy our `ScaledObject`:
+The application is now running. Next, you will enable autoscaling so that it can scale dynamically between 0 and 10 replicas. Kedify ensures that no requests are dropped during scaling. Apply the `ScaledObject` by running the following command:
 
 ```bash
 cat <<'EOF' | kubectl apply -f -
@@ -193,25 +192,25 @@ spec:
 EOF
 ```
 
-What the key fields do:
-- `type: kedify-http` — Use Kedify’s HTTP scaler.
-- `hosts`, `pathPrefixes` — Which requests to observe for scaling.
-- `service`, `port` — The Service and port receiving traffic.
-- `scalingMetric: requestRate` and `targetValue: 10` — Target 1000 req/s (per granularity/window) before scaling out.
-- `minReplicaCount: 0` — Allows scale-to-zero when idle.
-- `trafficAutowire: ingress` — Lets Kedify auto-wire your Ingress to the kedify-proxy.
+Key fields explained:
+- `type: kedify-http` — Specifies that Kedify’s HTTP scaler should be used.
+- `hosts`, `pathPrefixes` — Define which requests are monitored for scaling decisions.
+- `service`, `port` — Identify the Kubernetes Service and port that will receive the traffic.
+- `scalingMetric: requestRate` and `targetValue: 10` — Scale out when the request rate exceeds the target threshold, measured over the scaler’s configured granularity and window.
+- `minReplicaCount: 0` — Enables scale-to-zero when there is no traffic.
+- `trafficAutowire: ingress` — Automatically wires your Ingress to the Kedify proxy for seamless traffic management.
 
-After applying, the ScaledObject will appear in the Kedify dashboard (https://dashboard.kedify.io/).
+After applying, the `ScaledObject` will appear in the Kedify dashboard (https://dashboard.kedify.io/).
 
 ![Kedify Dashboard With ScaledObject](images/scaledobject.png)
 
-## Step 3 — Send traffic and observe scaling
+## Step 4 — Send traffic and observe scaling
 
-Becuase we are not sending any traffic to our application, after some time, it should be scaled to zero.
+Since no traffic is currently being sent to the application, it will eventually scale down to zero replicas.
 
 #### Verify scale to zero
 
-Run this command and wait until there is 0 replicas:
+To confirm that the application has scaled down, run the following command and watch until the number of replicas reaches 0:
 
 ```bash
 watch kubectl get deployment application -n default
@@ -224,31 +223,35 @@ Every 2,0s: kubectl get deployment application -n default
 NAME          READY   UP-TO-DATE   AVAILABLE   AGE
 application   0/0     0            0           110s
 ```
+This continuously monitors the deployment status in the default namespace. Once traffic stops and the idle window has passed, you should see the application deployment report 0/0 replicas, indicating that it has successfully scaled to zero.
 
 #### Verify the app can scale from zero
 
-Now, hit the app again, it should be scaled to 1 replica and return back correct response:
+Next, test that the application can scale back up from zero when traffic arrives. Send a request to the app:
+
 ```bash
 curl -I -H "Host: application.keda" http://$INGRESS_IP
 ```
-
-You should see a 200 OK response. Next, generate sustained load. You can use `hey` (or a similar tool):
+You should receive an HTTP 200 OK response, confirming that the service is reachable again.
+The application should scale from 0 → 1 replica automatically.
 
 #### Test higher load
 
+Now, generate a heavier, sustained load against the application. You can use `hey` (or a similar benchmarking tool):
+
 ```bash
 hey -n 40000 -c 200 -host "application.keda" http://$INGRESS_IP
 ```
 
-While the load runs, watch replicas change:
+While the load test is running, open another terminal and monitor the deployment replicas in real time:
 
 ```bash
 watch kubectl get deployment application -n default
 ```
 
-For example something like this:
+You will see the number of replicas change dynamically. For example:
 
-```
+```output
 Every 2,0s: kubectl get deployment application -n default
 
 NAME          READY   UP-TO-DATE   AVAILABLE   AGE
@@ -259,23 +262,22 @@ Expected behavior:
 - On bursty load, Kedify scales the Deployment up toward `maxReplicaCount`.
 - When traffic subsides, replicas scale down. After the cooldown, they can return to zero.
 
-You can also observe traffic and scaling in the Kedify dashboard:
+You can also monitor traffic and scaling in the Kedify dashboard:
 
 ![Kedify Dashboard ScaledObject Detail](images/load.png)
 
 ## Clean up
 
+When you have finished testing, remove the resources created in this Learning Path to free up your cluster:
+
 ```bash
 kubectl delete scaledobject application
 kubectl delete ingress application-ingress
 kubectl delete service application-service
 kubectl delete deployment application
 ```
+This will delete the `ScaledObject`, Ingress, Service, and Deployment associated with the demo application.
 
 ## Next steps
 
-Explore the official Kedify [How-to guides](https://docs.kedify.io/how-to/) for more configurations such as Gateway API, Istio VirtualService, or OpenShift Routes.
-
-### See also
-
-- Kedify documentation: https://docs.kedify.io
+To go further, you can explore the Kedify [How-to guides](https://docs.kedify.io/how-to/) for more configurations such as Gateway API, Istio VirtualService, or OpenShift Routes.
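For reference, the key fields listed in this diff map onto a `ScaledObject` shaped roughly like the sketch below. This is an illustrative reconstruction, not the exact manifest from the Learning Path: the field names come from the text above, while the port, path prefix, and the placement of `trafficAutowire` inside the trigger metadata are assumptions.

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: application
spec:
  scaleTargetRef:
    name: application            # the Deployment created earlier
  minReplicaCount: 0             # allow scale-to-zero when idle
  maxReplicaCount: 10            # upper bound mentioned in the text
  triggers:
    - type: kedify-http
      metadata:
        hosts: application.keda
        pathPrefixes: "/"        # assumption: observe all paths
        service: application-service
        port: "8080"             # assumption: match your Service port
        scalingMetric: requestRate
        targetValue: "10"
        trafficAutowire: ingress # placement in trigger metadata assumed
```

Check the applied manifest in the Learning Path itself (or `kubectl get scaledobject application -o yaml`) for the authoritative values.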

content/learning-paths/servers-and-cloud-computing/kedify-http-autoscaling/install-ingress.md

Lines changed: 5 additions & 6 deletions
@@ -4,7 +4,7 @@ weight: 3
 layout: "learningpathall"
 ---
 
-Before deploying HTTP applications with Kedify autoscaling, you need an Ingress Controller to handle incoming traffic. Most major cloud providers (AWS EKS, Google GKE, Azure AKS) do not include an Ingress Controller by default in their managed Kubernetes offerings.
+Before deploying HTTP applications with Kedify autoscaling, you need an Ingress Controller to handle incoming traffic. Most managed Kubernetes services offered by major cloud providers (AWS EKS, Google GKE, Azure AKS) do not include an Ingress Controller by default.
 
 {{% notice Note %}}
 If your cluster already has an Ingress Controller installed and configured, you can skip this step and proceed directly to the [HTTP Scaling guide](../http-scaling/).
@@ -64,13 +64,13 @@ This will save the external IP or hostname in the `INGRESS_IP` environment varia
 
 ## Configure Access
 
-For this tutorial, you have two options:
+To configure access to the ingress controller, you have two options:
 
 ### Option 1: DNS Setup (Recommended for production)
 Point `application.keda` to your ingress controller's external IP/hostname using your DNS provider.
 
 ### Option 2: Host Header (Quick setup)
-Use the external IP/hostname directly with a `Host:` header in your requests. When testing, you'll use:
+Use the external IP/hostname directly with a `Host:` header in your requests. When testing, you will use:
 
 ```bash
 curl -H "Host: application.keda" http://$INGRESS_IP
@@ -80,14 +80,13 @@ The `$INGRESS_IP` environment variable contains the actual external IP or hostna
 
 ## Verification
 
-Test that the ingress controller is working by checking its readiness:
+Verify that the ingress controller is working by checking its readiness:
 
 ```bash
 kubectl get pods --namespace ingress-nginx
 ```
 
 You should see the `ingress-nginx-controller` pod in `Running` status.
 
-## Next Steps
 
-Now that you have an Ingress Controller installed and configured, proceed to the [HTTP Scaling guide](../http-scaling/) to deploy an application and configure Kedify autoscaling.
+Now that you have an Ingress Controller installed and configured, proceed to the next section to deploy an application and configure Kedify autoscaling.
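As a hedged aside on the two access options above: for local testing, the DNS option can also be emulated with an `/etc/hosts` entry instead of sending a `Host:` header on every request. A minimal sketch, assuming `INGRESS_IP` is already exported as in the guide (a documentation-range placeholder address is used when it is not):

```shell
# Build the /etc/hosts line that maps the demo host to the ingress address.
# INGRESS_IP is assumed to be exported as in the guide; 203.0.113.10 is a
# placeholder (TEST-NET-3) used only when it is not set.
INGRESS_IP="${INGRESS_IP:-203.0.113.10}"
HOSTS_LINE="$INGRESS_IP application.keda"
echo "$HOSTS_LINE"
# To apply it for real (requires root):
#   echo "$HOSTS_LINE" | sudo tee -a /etc/hosts
```

After adding the entry, `curl http://application.keda` reaches the ingress without the explicit `Host:` header. Remember to remove the line when you are done testing.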

content/learning-paths/servers-and-cloud-computing/kedify-http-autoscaling/install-kedify-helm.md

Lines changed: 17 additions & 13 deletions
@@ -4,17 +4,19 @@ weight: 2
 layout: "learningpathall"
 ---
 
-This page installs Kedify on your cluster using Helm. You’ll add the Kedify chart repo, install KEDA (Kedify build), the HTTP Scaler, and the Kedify Agent, then verify everything is running.
+In this section you will learn how to install Kedify on your Kubernetes cluster using Helm. You will add the Kedify chart repo, install KEDA (Kedify build), the HTTP Scaler, and the Kedify Agent, then verify everything is running.
 
-For more details and all installation methods, see Kedify installation docs: https://docs.kedify.io/installation/helm#installation-on-arm
+For more details and all installation methods on Arm, you can refer to the [Kedify installation docs](https://docs.kedify.io/installation/helm#installation-on-arm).
 
-## Prerequisites
+## Before you begin
 
-- A running Kubernetes cluster (kind, minikube, EKS, GKE, AKS, etc.)
-- kubectl and helm installed and configured to talk to your cluster
-- Kedify Service account (https://dashboard.kedify.io/) to obtain Organization ID and API Key — log in or create an account if you don’t have one
+You will need:
 
-## Prepare installation
+- A running Kubernetes cluster (kind, minikube, EKS, GKE, AKS, and so on), hosted on any cloud provider or local environment
+- kubectl and helm installed and configured to communicate with your cluster
+- A Kedify Service account (https://dashboard.kedify.io/) to obtain an Organization ID and API Key — log in or create an account if you don’t have one
+
+## Installation
 
 1) Get your Organization ID: In the Kedify dashboard (https://dashboard.kedify.io/) go to Organization -> Details and copy the ID.
 
@@ -25,7 +27,7 @@ For more details and all installation methods, see Kedify installation docs: htt
 kubectl get secret -n keda kedify-agent -o=jsonpath='{.data.apikey}' | base64 --decode
 ```
 
-- Otherwise, in the Kedify dashboard (https://dashboard.kedify.io/) go to Organization -> API Keys, click Create Agent Key, and copy the key.
+Otherwise, in the Kedify dashboard (https://dashboard.kedify.io/) go to Organization -> API Keys, click Create Agent Key, and copy the key.
 
 Note: The API Key is shared across all your Agent installations. If you regenerate it, update existing Agent installs and keep it secret.
 
@@ -40,9 +42,9 @@ helm repo update
 
 ## Helm installation
 
-Most providers like AWS EKS and Azure AKS automatically place pods on ARM nodes when you specify `nodeSelector` for `kubernetes.io/arch=arm64`. However, Google Kubernetes Engine (GKE) applies an explicit taint on ARM nodes, requiring matching `tolerations`.
+Most providers like AWS EKS and Azure AKS automatically place pods on Arm nodes when you specify a `nodeSelector` for `kubernetes.io/arch=arm64`. However, Google Kubernetes Engine (GKE) applies an explicit taint on Arm nodes, requiring matching `tolerations`.
 
-To ensure a portable deployment strategy across all cloud providers, we recommend configuring both `nodeSelector` and `tolerations` in your Helm values or CLI flags.
+To ensure a portable deployment strategy across all cloud providers, it is recommended that you configure both `nodeSelector` and `tolerations` in your Helm values or CLI flags.
 
 Install each component into the keda namespace. Replace placeholders where noted.
 
@@ -101,13 +103,15 @@ helm upgrade --install kedify-agent kedifykeda/kedify-agent \
 
 ## Verify installation
 
+You are now ready to verify your installation:
+
 ```bash
 kubectl get pods -n keda
 ```
 
-Expected example (names may differ):
+The expected output should look like this (names may differ):
 
-```text
+```output
 NAME                                      READY   STATUS    RESTARTS   AGE
 keda-add-ons-http-external-scaler-xxxxx   1/1     Running   0          1m
 keda-add-ons-http-interceptor-xxxxx       1/1     Running   0          1m
@@ -117,4 +121,4 @@ keda-operator-metrics-apiserver-xxxxx     1/1     Running   0          1m
 kedify-agent-xxxxx                        1/1     Running   0          1m
 ```
 
-Proceed to the next section to deploy a sample HTTP app and test autoscaling.
+Proceed to the next section to learn how to install an Ingress controller before deploying a sample HTTP app and testing autoscaling.
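The `nodeSelector`/`tolerations` advice in the Helm installation hunk above can be captured once in a values file and reused for each chart. The sketch below is illustrative only: the top-level key paths depend on each chart's values schema, and the taint key/value reflect GKE's documented `kubernetes.io/arch=arm64:NoSchedule` taint on Arm nodes.

```yaml
# values-arm64.yaml — illustrative; verify key paths against each chart's schema
nodeSelector:
  kubernetes.io/arch: arm64      # schedule onto Arm nodes on EKS/AKS/GKE
tolerations:
  - key: kubernetes.io/arch      # tolerate GKE's explicit taint on Arm nodes
    operator: Equal
    value: arm64
    effect: NoSchedule
```

You would then pass it to each install, for example `helm upgrade --install ... -f values-arm64.yaml`, instead of repeating the equivalent `--set` flags on every command.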
