
Commit ee56b4c

Updates
1 parent f487050 commit ee56b4c


2 files changed: +15, -13 lines changed


content/learning-paths/servers-and-cloud-computing/kedify-http-autoscaling/http-scaling.md

Lines changed: 14 additions & 12 deletions
@@ -30,7 +30,7 @@ There are three main components involved in the process:
 
 ## Configure the Ingress IP environment variable
 
-Before testing the application, make sure the INGRESS_IP environment variable is set to your ingress controller’s external IP address or hostname.
+Before testing the application, make sure the `INGRESS_IP` environment variable is set to your ingress controller’s external IP address or hostname.
 
 If you followed the [Install Ingress Controller](../install-ingress/) guide, you should already have this set. If not, or if you're using an existing ingress controller, run this command:
 
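If `INGRESS_IP` is not already set, a minimal way to populate it from an NGINX ingress Service looks like this (the Service name `ingress-nginx-controller` and namespace `ingress-nginx` are assumptions; use the values from your own installation, and `.hostname` instead of `.ip` for hostname-based load balancers):

```bash
# Assumed Service name and namespace; adjust to match your ingress install.
export INGRESS_IP=$(kubectl get service ingress-nginx-controller \
  --namespace ingress-nginx \
  --output jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo "INGRESS_IP=$INGRESS_IP"
```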
@@ -200,13 +200,16 @@ spec:
 EOF
 ```
 
-Key fields explained:
-- `type: kedify-http` - specifies that Kedify’s HTTP scaler should be used
-- `hosts`, `pathPrefixes` - define which requests are monitored for scaling decisions
-- `service`, `port` - identify the Kubernetes Service and port that will receive the traffic
-- `scalingMetric: requestRate`, `granularity: 1s`, and `targetValue: "10"` - scale out when sustained request rate exceeds ~10 req/s per replica
-- `minReplicaCount: 0` — Enables scale-to-zero when there is no traffic.
-- `trafficAutowire: ingress` — Automatically wires your Ingress to the Kedify proxy for seamless traffic management.
+## Key fields explained
+
+Use the following field descriptions to understand how the `ScaledObject` controls HTTP-driven autoscaling and how each setting affects traffic routing and scaling decisions:
+
+- `type: kedify-http` - Uses Kedify’s HTTP scaler.
+- `hosts`, `pathPrefixes` - Define which requests are monitored for scaling decisions.
+- `service`, `port` - Identify the Kubernetes Service and port that receive traffic.
+- `scalingMetric: requestRate`, `granularity: 1s`, `window: 10s`, `targetValue: "10"` - Scales out when the average request rate exceeds ~10 requests per second (rps) per replica over the last 10 seconds.
+- `minReplicaCount: 0` - Enables scale to zero when there is no traffic.
+- `trafficAutowire: ingress` - Automatically wires your Ingress to the Kedify proxy for seamless traffic management.
 
 After applying, the `ScaledObject` will appear in the Kedify dashboard (https://dashboard.kedify.io/).
 
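As a companion to the field list above, here is an illustrative `ScaledObject` sketch that combines those fields using the same heredoc pattern the page uses (the resource names, host, port, and replica limits are assumed for illustration and may differ from the manifest shown earlier in the guide):

```bash
# Illustrative only: names, host, and port are assumed values.
kubectl apply -f - <<EOF
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: application
  namespace: default
spec:
  scaleTargetRef:
    name: application        # Deployment to scale
  minReplicaCount: 0          # allow scale to zero when idle
  maxReplicaCount: 10
  triggers:
    - type: kedify-http
      metadata:
        hosts: application.keda
        pathPrefixes: /
        service: application
        port: "80"
        scalingMetric: requestRate
        granularity: 1s
        window: 10s
        targetValue: "10"     # ~10 requests/second per replica
        trafficAutowire: ingress
EOF
```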
@@ -224,7 +227,7 @@ To confirm that the application has scaled down, run the following command and w
 watch kubectl get deployment application -n default
 ```
 
-You should output similar to:
+You should see output similar to:
 ```output
 Every 2,0s: kubectl get deployment application -n default
 
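If the deployment does not settle at 0/0, inspecting the `ScaledObject` status can help confirm whether the HTTP scaler still sees traffic (the object name `application` is an assumption; use the name from your manifest):

```bash
# READY and ACTIVE columns show whether KEDA manages the workload
# and whether the trigger currently reports activity.
kubectl get scaledobject application -n default
kubectl describe scaledobject application -n default
```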
@@ -233,14 +236,13 @@ application 0/0 0 0 110s
 ```
 This continuously monitors the deployment status in the default namespace. Once traffic stops and the idle window has passed, you should see the application deployment report 0/0 replicas, indicating that it has successfully scaled to zero.
 
-### Verify the app can scale from zero
+## Verify the app can scale from zero
 
 Send a request to trigger scale-up:
 
 ```bash
 curl -I -H "Host: application.keda" http://$INGRESS_IP
 ```
-The application should scale from 0 → 1 replica automatically.
 You should receive an HTTP 200 OK response, confirming that the service is reachable again.
 
 The application scales from 0 → 1 replica automatically, and you should receive an HTTP `200 OK` response.
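To watch the request-rate trigger push the replica count above one, a simple curl loop is enough to generate sustained load (illustrative only; any load generator such as `hey` or `wrk` works just as well):

```bash
# Generate ~60 seconds of steady traffic so the sustained request
# rate exceeds the targetValue of ~10 requests/second per replica.
end=$((SECONDS + 60))
while [ $SECONDS -lt $end ]; do
  curl -s -o /dev/null -H "Host: application.keda" "http://$INGRESS_IP"
done

# In another terminal, watch the replica count increase.
watch kubectl get deployment application -n default
```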
@@ -288,4 +290,4 @@ This will delete the `ScaledObject`, Ingress, Service, and Deployment associated
 
 ## Next steps
 
-To go further, you can explore the Kedify [How-to guides](https://docs.kedify.io/how-to/) for more configurations such as Gateway API, Istio VirtualService, or OpenShift Routes.
+To go further, you can explore the [Kedify How-To Guides](https://docs.kedify.io/how-to/) for more configurations such as Gateway API, Istio VirtualService, or OpenShift Routes.

content/learning-paths/servers-and-cloud-computing/kedify-http-autoscaling/install-ingress.md

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@ layout: "learningpathall"
 
 ## Install an ingress controller for HTTP autoscaling on Kubernetes
 
-Before deploying HTTP applications with Kedify autoscaling, you need an ingress controller to handle incoming traffic. Most managed Kubernetes services (AWS EKS, Google GKE, Azure AKS) do not include an ingress controller by default. In this Learning Path, you install the NGINX Ingress Controller with Helm and target Arm64 nodes.
+Before deploying HTTP applications with Kedify autoscaling, you need an ingress controller to handle incoming traffic. Most managed Kubernetes services (AWS EKS, Google GKE, Azure AKS) do not include an ingress controller by default. In this Learning Path, you install the NGINX Ingress Controller with Helm and target arm64 nodes.
 
 {{% notice Note %}}
 If your cluster already has an ingress controller installed and configured, you can skip this step and proceed to the [Autoscale HTTP applications with Kedify and Kubernetes Ingress section](../http-scaling/).
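For reference, a typical Helm installation that pins the controller to arm64 nodes looks roughly like the following (chart values shown here are assumptions; follow the Learning Path's own commands for the exact setup):

```bash
# Add the upstream ingress-nginx chart repository.
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update

# Install the controller and schedule it on arm64 nodes only
# (the well-known kubernetes.io/arch label is set by the kubelet).
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.nodeSelector."kubernetes\.io/arch"=arm64
```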
