   worker:
     command: "--locustfile /lotest/src/demo_test.py"
     replicas: 2
+  observability:
+    openTelemetry:
+      enabled: true
+      endpoint: "otel-collector:4317"
+      insecure: true
 EOF
 
 # 5. Watch progress
@@ -217,6 +222,11 @@ spec:
   worker:
     command: "--locustfile /lotest/src/demo_test.py"
     replicas: 2
+  observability:
+    openTelemetry:
+      enabled: true
+      endpoint: "otel-collector:4317"
+      insecure: true
 EOF
 ```
 
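The `endpoint: "otel-collector:4317"` value added above assumes an OpenTelemetry Collector is reachable in the cluster under that service name. As a sketch only (the collector itself is not part of this change, and the service name is an assumption), a minimal collector configuration accepting OTLP over gRPC could look like:

```yaml
# Illustrative collector config, not from this PR. The service exposing it
# must be named "otel-collector" and listen on 4317 to match the LocustTest
# spec above.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
processors:
  batch: {}
exporters:
  debug: {}
service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug]
```

The `debug` exporter just prints received telemetry to the collector's stdout, which is enough to verify that the Locust pods are exporting.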
@@ -227,6 +237,7 @@ This creates a distributed load test with:
 - **Spawn rate**: 2 users per second
 - **Duration**: 1 minute
 - **Workers**: 2 worker replicas
+- **OpenTelemetry**: Enabled
 
 ### Step 5: Watch Test Execution
 
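A quick sanity check on the numbers above (a sketch; the figures come from the sample spec, the helper is not part of the operator):

```python
# Ramp-up math for the sample test: 10 users spawned at 2 users/second
# over a 1-minute run.
users = 10
spawn_rate = 2          # users spawned per second
run_time_s = 60         # --run-time 1m

ramp_up_s = users / spawn_rate            # time until all users are active
steady_state_s = run_time_s - ramp_up_s   # time spent at full load

print(ramp_up_s)        # 5.0 seconds of ramp-up
print(steady_state_s)   # 55.0 seconds at full load
```

So nearly the whole 1-minute run executes at the full 10-user load, which keeps the quick-start results easy to interpret.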
@@ -284,7 +295,7 @@ Then open http://localhost:8089 in your browser to see:
 - Worker status
 
 !!! note "Web UI Availability"
-    The web UI is only available while the master pod is running. After the test completes, the Job pod may terminate (depending on your TTL settings).
+    The web UI is available while the master job is running. After the test completes (1 minute runtime), the job stays in completed state and you can still port-forward to view final results.
 
 ### Step 7: Cleanup
 
@@ -419,7 +430,7 @@ kubectl logs -l app=locust,role=worker --tail=50
 
 ## Advanced Testing
 
-### Test with Local Build
+### Testing with Local Builds
 
 To test local changes to the operator code:
 
@@ -437,142 +448,21 @@ helm install locust-operator ./charts/locust-k8s-operator \
   --set image.pullPolicy=IfNotPresent
 ```
 
-!!! tip "Use Local Chart"
-    When testing Helm chart changes, install from your local chart directory instead of the published repository.
+### Testing Production Configuration
 
-### Test Production Features
-
-Try the production-ready example with resource limits, affinity, and tolerations:
+Test resource limits, node affinity, and other production features:
 
 ```bash
 kubectl apply -f config/samples/locusttest_v2_production.yaml
 ```
 
-Or test OpenTelemetry integration:
-
-```bash
-kubectl apply -f config/samples/locust_v2_locusttest_with_otel.yaml
-```
-
-??? example "Custom Load Profile"
-    Create a more realistic load test with custom stages:
-    ```yaml
-    apiVersion: locust.io/v2
-    kind: LocustTest
-    metadata:
-      name: staged-test
-    spec:
-      image: locustio/locust:2.20.0
-      testFiles:
-        configMapRef: demo-test
-      master:
-        command: |
-          --locustfile /lotest/src/demo_test.py
-          --host https://httpbin.org
-          --headless
-          --users 100
-          --spawn-rate 10
-          --run-time 5m
-        resources:
-          requests:
-            cpu: 500m
-            memory: 512Mi
-          limits:
-            cpu: 1000m
-            memory: 1Gi
-      worker:
-        command: "--locustfile /lotest/src/demo_test.py"
-        replicas: 5
-        resources:
-          requests:
-            cpu: 500m
-            memory: 512Mi
-          limits:
-            cpu: 1000m
-            memory: 1Gi
-    ```
-
-### Automated Validation Script
-
-Create a validation script for CI/CD pipelines:
-
-```bash
-#!/bin/bash
-set -e
-
-# Validation script for Kind cluster testing
-CLUSTER_NAME="locust-ci-test"
-NAMESPACE="locust-system"
-TEST_NAME="ci-validation"
-
-echo "Creating Kind cluster..."
-kind create cluster --name $CLUSTER_NAME
-
-echo "Installing operator..."
-helm repo add locust-k8s-operator https://abdelrhmanhamouda.github.io/locust-k8s-operator/
-helm repo update
-helm install locust-operator locust-k8s-operator/locust-k8s-operator \
-  --namespace $NAMESPACE \
-  --create-namespace \
-  --wait \
-  --timeout 5m
-
-echo "Waiting for operator to be ready..."
-kubectl wait --for=condition=ready pod -l app.kubernetes.io/name=locust-k8s-operator \
-  -n $NAMESPACE --timeout=5m
-
-echo "Creating test resources..."
-kubectl create configmap demo-test --from-literal=demo_test.py='
-from locust import HttpUser, task
-class DemoUser(HttpUser):
-    @task
-    def get_homepage(self):
-        self.client.get("/")
-'
-
-kubectl apply -f - <<EOF
-apiVersion: locust.io/v2
-kind: LocustTest
-metadata:
-  name: $TEST_NAME
-spec:
-  image: locustio/locust:2.20.0
-  testFiles:
-    configMapRef: demo-test
-  master:
-    command: "--locustfile /lotest/src/demo_test.py --host https://httpbin.org --users 10 --spawn-rate 2 --run-time 1m"
-  worker:
-    command: "--locustfile /lotest/src/demo_test.py"
-    replicas: 2
-EOF
-
-echo "Waiting for test to complete..."
-kubectl wait --for=jsonpath='{.status.phase}'=Succeeded locusttest/$TEST_NAME \
-  --timeout=5m || {
-  echo "Test failed or timed out!"
-  kubectl describe locusttest/$TEST_NAME
-  kubectl logs job/${TEST_NAME}-master
-  exit 1
-}
-
-echo "Validation successful!"
-kubectl get locusttest/$TEST_NAME -o yaml
-
-echo "Cleaning up..."
-kind delete cluster --name $CLUSTER_NAME
-```
-
-## Success Criteria
-
-The validation is successful if:
+This sample includes:
 
-1. ✅ Operator installs without errors
-2. ✅ LocustTest CR is accepted and reconciled
-3. ✅ Master job and worker deployment are created
-4. ✅ Workers successfully connect to master
-5. ✅ Load test runs and completes (transitions to `Succeeded` phase)
-6. ✅ No errors in operator logs
-7. ✅ Resources are cleaned up properly when CR is deleted
+- Resource requests and limits
+- Node affinity and tolerations
+- Horizontal Pod Autoscaler configuration
+- OpenTelemetry integration
+- Autostart/autoquit for automated testing
 
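For orientation, a spec exercising the resource-related parts of that list might look roughly like the following. This is a sketch only: the `resources` layout matches examples shown elsewhere in these docs, but `config/samples/locusttest_v2_production.yaml` remains the authoritative reference for the full schema (affinity, tolerations, HPA, autostart).

```yaml
# Illustrative only -- not the contents of the production sample file.
apiVersion: locust.io/v2
kind: LocustTest
metadata:
  name: production-style-test
spec:
  image: locustio/locust:2.20.0
  testFiles:
    configMapRef: demo-test
  master:
    command: "--locustfile /lotest/src/demo_test.py --host https://httpbin.org --users 10 --spawn-rate 2 --run-time 1m"
    resources:
      requests:
        cpu: 500m
        memory: 512Mi
      limits:
        cpu: 1000m
        memory: 1Gi
  worker:
    command: "--locustfile /lotest/src/demo_test.py"
    replicas: 2
    resources:
      requests:
        cpu: 500m
        memory: 512Mi
      limits:
        cpu: 1000m
        memory: 1Gi
```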
 ## Next Steps
 