Headless not working correctly #3

@Spritekin

Description

Hi,
First, congratulations on the operator; I was looking for exactly this kind of tool. Thank you so much!

I installed the operator (chart 0.1.8), then created two nearly identical tests: one normal and one headless. Both have one master and one worker.

Here are the created pods. Notice that the headless test (hl) created two masters: the one in "Error" state has the full run log, while the one that says "Running" is still waiting for a worker to connect.

 % kubectl get pods
performance-test-hl-master-qqzml          1/1     Running   0             127m
performance-test-hl-master-vsxvl          0/1     Error     0             140m
performance-test-master-ht6xh             1/1     Running   0             19h
performance-test-worker-n78gp             1/1     Running   0             19h

Here are the LocustTest resources:

 % kubectl get lt 
NAME                  STATE     WORKERS   FAIL_RATIO   RPS   USERS   AGE
performance-test      RUNNING   1/1       0%           4     10      20h
performance-test-hl   CREATED                                        143m

And here are their specs; note that the only difference is the --headless flag in the args:

% kubectl get lt -o yaml        
apiVersion: v1
items:
- apiVersion: locust.cloud/v1
  kind: LocustTest
  metadata:
    annotations:
      locust.cloud/kopf-managed: "yes"
      locust.cloud/last-handled-configuration: |
        {"spec":{"args":"-f landmarkuser.py --host=https://my.site.com --run-time=10m --users=5 --spawn-rate=1","env":[{"name":"AUTH_TOKEN","value":"dummy=="}],"image":"locustio/locust:2.31.5","locustfile":{"configMap":{"name":"performance-test"}},"workers":1},"metadata":{"labels":{"app.kubernetes.io/managed-by":"Helm"},"annotations":{"meta.helm.sh/release-name":"performance-test","meta.helm.sh/release-namespace":"default"}}}
      meta.helm.sh/release-name: performance-test
      meta.helm.sh/release-namespace: default
    creationTimestamp: "2026-01-29T07:32:12Z"
    finalizers:
    - locust.cloud/kopf-finalizer
    generation: 2
    labels:
      app.kubernetes.io/managed-by: Helm
    name: performance-test
    namespace: default
  spec:
    args: -f landmarkuser.py --host=https://my.site.com
      --run-time=10m --users=5 --spawn-rate=1
    env:
    - name: AUTH_TOKEN
      value: dummy==
    image: locustio/locust:2.31.5
    locustfile:
      configMap:
        name: performance-test
    workers: 1
  status:
    fail_ratio: 0%
    state: RUNNING
    total_rps: 4
    user_count: 10
    worker_count: 1
    worker_ratio: 1/1
- apiVersion: locust.cloud/v1
  kind: LocustTest
  metadata:
    annotations:
      locust.cloud/kopf-managed: "yes"
      locust.cloud/last-handled-configuration: |
        {"spec":{"args":"-f landmarkuser.py --host=https://my.site.com --run-time=10m --users=5 --spawn-rate=1 --headless","env":[{"name":"AUTH_TOKEN","value":"dummy=="}],"image":"locustio/locust:2.31.5","locustfile":{"configMap":{"name":"performance-test-hl"}},"workers":1},"metadata":{"labels":{"app.kubernetes.io/managed-by":"Helm"},"annotations":{"meta.helm.sh/release-name":"performance-test-hl","meta.helm.sh/release-namespace":"default"}}}
      meta.helm.sh/release-name: performance-test-hl
      meta.helm.sh/release-namespace: default
    creationTimestamp: "2026-01-30T01:38:29Z"
    finalizers:
    - locust.cloud/kopf-finalizer
    generation: 1
    labels:
      app.kubernetes.io/managed-by: Helm
    name: performance-test-hl
    namespace: default
    resourceVersion: "219940277"
    uid: 78deda8a-3d39-4c9e-878f-a59c15b8f693
  spec:
    args: -f landmarkuser.py --host=https://my.site.com
      --run-time=10m --users=5 --spawn-rate=1 --headless
    env:
    - name: AUTH_TOKEN
      value: dummy==
    image: locustio/locust:2.31.5
    locustfile:
      configMap:
        name: performance-test-hl
    workers: 1
  status:
    state: CREATED
kind: List
metadata:
  resourceVersion: ""

Now, to be honest, I feel a headless run on Kubernetes might be unnecessary; that is just not the purpose headless mode was intended for. It would be much more helpful to have a "headless" sidecar that controls the run, i.e. a curl sidecar that starts a run through the web UI API once the web UI is up:

% curl -X POST http://localhost:8089/swarm \
  -d "user_count=10" \
  -d "spawn_rate=1" \
  -d "host=https://my.site.com"

(Note: the /swarm endpoint expects form-encoded parameters, not JSON.)
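To sketch the idea: if the chart allowed injecting an extra container into the master pod, the sidecar could poll the web UI until it is reachable and then kick off the run. This is only a sketch under assumptions; the container name, the image, and the ability to add extra containers are all hypothetical, not current operator features:

```
# Hypothetical sidecar on the master pod. Assumes the operator/chart
# supported an "extraContainers"-style field, which it currently does not.
- name: swarm-starter
  image: curlimages/curl:8.5.0
  command:
    - sh
    - -c
    - |
      # Poll until the Locust web UI answers, then start the run.
      until curl -sf http://localhost:8089/ > /dev/null; do
        sleep 2
      done
      curl -X POST http://localhost:8089/swarm \
        -d "user_count=10" \
        -d "spawn_rate=1" \
        -d "host=https://my.site.com"
```

This would keep the master in its normal (web UI) mode, so the operator's status reporting would still work, while the sidecar merely automates pressing "Start".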

Anyway, I will leave this here.

Thanks again!

P.S. It would be nice to register the chart on Artifact Hub.
