Kangal 1.6.1, K8s 1.24: race condition between JMeter master and worker #332

Description

@flah00

I am running into a race condition between the JMeter master and worker pods. It used to be that most Kangal runs configured themselves and succeeded: the workers would be running before the master, and the master registered all of them. However, roughly 1 in 4 masters would come up before all of the workers and register only a portion of them.

Recently, the master is always running before the workers, even when there is only 1 worker.

I used to see the workers in a Pending state with the master pod going Pending 5-10 seconds later, but lately I've noticed the workers still in Init while the master is already Running.
This leads to 0 workers being detected, and I have to manually delete the master pod so that it re-registers the worker pods. The backoff limit of the master job is hard-coded to 1, so I can only kill the master pod once before the job is considered a failure.

Configuration

Our JMeter pods run on dedicated, tainted nodes, which Karpenter 0.33.1 provisions for us. I am also using custom data.

Solution?

It seems like the master job should not be scheduled until all of the workers are Running, for example by having the controller create the master Job with spec.suspend: true and unsuspend it once every worker pod is ready.

Workaround

To work around this, a script submits the Kangal load test and waits for the master job to be created. The job is immediately patched to be suspended. The script then waits until all workers are Running, and only then unsuspends the master job. This consistently prevents the JMeter master from registering too few workers.
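
For illustration, here is a minimal sketch of that workaround using the official Python `kubernetes` client (the issue only says "a script", so this is one possible shape of it). The namespace, master Job name, worker label selector, and expected worker count are placeholders I've assumed; substitute whatever names your Kangal version actually generates.

```python
# Sketch of the suspend/unsuspend workaround described above.
# All names below (namespace, job name, label selector, worker count)
# are assumed placeholders -- adjust them to what Kangal creates for you.
import time

from kubernetes import client, config
from kubernetes.client.rest import ApiException

NAMESPACE = "loadtest-example"               # placeholder namespace created for the test
MASTER_JOB = "loadtest-master"               # placeholder name of the JMeter master Job
WORKER_SELECTOR = "app=loadtest-worker-pod"  # placeholder label selector for worker pods
EXPECTED_WORKERS = 4                         # number of workers requested in the load test

config.load_kube_config()
batch = client.BatchV1Api()
core = client.CoreV1Api()


def suspend_master_job() -> None:
    """Poll until the master Job exists, then immediately patch it to suspended."""
    while True:
        try:
            batch.read_namespaced_job(MASTER_JOB, NAMESPACE)
            break
        except ApiException as exc:
            if exc.status != 404:
                raise
            time.sleep(2)
    # Job.spec.suspend is stable as of Kubernetes 1.24, so this patch alone parks the Job.
    batch.patch_namespaced_job(MASTER_JOB, NAMESPACE, {"spec": {"suspend": True}})


def wait_for_workers() -> None:
    """Block until every expected worker pod reports phase Running."""
    while True:
        pods = core.list_namespaced_pod(NAMESPACE, label_selector=WORKER_SELECTOR)
        running = [p for p in pods.items if p.status.phase == "Running"]
        if len(running) >= EXPECTED_WORKERS:
            return
        time.sleep(5)


if __name__ == "__main__":
    suspend_master_job()
    wait_for_workers()
    # Unsuspend only once all workers are up, so the master registers every one of them.
    batch.patch_namespaced_job(MASTER_JOB, NAMESPACE, {"spec": {"suspend": False}})
```

The same gating logic could in principle live in the Kangal controller itself, which already knows how many workers it created and how they are labelled.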

Conclusion

Is anyone else running into this? If so, how are you dealing with it?
If it is widespread, can we look to the Kangal controller for relief?
