
Conversation

@bhcopeland
Member

Add tree priority lookup using string values (high/medium/low) instead
of numeric priorities. Tree priorities are read from build configs and
passed to LAVA runtime via node data.

Priority assignments:

  • high: mainline, stable, stable-rc
  • medium: next, stable-rt, kselftest
  • low: android, cip, kernelci (default)

Human submissions always get highest priority for bisection/debugging.

Signed-off-by: Ben Copeland <ben.copeland@linaro.org>
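
In practice, the lookup could look something like the following (a minimal sketch; the table and helper are illustrative, not the actual kernelci-pipeline code, which reads the priorities from the build configs):

  # Illustrative only: in the real change the priorities come from the
  # build configs rather than a hardcoded table.
  TREE_PRIORITIES = {
      'mainline': 'high',
      'stable': 'high',
      'stable-rc': 'high',
      'next': 'medium',
      'stable-rt': 'medium',
      'kselftest': 'medium',
      'android': 'low',
      'cip': 'low',
      'kernelci': 'low',
  }

  def get_tree_priority(tree_name):
      """Return the string priority for a tree, defaulting to 'low'."""
      return TREE_PRIORITIES.get(tree_name, 'low')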

Define SERVICE_PIPELINE constant in base.py and import it in
trigger.py and send_kcidb.py to replace hardcoded strings.

Signed-off-by: Ben Copeland <ben.copeland@linaro.org>
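
For illustration, the constant change amounts to something like this (a sketch; the actual value assigned to SERVICE_PIPELINE here is an assumption):

  # base.py
  SERVICE_PIPELINE = 'pipeline'  # assumed value, for illustration only

  # trigger.py and send_kcidb.py
  from base import SERVICE_PIPELINE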
@patersonc
Contributor

Hi @bhcopeland

Thanks for the PR.

There are a few instances of priority_min and priority_max in config/pipeline.yaml. Do these need updating somehow? Or do these serve a different purpose?

Chris

@patersonc
Contributor

Another query,

Would we be better off avoiding the "high" priority in the LAVA jobs?

Otherwise labs won't have a way to inject jobs with a higher priority than incoming KernelCI jobs, since "high" is the maximum priority level (in my understanding).

Maybe we could keep "low, medium, high" in the kernelci yaml files, but translate that to something like "20, 50, 80" for the LAVA job definitions?

@bhcopeland
Member Author

> Hi @bhcopeland
>
> Thanks for the PR.
>
> There are a few instances of priority_min and priority_max in config/pipeline.yaml. Do these need updating somehow? Or do these serve a different purpose?
>
> Chris

No worries!

This is correct: priority_min and priority_max let each lab define its own priority range. The 0-100 priority values are scaled to fit within each lab's range. IMO, they don't need updating; they serve a different purpose.
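
As an illustration of that scaling (a sketch of the idea, not the actual runtime code), a 0-100 value can be mapped linearly into a lab's [priority_min, priority_max] window:

  def scale_priority(value, priority_min, priority_max):
      """Map a 0-100 KernelCI priority into a lab's configured range."""
      return priority_min + (value * (priority_max - priority_min)) // 100

For example, with priority_min=0 and priority_max=50, a KernelCI priority of 75 maps to 37.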

> Another query,
>
> Would we be better off avoiding the "high" priority in the LAVA jobs?
>
> Otherwise labs won't have a way to inject jobs with a higher priority than incoming KernelCI jobs, since "high" is the maximum priority level (in my understanding).
>
> Maybe we could keep "low, medium, high" in the kernelci yaml files, but translate that to something like "20, 50, 80" for the LAVA job definitions?

You're right. If we use values at the top of the range (90-100), labs can't inject their own urgent jobs above KernelCI jobs.

My current values:

  PRIORITY_HIGHEST = 90  # human submissions
  PRIORITY_HIGH = 75     # mainline, stable
  PRIORITY_MEDIUM = 50   # next, kselftest
  PRIORITY_LOW = 25      # android, cip

Suggested:

  PRIORITY_HIGHEST = 80  # human submissions (was 90)
  PRIORITY_HIGH = 60     # mainline, stable
  PRIORITY_MEDIUM = 40   # next, kselftest
  PRIORITY_LOW = 20      # android, cip

This leaves 81-100 for labs to use for their own urgent jobs.
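
Put together, the translation Chris suggested could look like this (an illustrative sketch; the names and where the lookup lives are assumptions):

  # Keep 'low'/'medium'/'high' in the kernelci yaml files and translate
  # to numbers only when generating the LAVA job definition.
  LAVA_PRIORITIES = {
      'highest': 80,  # human submissions
      'high': 60,     # mainline, stable
      'medium': 40,   # next, kselftest
      'low': 20,      # android, cip
  }

  def lava_priority(name):
      """Translate a string priority into a LAVA numeric priority."""
      return LAVA_PRIORITIES.get(name, LAVA_PRIORITIES['low'])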

@patersonc
Contributor

Okay thanks Ben. Sounds good to me.

I couldn't see where PRIORITY_HIGH etc. were defined?

@patersonc
Contributor

> Okay thanks Ben. Sounds good to me.
>
> I couldn't see where PRIORITY_HIGH etc. were defined?

Scratch that - I've just seen the PR you've linked to!

@bhcopeland
Member Author

I did a temporary push with high priority to make sure it is set, and it worked fine: https://lava.ciplatform.org/scheduler/job/1379868

Member

@nuclearcat nuclearcat left a comment


As we discussed on Discord, this PR is well tested, so merging it now.

@nuclearcat nuclearcat added this pull request to the merge queue Jan 15, 2026
Merged via the queue into kernelci:main with commit 8dfdd95 Jan 15, 2026
7 checks passed