
Conversation

danbev (Member) commented Sep 16, 2025

This commit updates the runs-on field for the macOS arm64 webgpu build job to use macos-latest instead of just latest.

The motivation is that this job can wait a very long time, sometimes over 7 hours, for a runner to pick it up. This change is an attempt to see whether it reduces the wait time.

Refs: https://github.com/ggml-org/llama.cpp/actions/runs/17754163447/job/50454257570?pr=16004


With this change a runner picked up the job fairly quickly:
https://github.com/danbev/llama.cpp/actions/runs/17764796649/job/50485497894#logs
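
For reference, the change itself is a one-line edit to the job's runs-on field. A minimal before/after sketch, assuming the job definition quoted in the review discussion below (the surrounding keys and indentation are assumptions; only the runs-on values come from this PR):

```yaml
jobs:
  macOS-latest-cmake-arm64-webgpu:
    # before: the job waited on a runner registered with the bare label
    # "latest", sometimes for over 7 hours
    # runs-on: latest
    #
    # after: use GitHub's hosted macOS runner pool instead
    runs-on: macos-latest
```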

github-actions bot added the devops label (improvements to build systems and github actions) Sep 16, 2025
danbev (Member Author) commented on these lines in the build workflow:

```yaml
          ctest -L main --verbose --timeout 900

  macOS-latest-cmake-arm64-webgpu:
    runs-on: latest
```
Is this actually a self-hosted runner (`latest`) that is used in this job?

Could someone bring me up to speed on why we could not use macos-latest instead, which feels like it might be more reliable?
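
For context, runs-on: macos-latest selects a GitHub-hosted runner image, while a bare custom label such as latest would normally only be picked up by a self-hosted (or otherwise custom) runner registered with that exact label. A hypothetical sketch of the two forms (job names are illustrative, not from this repository):

```yaml
jobs:
  github-hosted-example:            # hypothetical job name
    runs-on: macos-latest           # GitHub-hosted macOS runner pool

  self-hosted-example:              # hypothetical job name
    # a self-hosted runner is selected by labels; every listed label must match
    runs-on: [self-hosted, macOS, ARM64]
```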

Member commented:

I don't think it is a self-hosted runner. If macos-latest runs, I don't think there is a problem with using it. cc @reeselevine

danbev (Member Author) commented:

The job was picked up, but there was a test failure which seems to be the same issue that the other macOS-latest-cmake-arm64 job fails on. So perhaps this could be merged just the same, since it at least enabled the job to run.

danbev merged commit 7747553 into ggml-org:master Sep 16, 2025
42 of 44 checks passed
angt pushed a commit to angt/llama.cpp that referenced this pull request Sep 17, 2025
danbev deleted the ci-macos-latest-make-arm64-webgpu branch September 24, 2025 06:18
