Actions: ggml-org/llama.cpp

Showing runs from all workflows
56,456 workflow run results

Filter by: Event, Status, Branch, Actor

Add docker protocol support for llama-server model loading
    Pull Request Labeler #16188: Pull request #15790 synchronize by ericcurtin (28m 57s)

llama : allow using iGPUs with --device
    Build on RISCV Linux Machine by Cloud-V #1083: Pull request #15951 synchronize by slaren
    Pull Request Labeler #16187: Pull request #15951 synchronize by slaren (8m 39s)
    Build on RISCV Linux Machine by Cloud-V #1082: Pull request #15951 opened by slaren
    Pull Request Labeler #16186: Pull request #15951 opened by slaren (2m 42s)

metal : allow ops to run concurrently
    Pull Request Labeler #16185: Pull request #15929 synchronize by ggerganov (8m 26s)
ProTip! You can narrow down the results and go further in time using created:<2025-09-12 or the other filters available.
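
For example, the date qualifier can be combined with the event filter in the search box to show only pull-request runs created before that date (a sketch of the query syntax; the exact qualifier names follow the filter options listed above and may need adjusting):

    event:pull_request created:<2025-09-12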