Actions: ggml-org/llama.cpp

Showing runs from all workflows
58,571 workflow run results

64 bit CUDA copy routines via GGML_CUDA_ALLOW_LARGE_TENSORS
Pull Request Labeler #16018: Pull request #15298 synchronize by createthis
1h 36m 44s

Ignore vim swap files in tests
Pull Request Labeler #16017: Pull request #15901 opened by createthis
1h 27m 29s

tests : filter out no-ops from coverage report
Pull Request Labeler #16016: Pull request #15900 opened by danbev
44m 53s

Add docker protocol support for llama-server model loading
Pull Request Labeler #16015: Pull request #15790 synchronize by ericcurtin
30m 16s

Add docker protocol support for llama-server model loading
Pull Request Labeler #16014: Pull request #15790 synchronize by ericcurtin
22m 17s

metal : make the backend async
Build on RISCV Linux Machine by Cloud-V #915: Pull request #15832 synchronize by ggerganov

metal : make the backend async
Pull Request Labeler #16013: Pull request #15832 synchronize by ggerganov
11m 31s

ProTip! You can narrow down the results and go further in time using created:<2025-09-09 or the other filters available.
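
The created:<2025-09-09 qualifier mentioned in the tip uses the same date-range syntax that the GitHub REST API accepts when listing workflow runs. Below is a minimal sketch, not taken from the page itself, of applying that filter programmatically; it assumes the public api.github.com "list workflow runs" endpoint for the ggml-org/llama.cpp repository and an unauthenticated request (subject to rate limits).

    import json
    import urllib.parse
    import urllib.request

    def list_runs_before(repo: str, date: str, per_page: int = 10) -> list[dict]:
        """List workflow runs created before `date`, using the same syntax as the web filter."""
        # The REST API's `created` parameter takes search-style ranges such as "<2025-09-09".
        query = urllib.parse.urlencode({"created": f"<{date}", "per_page": per_page})
        url = f"https://api.github.com/repos/{repo}/actions/runs?{query}"
        req = urllib.request.Request(url, headers={"Accept": "application/vnd.github+json"})
        with urllib.request.urlopen(req) as resp:
            data = json.load(resp)
        return data.get("workflow_runs", [])

    if __name__ == "__main__":
        # Hypothetical usage: reproduce the kind of listing shown above, older than 2025-09-09.
        for run in list_runs_before("ggml-org/llama.cpp", "2025-09-09"):
            print(run["run_number"], run["name"], run["display_title"])

The same qualifiers offered by the page's filter controls (event, status, branch, actor) can be combined with the date range either in the Actions search box or as query parameters on that endpoint.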