Actions: ggml-org/llama.cpp

Showing runs from all workflows
37,975 workflow run results

SYCL: Add mrope kernel
Pull Request Labeler #11792: Pull request #13755 synchronize by qnixsynapse
38m 21s
feat: Optimize rope operations with vectorization
Python check requirements.txt #3214: Commit c722720 pushed by qnixsynapse
15m 10s · branch: sycl/mrope
ggml-cpu : split arch-specific implementations
Pull Request Labeler #11791: Pull request #13892 opened by xctan
41m 11s
kv-cache : refactor + add llama_memory_state_i
Pull Request Labeler #11790: Pull request #13746 synchronize by ggerganov
33m 27s
kv-cache : refactor + add llama_memory_state_i
Pull Request Labeler #11789: Pull request #13746 synchronize by ggerganov
1h 0m 1s
cmake: Guard GGML_CPU_ALL_VARIANTS by architecture
Pull Request Labeler #11788: Pull request #13890 opened by ckastner
53m 2s
[WIP] model: add new model minimax-text-01
Pull Request Labeler #11787: Pull request #13889 opened by qscqesze
33m 39s