fix(nix): remove non-functional llama-cpp cachix cache from flake.nix #15295
Problem
The flake.nix includes references to `llama-cpp.cachix.org` with a comment claiming it's "Populated by the CI in ggml-org/llama.cpp", but this is misleading and caused confusion during setup.
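For context, these references live in the flake's `nixConfig`. A minimal sketch of the shape such entries take (the attribute layout is illustrative and the public key strings are elided, not the actual values):

```nix
{
  # Sketch of the relevant fragment of flake.nix; surrounding attributes omitted.
  nixConfig = {
    extra-substituters = [
      # "Populated by the CI in ggml-org/llama.cpp" <- the misleading comment
      "https://llama-cpp.cachix.org"
      "https://cuda-maintainers.cachix.org"
    ];
    extra-trusted-public-keys = [
      "llama-cpp.cachix.org-1:..."        # key elided
      "cuda-maintainers.cachix.org-1:..." # key elided
    ];
  };
}
```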
Personal Experience
I spent over an hour debugging why llama.cpp builds were taking so long, assuming the cache was functional based on the documentation. Only after extensive investigation did I realize the cache is not being populated.
Evidence the cache is non-functional:

- No CI workflow exists: searched `.github/workflows/`; no workflows populate the cachix cache.
- Cache is accessible but empty for recent builds (see the check sketched below).

Note: Dependencies are cached (155 paths, 704MB), but llama.cpp itself is not.
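A sketch of one way to run that check, assuming a checkout of the repo and a `nix` CLI with flakes enabled (the `.#default` flake attribute is an assumption):

```sh
# Evaluate the store path the flake would produce (evaluation only, no build).
out=$(nix eval --raw .#default.outPath)

# Ask the cache for the corresponding narinfo.
# HTTP 200 means the path is cached; 404 means it is not.
hash=$(basename "$out" | cut -d- -f1)
curl -s -o /dev/null -w '%{http_code}\n' "https://llama-cpp.cachix.org/${hash}.narinfo"
```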
Solution
Remove the non-functional cache references entirely, leaving only the working `cuda-maintainers.cachix.org` cache that actually provides CUDA dependencies.

This prevents other users from going through the same debugging process I did. The cache can always be re-added later if CI gets set up to populate it.
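After the change, `nixConfig` keeps only the working cache; again a sketch with the key elided:

```nix
{
  nixConfig = {
    extra-substituters = [
      # Provides prebuilt CUDA dependencies.
      "https://cuda-maintainers.cachix.org"
    ];
    extra-trusted-public-keys = [
      "cuda-maintainers.cachix.org-1:..." # key elided
    ];
  };
}
```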
Testing
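A plausible way to verify the change locally, assuming flakes are enabled (these commands are a sketch, not the exact steps used):

```sh
# Confirm the flake still evaluates and its checks pass.
nix flake check

# Confirm a build still works with only the remaining substituter configured.
nix build .#default --print-build-logs
```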