
Commit 1adc981

fix(nix): remove non-functional llama-cpp cachix cache from flake.nix (#15295)
The flake.nix included references to the llama-cpp.cachix.org cache with a comment claiming it is "Populated by the CI in ggml-org/llama.cpp", but:

1. No visible CI workflow populates this cache.
2. The cache is empty for recent builds (tested b6150, etc.).
3. This misleads users into expecting pre-built binaries that don't exist.

This change removes the non-functional cache references entirely, leaving only the working cuda-maintainers cache, which actually provides CUDA dependencies. Users can still manually add the llama-cpp cache if it becomes functional in the future.
1 parent b3e1666 commit 1adc981

File tree

1 file changed: +0 −5 lines


flake.nix

Lines changed: 0 additions & 5 deletions
@@ -36,9 +36,6 @@
 # ```
 # nixConfig = {
 #   extra-substituters = [
-#     # Populated by the CI in ggml-org/llama.cpp
-#     "https://llama-cpp.cachix.org"
-#
 #     # A development cache for nixpkgs imported with `config.cudaSupport = true`.
 #     # Populated by https://hercules-ci.com/github/SomeoneSerge/nixpkgs-cuda-ci.
 #     # This lets one skip building e.g. the CUDA-enabled openmpi.
@@ -47,10 +44,8 @@
 #   ];
 #
 #   # Verify these are the same keys as published on
-#   # - https://app.cachix.org/cache/llama-cpp
 #   # - https://app.cachix.org/cache/cuda-maintainers
 #   extra-trusted-public-keys = [
-#     "llama-cpp.cachix.org-1:H75X+w83wUKTIPSO1KWy9ADUrzThyGs8P5tmAbkWhQc="
 #     "cuda-maintainers.cachix.org-1:0dq3bujKpuEPMCX6U4WylrUDZ9JyUG0VpVZa7CNfq5E="
 #   ];
 # };
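For context, if the llama-cpp cache ever becomes functional again, a user could re-add it manually in their own flake. A minimal sketch of what that nixConfig might look like, assuming the same cache URL and public key that this commit removes (both taken from the deleted lines above):

# Hypothetical user-side nixConfig; only useful if llama-cpp.cachix.org
# is actually being populated again.
{
  nixConfig = {
    extra-substituters = [
      # The cache removed by this commit, re-added by the user.
      "https://llama-cpp.cachix.org"
      # The working CUDA dependency cache that this commit keeps.
      "https://cuda-maintainers.cachix.org"
    ];
    # Verify these match the keys published on app.cachix.org before trusting them.
    extra-trusted-public-keys = [
      "llama-cpp.cachix.org-1:H75X+w83wUKTIPSO1KWy9ADUrzThyGs8P5tmAbkWhQc="
      "cuda-maintainers.cachix.org-1:0dq3bujKpuEPMCX6U4WylrUDZ9JyUG0VpVZa7CNfq5E="
    ];
  };
}

Note that extra-substituters and extra-trusted-public-keys extend the system defaults rather than replace them, so this stays additive to the user's existing Nix configuration.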
