llama-cpp: allow to override ROCm GPU targets #435263
philiptaron merged 1 commit into NixOS:master from
Conversation
Requesting review from known AMD GPU person @K900. Please forward if you know someone better.
|
I will merge if they approve!
|
Would it be better to have a single …? Otherwise seems fine.
|
I'd rather not invent an abstraction mechanism like that, but if one already exists and is solid, let's consider using it.
|
torch: nixpkgs/pkgs/development/python-modules/torch/source/default.nix, lines 177 to 193 in 1236127. whisper: … So we already have fragmentation. No strong feelings.
|
llama-cpp can be built with both ROCm and CUDA enabled at the same time, so it's better not to mix architectures in the same arg.
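As a sketch of why keeping the backends in separate arguments helps, here is a hypothetical override with both enabled at once. The attribute names below (`rocmGpuTargets`, `cudaSupport`, `rocmSupport`) are assumptions mirroring common nixpkgs conventions, not taken from this PR's diff:

```nix
# Hypothetical override: both GPU backends enabled at the same time,
# each selecting its targets independently, so AMD gfx names and CUDA
# compute capabilities never end up mixed in one list.
llama-cpp.override {
  rocmSupport = true;
  rocmGpuTargets = [ "gfx1100" ];  # assumed attribute name: AMD targets only
  cudaSupport = true;
  # CUDA capabilities would be chosen through their own mechanism
  # (e.g. nixpkgs' cudaCapabilities config), not via rocmGpuTargets.
}
```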
|
@philiptaron should this be merged then? :) |
|
I'm looking for the approval ☑️ |
This significantly reduces build time when building for a single GPU. The same was done for whisper-cpp and for the llama-cpp flake.
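The description above can be sketched as an overlay that pins the build to one GPU target. The attribute name `rocmGpuTargets` and the target `gfx906` are illustrative assumptions, not taken from the diff:

```nix
# overlay.nix — hypothetical sketch: restrict the ROCm build to a single
# gfx target so kernels are compiled for one architecture instead of the
# whole default target list, cutting build time accordingly.
final: prev: {
  llama-cpp = prev.llama-cpp.override {
    rocmSupport = true;
    rocmGpuTargets = [ "gfx906" ];  # assumed attribute name and target
  };
}
```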
Things done
Tested, as applicable: passthru.tests
Ran nixpkgs-review on this PR. See nixpkgs-review usage.
Tested basic functionality of all binary files (usually in ./result/bin/).