Commit 9297568

Add backend/index.yaml entries for llama-cpp on rocm7

Signed-off-by: Simon Redman <[email protected]>
1 parent: 88a3ced

1 file changed (+12, -0)

backend/index.yaml (12 additions, 0 deletions)
@@ -22,6 +22,7 @@
     nvidia: "cuda12-llama-cpp"
     intel: "intel-sycl-f16-llama-cpp"
     amd-rocm-6: "rocm6-llama-cpp"
+    amd-rocm-7: "rocm7-llama-cpp"
     metal: "metal-llama-cpp"
     vulkan: "vulkan-llama-cpp"
     nvidia-l4t: "nvidia-l4t-arm64-llama-cpp"
@@ -534,6 +535,7 @@
     nvidia: "cuda12-llama-cpp-development"
     intel: "intel-sycl-f16-llama-cpp-development"
     amd-rocm-6: "rocm6-llama-cpp-development"
+    amd-rocm-7: "rocm7-llama-cpp-development"
     metal: "metal-llama-cpp-development"
     vulkan: "vulkan-llama-cpp-development"
     nvidia-l4t: "nvidia-l4t-arm64-llama-cpp-development"
@@ -662,6 +664,11 @@
   uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-amd-rocm-6-llama-cpp"
   mirrors:
     - localai/localai-backends:latest-gpu-amd-rocm-6-llama-cpp
+- !!merge <<: *llamacpp
+  name: "rocm7-llama-cpp"
+  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-amd-rocm-7-llama-cpp"
+  mirrors:
+    - localai/localai-backends:latest-gpu-amd-rocm-7-llama-cpp
 - !!merge <<: *llamacpp
   name: "intel-sycl-f32-llama-cpp"
   uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-intel-sycl-f32-llama-cpp"
@@ -702,6 +709,11 @@
   uri: "quay.io/go-skynet/local-ai-backends:master-gpu-amd-rocm-6-llama-cpp"
   mirrors:
     - localai/localai-backends:master-gpu-amd-rocm-6-llama-cpp
+- !!merge <<: *llamacpp
+  name: "rocm7-llama-cpp-development"
+  uri: "quay.io/go-skynet/local-ai-backends:master-gpu-amd-rocm-7-llama-cpp"
+  mirrors:
+    - localai/localai-backends:master-gpu-amd-rocm-7-llama-cpp
 - !!merge <<: *llamacpp
   name: "intel-sycl-f32-llama-cpp-development"
   uri: "quay.io/go-skynet/local-ai-backends:master-gpu-intel-sycl-f32-llama-cpp"
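The added entries reuse shared backend metadata through a YAML anchor/merge: `<<: *llamacpp` copies every key from the mapping anchored as `&llamacpp` (defined earlier in index.yaml, outside this diff), and any keys listed after the merge override the copied values. A minimal sketch of the pattern; the body of the `&llamacpp` anchor below is hypothetical, only the `name`/`uri`/`mirrors` overrides mirror the actual diff:

```yaml
# Illustrative sketch of the anchor/merge pattern in backend/index.yaml.
# The real &llamacpp anchor lives earlier in the file; its fields here
# are placeholders.
- &llamacpp
  name: "llama-cpp"
  description: "shared llama.cpp backend metadata"   # hypothetical field
- !!merge <<: *llamacpp          # copy all keys from the anchored mapping...
  name: "rocm7-llama-cpp"        # ...then override the per-variant keys
  uri: "quay.io/go-skynet/local-ai-backends:latest-gpu-amd-rocm-7-llama-cpp"
  mirrors:
    - localai/localai-backends:latest-gpu-amd-rocm-7-llama-cpp
```

The explicit `!!merge` tag marks `<<` as a merge key for parsers that want it spelled out; most YAML 1.1 loaders also honor a bare `<<:` merge.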
