Commit 307772f

readme : add RVV,ZVFH,ZFH,ZICBOP support for RISC-V (#17259)
Signed-off-by: Wang Yang <[email protected]>
Parent: f1bad23

2 files changed, 2 insertions(+), 1 deletion(-)

.github/copilot-instructions.md (1 addition, 1 deletion)

```diff
@@ -9,7 +9,7 @@ llama.cpp is a large-scale C/C++ project for efficient LLM (Large Language Model
 - **Size**: ~200k+ lines of code across 1000+ files
 - **Architecture**: Modular design with main library (`libllama`) and 40+ executable tools/examples
 - **Core dependency**: ggml tensor library (vendored in `ggml/` directory)
-- **Backends supported**: CPU (AVX/NEON optimized), CUDA, Metal, Vulkan, SYCL, ROCm, MUSA
+- **Backends supported**: CPU (AVX/NEON/RVV optimized), CUDA, Metal, Vulkan, SYCL, ROCm, MUSA
 - **License**: MIT

 ## Build Instructions
```

README.md (1 addition, 0 deletions)

```diff
@@ -61,6 +61,7 @@ range of hardware - locally and in the cloud.
 - Plain C/C++ implementation without any dependencies
 - Apple silicon is a first-class citizen - optimized via ARM NEON, Accelerate and Metal frameworks
 - AVX, AVX2, AVX512 and AMX support for x86 architectures
+- RVV, ZVFH, ZFH and ZICBOP support for RISC-V architectures
 - 1.5-bit, 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit integer quantization for faster inference and reduced memory use
 - Custom CUDA kernels for running LLMs on NVIDIA GPUs (support for AMD GPUs via HIP and Moore Threads GPUs via MUSA)
 - Vulkan and SYCL backend support
```
