Commit bd8c5a8 ("more build.md updates", 1 parent: bbff53a)

File tree: 1 file changed, +21 -21 lines

docs/build.md

Lines changed: 21 additions & 21 deletions
@@ -9,7 +9,7 @@ cd llama.cpp
 
 The following sections describe how to build with different backends and options.
 
-## CPU-only Build
+## CPU Build
 
 Build llama.cpp using `CMake`:
 
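For reference, the CPU build this hunk retitles is the default CMake flow; a minimal sketch with no backend-specific options assumed:

```sh
# Default CPU build
cmake -B build
cmake --build build --config Release
```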

@@ -20,8 +20,8 @@ cmake --build build --config Release
 
 **Notes**:
 
-- For faster compilation, add the `-j` argument to run multiple jobs in parallel. For example, `cmake --build build --config Release -j 8` will run 8 jobs in parallel.
-- For faster repeated compilation, install [ccache](https://ccache.dev/).
+- For faster compilation, add the `-j` argument to run multiple jobs in parallel, or use a generator that does this automatically such as Ninja. For example, `cmake --build build --config Release -j 8` will run 8 jobs in parallel.
+- For faster repeated compilation, install [ccache](https://ccache.dev/)
 - For debug builds, there are two cases:
 
 1. Single-config generators (e.g. default = `Unix Makefiles`; note that they just ignore the `--config` flag):
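To illustrate the updated bullets: the `-j` form below is taken verbatim from the diff, while routing compilation through ccache via CMake's compiler-launcher variables is an assumption, not something this commit specifies.

```sh
# Run 8 build jobs in parallel (from the bullet above)
cmake --build build --config Release -j 8

# Assumed ccache hookup via the standard CMake launcher variables
cmake -B build \
    -DCMAKE_C_COMPILER_LAUNCHER=ccache \
    -DCMAKE_CXX_COMPILER_LAUNCHER=ccache
```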
@@ -37,6 +37,9 @@ cmake --build build --config Release
 cmake -B build -G "Xcode"
 cmake --build build --config Debug
 ```
+
+For more details and a list of supported generators, see the [CMake documentation](https://cmake.org/cmake/help/latest/manual/cmake-generators.7.html).
+
 - Building for Windows (x86, x64 and arm64) with MSVC or clang as compilers:
 - Install Visual Studio 2022, e.g. via the [Community Edition](https://visualstudio.microsoft.com/de/vs/community/). In the installer, select at least the following options (this also automatically installs the required additional tools like CMake,...):
 - Tab Workload: Desktop-development with C++
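The new bullet mentions generators such as Ninja that parallelize automatically; a minimal sketch, assuming Ninja is installed:

```sh
cmake -B build -G Ninja
cmake --build build   # Ninja picks a parallel job count by default
```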
@@ -47,19 +50,18 @@ cmake --build build --config Release
 cmake --preset arm64-windows-llvm-release -D GGML_OPENMP=OFF
 cmake --build build-arm64-windows-llvm-release
 ```
-Note: Building for arm64 could also be done just with MSVC (with the build-arm64-windows-MSVC preset, or the standard CMake build instructions). But MSVC does not support inline ARM assembly-code, used e.g. for the accelerated Q4_0_4_8 CPU kernels.
+Building for arm64 can also be done with the MSVC compiler with the build-arm64-windows-MSVC preset, or the standard CMake build instructions. However, note that the MSVC compiler does not support inline ARM assembly code, used e.g. for the accelerated Q4_0_4_8 CPU kernels.
 
 ## Metal Build
 
 On MacOS, Metal is enabled by default. Using Metal makes the computation run on the GPU.
 To disable the Metal build at compile time use the `-DGGML_METAL=OFF` cmake option.
 
-When built with Metal support, you can explicitly disable GPU inference with the `--n-gpu-layers|-ngl 0` command-line
-argument.
+When built with Metal support, you can explicitly disable GPU inference with the `--n-gpu-layers 0` command-line argument.
 
 ## BLAS Build
 
-Building the program with BLAS support may lead to some performance improvements in prompt processing using batch sizes higher than 32 (the default is 512). Support with CPU-only BLAS implementations doesn't affect the normal generation performance. We may see generation performance improvements with GPU-involved BLAS implementations, e.g. cuBLAS, hipBLAS. There are currently several different BLAS implementations available for build and use:
+Building the program with BLAS support may lead to some performance improvements in prompt processing using batch sizes higher than 32 (the default is 512). Using BLAS doesn't affect the generation performance. There are currently several different BLAS implementations available for build and use:
 
 ### Accelerate Framework:
 
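As a usage sketch for the reworded Metal note: either disable Metal at configure time, or keep the Metal build and turn off GPU offload per run. The binary path and model file below are placeholders.

```sh
# Build without Metal
cmake -B build -DGGML_METAL=OFF
cmake --build build --config Release

# Or keep the Metal build and disable GPU inference at run time (placeholder model path)
./build/bin/llama-cli -m model.gguf --n-gpu-layers 0
```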
@@ -80,15 +82,15 @@ This provides BLAS acceleration using only the CPU. Make sure to have OpenBLAS i
 
 Check [BLIS.md](./backend/BLIS.md) for more information.
 
-### SYCL
+## SYCL
 
 SYCL is a higher-level programming model to improve programming productivity on various hardware accelerators.
 
 llama.cpp based on SYCL is used to **support Intel GPU** (Data Center Max series, Flex series, Arc series, Built-in GPU and iGPU).
 
 For detailed info, please refer to [llama.cpp for SYCL](./backend/SYCL.md).
 
-### Intel oneMKL
+## Intel oneMKL
 
 Building through oneAPI compilers will make avx_vnni instruction set available for intel processors that do not support avx512 and avx512_vnni. Please note that this build config **does not support Intel GPU**. For Intel GPU support, please refer to [llama.cpp for SYCL](./backend/SYCL.md).
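For the BLAS section these hunks touch, a hedged OpenBLAS example; the `GGML_BLAS` and `GGML_BLAS_VENDOR` option names come from the wider build documentation and are not part of this diff.

```sh
# Assumed OpenBLAS-backed build (option names not shown in this diff)
cmake -B build -DGGML_BLAS=ON -DGGML_BLAS_VENDOR=OpenBLAS
cmake --build build --config Release
```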

@@ -105,11 +107,9 @@ Building through oneAPI compilers will make avx_vnni instruction set available f
 
 Check [Optimizing and Running LLaMA2 on Intel® CPU](https://www.intel.com/content/www/us/en/content-details/791610/optimizing-and-running-llama2-on-intel-cpu.html) for more information.
 
-### CUDA
-
-This provides GPU acceleration using the CUDA cores of your Nvidia GPU. Make sure to have the CUDA toolkit installed. You can download it from your Linux distro's package manager (e.g. `apt install nvidia-cuda-toolkit`) or from here: [CUDA Toolkit](https://developer.nvidia.com/cuda-downloads).
+## CUDA
 
-For Jetson user, if you have Jetson Orin, you can try this: [Offical Support](https://www.jetson-ai-lab.com/tutorial_text-generation.html). If you are using an old model(nano/TX2), need some additional operations before compiling.
+This provides GPU acceleration using an NVIDIA GPU. Make sure to have the CUDA toolkit installed. You can download it from your Linux distro's package manager (e.g. `apt install nvidia-cuda-toolkit`) or from here: [CUDA Toolkit](https://developer.nvidia.com/cuda-downloads).
 
 - Using `CMake`:
 
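The retitled CUDA section ends with "Using `CMake`:"; the usual invocation looks like the sketch below, where `GGML_CUDA` is the option name used elsewhere in build.md rather than something visible in this hunk.

```sh
# Assumed CUDA build option
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release
```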
@@ -132,7 +132,7 @@ The following compilation options are also available to tweak performance:
 | GGML_CUDA_PEER_MAX_BATCH_SIZE | Positive integer | 128 | Maximum batch size for which to enable peer access between multiple GPUs. Peer access requires either Linux or NVLink. When using NVLink enabling peer access for larger batch sizes is potentially beneficial. |
 | GGML_CUDA_FA_ALL_QUANTS | Boolean | false | Compile support for all KV cache quantization type (combinations) for the FlashAttention CUDA kernels. More fine-grained control over KV cache size but compilation takes much longer. |
 
-### MUSA
+## MUSA
 
 This provides GPU acceleration using the MUSA cores of your Moore Threads MTT GPU. Make sure to have the MUSA SDK installed. You can download it from here: [MUSA SDK](https://developer.mthreads.com/sdk/download/musa).
 
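The table rows above are compile-time cache variables and are passed like any other CMake option; the values below are only illustrative (128 is the stated default).

```sh
cmake -B build -DGGML_CUDA=ON \
    -DGGML_CUDA_PEER_MAX_BATCH_SIZE=128 \
    -DGGML_CUDA_FA_ALL_QUANTS=ON
cmake --build build --config Release
```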
@@ -149,7 +149,7 @@ The environment variable `GGML_CUDA_ENABLE_UNIFIED_MEMORY=1` can be used to enab
 
 Most of the compilation options available for CUDA should also be available for MUSA, though they haven't been thoroughly tested yet.
 
-### HIP
+## HIP
 
 This provides GPU acceleration on HIP-supported AMD GPUs.
 Make sure to have ROCm installed.
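The hunk header references `GGML_CUDA_ENABLE_UNIFIED_MEMORY=1`; a run-time sketch, with the binary path and model file as placeholders:

```sh
GGML_CUDA_ENABLE_UNIFIED_MEMORY=1 ./build/bin/llama-cli -m model.gguf -ngl 99
```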
@@ -192,11 +192,11 @@ You can download it from your Linux distro's package manager or from here: [ROCm
 The environment variable [`HIP_VISIBLE_DEVICES`](https://rocm.docs.amd.com/en/latest/understand/gpu_isolation.html#hip-visible-devices) can be used to specify which GPU(s) will be used.
 If your GPU is not officially supported you can use the environment variable [`HSA_OVERRIDE_GFX_VERSION`] set to a similar GPU, for example 10.3.0 on RDNA2 (e.g. gfx1030, gfx1031, or gfx1035) or 11.0.0 on RDNA3.
 
-### Vulkan
+## Vulkan
 
 **Windows**
 
-#### w64devkit
+### w64devkit
 
 Download and extract [`w64devkit`](https://github.com/skeeto/w64devkit/releases).
 
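A hedged run-time example of the two HIP environment variables mentioned in the context lines, for the RDNA2 (gfx1030) case; binary path and model file are placeholders:

```sh
HIP_VISIBLE_DEVICES=0 HSA_OVERRIDE_GFX_VERSION=10.3.0 \
    ./build/bin/llama-cli -m model.gguf -ngl 99
```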
@@ -223,7 +223,7 @@ cmake -B build -DGGML_VULKAN=ON
 cmake --build build --config Release
 ```
 
-#### Git Bash MINGW64
+### Git Bash MINGW64
 
 Download and install [`Git-SCM`](https://git-scm.com/downloads/win) with the default settings
 
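For reference, the Vulkan build commands implied by this hunk's context (the configure step appears in the hunk header itself):

```sh
cmake -B build -DGGML_VULKAN=ON
cmake --build build --config Release
```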
@@ -246,7 +246,7 @@ Now you can load the model in conversation mode using `Vulkan`
 build/bin/Release/llama-cli -m "[PATH TO MODEL]" -ngl 100 -c 16384 -t 10 -n -2 -cnv
 ```
 
-#### MSYS2
+### MSYS2
 Install [MSYS2](https://www.msys2.org/) and then run the following commands in a UCRT terminal to install dependencies.
 ```sh
 pacman -S git \
@@ -305,7 +305,7 @@ cmake --build build --config Release
 # ggml_vulkan: Using Intel(R) Graphics (ADL GT2) | uma: 1 | fp16: 1 | warp size: 32
 ```
 
-### CANN
+## CANN
 This provides NPU acceleration using the AI cores of your Ascend NPU. And [CANN](https://www.hiascend.com/en/software/cann) is a hierarchical APIs to help you to quickly build AI applications and service based on Ascend NPU.
 
 For more information about Ascend NPU in [Ascend Community](https://www.hiascend.com/en/).
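A hedged sketch of a CANN-enabled build; the `GGML_CANN` option name is taken from the wider documentation, not from this hunk, and the linked CANN.md remains the authoritative reference.

```sh
# Assumed option name for the CANN backend
cmake -B build -DGGML_CANN=ON
cmake --build build --config Release
```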
@@ -332,6 +332,6 @@ llama_new_context_with_model: CANN compute buffer size = 1260.81 MiB
 
 For detailed info, such as model/device supports, CANN install, please refer to [llama.cpp for CANN](./backend/CANN.md).
 
-### Android
+## Android
 
 To read documentation for how to build on Android, [click here](./android.md)
