The main product of this project is the `llama` library. Its C-style interface can be found in [include/llama.h](include/llama.h).

The project also includes many example programs and tools using the `llama` library. The examples range from simple, minimal code snippets to sophisticated sub-projects such as an OpenAI-compatible HTTP server.
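For example, once the project is built (see below), the bundled server tool can expose a model over an OpenAI-compatible HTTP API. A minimal sketch, assuming the default binary output path `build/bin` and a placeholder model file `models/model.gguf`:

```bash
# Start the OpenAI-compatible HTTP server on port 8080
./build/bin/llama-server -m models/model.gguf --port 8080

# In another shell, query the chat completions endpoint
curl http://localhost:8080/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{"messages": [{"role": "user", "content": "Hello!"}]}'
```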
**To get the code:**
```bash
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
```
## CPU Build with BLAS
Building llama.cpp with BLAS support is highly recommended, as it has been shown to improve performance, particularly for prompt processing.
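Using OpenBLAS as the vendor assumes its development package is present; as a sketch, on Debian/Ubuntu it can be installed with:

```bash
# Install OpenBLAS headers and libraries (package name varies by distribution)
sudo apt-get install libopenblas-dev
```

Then configure and build: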
```bash
cmake -S . -B build \
-DCMAKE_BUILD_TYPE=Release \
-DGGML_BLAS=ON \
-DGGML_BLAS_VENDOR=OpenBLAS
cmake --build build --config Release -j $(nproc)
```
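To sanity-check the result, the main CLI tool can print its version and build configuration (again assuming the default output path `build/bin`):

```bash
# Should report the build version and the enabled backends
./build/bin/llama-cli --version
```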
**Notes**:
- For faster repeated compilation, install [ccache](https://ccache.dev/)
- For debug builds:
```bash
cmake -S . -B build \
-DCMAKE_BUILD_TYPE=Debug \
-DGGML_BLAS=ON \
-DGGML_BLAS_VENDOR=OpenBLAS

cmake --build build --config Debug -j $(nproc)
```
- For static builds, add `-DBUILD_SHARED_LIBS=OFF` to the configuration step, e.g. mirroring the release build above:
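
```bash
# Release configuration as above, with shared libraries disabled
cmake -S . -B build \
-DCMAKE_BUILD_TYPE=Release \
-DGGML_BLAS=ON \
-DGGML_BLAS_VENDOR=OpenBLAS \
-DBUILD_SHARED_LIBS=OFF

cmake --build build --config Release -j $(nproc)
```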