README.md: 3 additions & 3 deletions
@@ -92,7 +92,7 @@ when you can't use the precompiled binary directly, we provide an automated buil

 ### Compiling on Linux (Manual Method)
 - To compile your binaries from source, clone the repo with `git clone https://github.com/LostRuins/koboldcpp.git`
-- A makefile is provided, simply run `make`.
+- A makefile is provided, simply run `make` (when compiling, you can set the number of parallel jobs with the `-j` flag).
 - Optional Vulkan: Link your own install of Vulkan SDK manually with `make LLAMA_VULKAN=1`
 - Optional CLBlast: Link your own install of CLBlast manually with `make LLAMA_CLBLAST=1`
 - Note: for these you will need to obtain and link OpenCL and CLBlast libraries.
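As an illustrative sketch of the Linux steps in this hunk (the `-j 8` job count is only an example value, not something the README prescribes):

```sh
# Clone the repo and build the default CPU backend
git clone https://github.com/LostRuins/koboldcpp.git
cd koboldcpp
make -j 8                  # -j sets the number of parallel compile jobs

# Optional backends, each linking your own SDK/library install:
# make LLAMA_VULKAN=1 -j 8
# make LLAMA_CLBLAST=1 -j 8
```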
@@ -107,7 +107,7 @@ when you can't use the precompiled binary directly, we provide an automated buil
 - You're encouraged to use the .exe released, but if you want to compile your binaries from source on Windows, the easiest way is:
 - Get the latest release of w64devkit (https://github.com/skeeto/w64devkit). Be sure to use the "vanilla" one, not i686 or other variants, as they will conflict with the precompiled libs!
 - Clone the repo with `git clone https://github.com/LostRuins/koboldcpp.git`
-- Make sure you are using the w64devkit integrated terminal, then run `make` at the KoboldCpp source folder. This will create the .dll files for a pure CPU native build.
+- Make sure you are using the w64devkit integrated terminal, then run `make` at the KoboldCpp source folder. This will create the .dll files for a pure CPU native build (when compiling, you can set the number of parallel jobs with the `-j` flag).
 - For a full featured build (all backends), do `make LLAMA_CLBLAST=1 LLAMA_VULKAN=1`. (Note that `LLAMA_CUBLAS=1` will not work on Windows; you need Visual Studio.)
 - To make your build sharable and capable of working on other devices, you must use `LLAMA_PORTABLE=1`
 - If you want to generate the .exe file, make sure you have the Python module PyInstaller installed with pip (`pip install PyInstaller`). Then run the script `make_pyinstaller.bat`
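For illustration, the Windows flow described in this hunk might look like the sketch below when run inside the w64devkit integrated terminal (the `-j 8` value is only an example; the packaging steps are shown as comments because they are run separately, as described above):

```sh
# Inside the w64devkit integrated terminal
git clone https://github.com/LostRuins/koboldcpp.git
cd koboldcpp
make LLAMA_CLBLAST=1 LLAMA_VULKAN=1 -j 8   # full featured build; -j sets parallel jobs

# To package a sharable .exe afterwards (from a shell where Python/pip are available):
# pip install PyInstaller
# make_pyinstaller.bat
```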
@@ -122,7 +122,7 @@ when you can't use the precompiled binary directly, we provide an automated buil

 ### Compiling on MacOS
 - You can compile your binaries from source; clone the repo with `git clone https://github.com/LostRuins/koboldcpp.git`
-- A makefile is provided, simply run `make`.
+- A makefile is provided, simply run `make` (when compiling, you can set the number of parallel jobs with the `-j` flag).
 - If you want Metal GPU support, instead run `make LLAMA_METAL=1`, note that MacOS metal libraries need to be installed.
 - To make your build sharable and capable of working on other devices, you must use `LLAMA_PORTABLE=1`
 - After all binaries are built, you can run the Python script with the command `python koboldcpp.py --model [ggml_model.gguf]` (and add `--gpulayers (number of layers)` if you wish to offload layers to GPU).
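As a sketch of the MacOS steps in this hunk (the `-j 8` job count, the model filename, and the `--gpulayers 32` value are placeholders, not values from the README):

```sh
git clone https://github.com/LostRuins/koboldcpp.git
cd koboldcpp
make LLAMA_METAL=1 -j 8    # Metal GPU build; plain `make` gives a CPU-only build

# Launch with a GGUF model, offloading some layers to the GPU (placeholder values)
python koboldcpp.py --model ./my_model.gguf --gpulayers 32
```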