
Commit b526ad2

Documentation: Further revisions to the Vulkan section in build.md (#14785)
* Documentation: Revised and further improved the Vulkan instructions for Linux users in build.md.
* Minor: Revise step 2 of the Vulkan instructions for Linux users in build.md
1 parent 938b785 commit b526ad2

File tree

1 file changed: +6, -5 lines


docs/build.md

Lines changed: 6 additions & 5 deletions
````diff
@@ -387,12 +387,12 @@ docker run -it --rm -v "$(pwd):/app:Z" --device /dev/dri/renderD128:/dev/dri/ren
 
 ### For Linux users:
 
-First, follow the the official [Getting Started with the Linux Tarball Vulkan SDK](https://vulkan.lunarg.com/doc/sdk/latest/linux/getting_started.html) guide.
+First, follow the official LunarG instructions for the installation and setup of the Vulkan SDK in the [Getting Started with the Linux Tarball Vulkan SDK](https://vulkan.lunarg.com/doc/sdk/latest/linux/getting_started.html) guide.
 
 > [!IMPORTANT]
 > After completing the first step, ensure that you have used the `source` command on the `setup_env.sh` file inside of the Vulkan SDK in your current terminal session. Otherwise, the build won't work. Additionally, if you close out of your terminal, you must perform this step again if you intend to perform a build. However, there are ways to make this persistent. Refer to the Vulkan SDK guide linked in the first step for more information about any of this.
 
-Second, after verifying that you have done everything in the Vulkan SDK guide provided in the first step, run the following command to verify that everything is set up correctly:
+Second, after verifying that you have followed all of the SDK installation/setup steps, use this command to make sure before proceeding:
 ```bash
 vulkaninfo
 ```
````
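The IMPORTANT note in the hunk above hinges on `setup_env.sh` having been sourced in the current terminal. One quick way to catch a missed `source` step is to check the SDK's environment variable before running `vulkaninfo` (a sketch; `VULKAN_SDK` is the variable the LunarG setup script exports, and `--summary` is an optional `vulkaninfo` flag that prints a condensed report):

```shell
# Sanity check before building (a sketch): the LunarG SDK setup script
# exports VULKAN_SDK, so an empty value usually means it was not sourced
# in this terminal session.
if [ -z "$VULKAN_SDK" ]; then
  echo "VULKAN_SDK is not set - source setup_env.sh from the SDK first"
else
  vulkaninfo --summary
fi
```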
````diff
@@ -403,10 +403,11 @@ cmake -B build -DGGML_VULKAN=1
 cmake --build build --config Release
 ```
 
-Finally, after finishing your build, you should be able to do this:
+Finally, after finishing your build, you should be able to do something like this:
 ```bash
-# Test the output binary (with "-ngl 33" to offload all layers to GPU)
-./build/bin/llama-cli -m "PATH_TO_MODEL" -p "Hi you how are you" -n 50 -e -ngl 33 -t 4
+# Test the output binary
+# "-ngl 99" should offload all of the layers to GPU for most (if not all) models.
+./build/bin/llama-cli -m "PATH_TO_MODEL" -p "Hi you how are you" -ngl 99
 
 # You should see in the output, ggml_vulkan detected your GPU. For example:
 # ggml_vulkan: Using Intel(R) Graphics (ADL GT2) | uma: 1 | fp16: 1 | warp size: 32
````
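The "you should see" check in the hunk above can also be scripted: the stable part of the startup line is the `ggml_vulkan: Using` prefix, while the rest names your particular GPU. A sketch, using the example log line from the docs as stand-in output:

```shell
# Sketch: confirm the Vulkan backend by matching the ggml_vulkan startup line.
# The example line below is taken from the docs; real output names your own GPU.
log='ggml_vulkan: Using Intel(R) Graphics (ADL GT2) | uma: 1 | fp16: 1 | warp size: 32'
if echo "$log" | grep -q '^ggml_vulkan: Using'; then
  echo "Vulkan backend detected"
fi
```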
