
**Windows**

#### Git Bash MINGW64

[![](https://i.imgur.com/M8dnyk5.png)](https://git-scm.com/downloads/win)

Download and install [`Git-SCM`](https://git-scm.com/downloads/win) with the default settings.

[![](https://i.imgur.com/hiAMq2S.png)](https://visualstudio.microsoft.com/)

Download and install [`Visual Studio Community Edition`](https://visualstudio.microsoft.com/) and make sure you select the `Desktop development with C++` workload.

[![](https://i.imgur.com/piiceV6.png)](https://cmake.org/download/)

Download and install [`CMake`](https://cmake.org/download/) with the default settings.

[![](https://i.imgur.com/cN0LDwJ.png)](https://vulkan.lunarg.com/sdk/home#windows)

Download and install the [`Vulkan SDK`](https://vulkan.lunarg.com/sdk/home#windows) with the default settings.
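
Optionally, you can confirm the SDK installed correctly by running `vulkaninfo`, a diagnostic tool that ships with the Vulkan SDK (this assumes the installer added it to your `PATH`):

```
vulkaninfo --summary
```

It should print a summary of your Vulkan-capable GPUs; if the command is not found, re-run the SDK installer or open a fresh terminal.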

![](https://i.imgur.com/qTl7jbF.png)

Go into your `llama.cpp` directory, right-click and select `Open Git Bash Here`, then run the following commands:

![](https://i.imgur.com/7UQ7Rf1.png)

```
cmake -B build -DGGML_VULKAN=ON
```
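
If CMake does not locate the SDK automatically, its `FindVulkan` module honors the `VULKAN_SDK` environment variable, which the LunarG installer normally sets. As a sketch (the install path below is a hypothetical example; adjust it to your SDK version), you can set it manually before re-running the configure step:

```
# Hypothetical SDK path; adjust to match your installed version
export VULKAN_SDK="/c/VulkanSDK/1.3.290.0"
cmake -B build -DGGML_VULKAN=ON
```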

![](https://i.imgur.com/GlJnitl.png)

```
cmake --build build --config Release
```
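
The build can take a while; `cmake --build` accepts a `-j`/`--parallel` flag to run multiple compile jobs at once (the job count here is just an example, pick one that fits your CPU):

```
cmake --build build --config Release -j 8
```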

Now you can load a model in conversation mode using the `Vulkan` backend:

```
build/bin/release/llama-cli -m "[PATH TO MODEL]" -ngl 100 -c 16384 -t 10 -n -2 -cnv
```
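
For reference, here is the same invocation with each flag annotated (the values are the ones used above, not requirements):

```
# -m    path to a GGUF model file
# -ngl  number of layers to offload to the GPU (100 = effectively all)
# -c    context window size in tokens
# -t    number of CPU threads to use
# -n    number of tokens to generate (-2 = until the context is filled)
# -cnv  conversation (chat) mode
build/bin/release/llama-cli -m "[PATH TO MODEL]" -ngl 100 -c 16384 -t 10 -n -2 -cnv
```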

#### w64devkit

Download and extract [w64devkit](https://github.com/skeeto/w64devkit/releases).