Commit 72e564e

docs: change CUDA 12.2 to CUDA 12.4
1 parent 6102fd8 commit 72e564e

File tree

1 file changed (+7 −7 lines)


docs/guide/CUDA.md

Lines changed: 7 additions & 7 deletions
@@ -9,14 +9,14 @@ description: CUDA support in node-llama-cpp
 and these are automatically used when CUDA is detected on your machine.
 
 To use `node-llama-cpp`'s CUDA support with your NVIDIA GPU,
-make sure you have [CUDA Toolkit](https://developer.nvidia.com/cuda-downloads) 12.2 or higher installed on your machine.
+make sure you have [CUDA Toolkit](https://developer.nvidia.com/cuda-downloads) 12.4 or higher installed on your machine.
 
 If the pre-built binaries don't work with your CUDA installation,
 `node-llama-cpp` will automatically download a release of `llama.cpp` and build it from source with CUDA support.
 Building from source with CUDA support is slow and can take up to an hour.
 
-The pre-built binaries are compiled with CUDA Toolkit 12.2,
-so any version of CUDA Toolkit that is 12.2 or higher should work with the pre-built binaries.
+The pre-built binaries are compiled with CUDA Toolkit 12.4,
+so any version of CUDA Toolkit that is 12.4 or higher should work with the pre-built binaries.
 If you have an older version of CUDA Toolkit installed on your machine,
 consider updating it to avoid having to wait the long build time.
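Before relying on the pre-built binaries, it can help to confirm which CUDA Toolkit version is actually installed. A minimal sketch, assuming a standard installation that puts `nvcc` on the `PATH` (the exact banner text may vary between toolkit releases):

```shell
# Print the installed CUDA Toolkit version, or a hint if the toolkit is missing.
# Assumes nvcc is on the PATH; the version appears in nvcc's "release" banner line.
if command -v nvcc >/dev/null 2>&1; then
  nvcc --version | grep -i "release"
else
  echo "nvcc not found - install CUDA Toolkit 12.4 or higher"
fi
```

If the reported release is below 12.4, the pre-built binaries won't match and `node-llama-cpp` will fall back to the slow source build described above.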

@@ -42,7 +42,7 @@ You should see an output like this:
 If you see `CUDA used VRAM` in the output, it means that CUDA support is working on your machine.
 
 ## Prerequisites
-* [CUDA Toolkit](https://developer.nvidia.com/cuda-downloads) 12.2 or higher
+* [CUDA Toolkit](https://developer.nvidia.com/cuda-downloads) 12.4 or higher
 * [`cmake-js` dependencies](https://github.com/cmake-js/cmake-js#:~:text=projectRoot/build%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%5Bstring%5D-,Requirements%3A,-CMake)
 * [CMake](https://cmake.org/download/) 3.26 or higher (optional, recommended if you have build issues)

@@ -81,14 +81,14 @@ To build `node-llama-cpp` with any of these options, set an environment variable
 ### Fix the `Failed to detect a default CUDA architecture` Build Error
 To fix this issue you have to set the `CUDACXX` environment variable to the path of the `nvcc` compiler.
 
-For example, if you have installed CUDA Toolkit 12.2, you have to run a command like this:
+For example, if you have installed CUDA Toolkit 12.4, you have to run a command like this:
 ::: code-group
 ```shell [Linux]
-export CUDACXX=/usr/local/cuda-12.2/bin/nvcc
+export CUDACXX=/usr/local/cuda-12.4/bin/nvcc
 ```
 
 ```cmd [Windows]
-set CUDACXX=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.2\bin\nvcc.exe
+set CUDACXX=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.4\bin\nvcc.exe
 ```
 :::
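When the `CUDACXX` path isn't known offhand, a small lookup sketch can locate `nvcc` on Linux. This assumes NVIDIA's default `/usr/local/cuda-<version>` install layout; adjust the candidate paths if your installation prefix differs:

```shell
# Find nvcc to use as the CUDACXX value (Linux).
# The /usr/local/cuda-<version> layout is NVIDIA's installer default;
# your installation prefix may differ.
found=""
for candidate in /usr/local/cuda-12.4/bin/nvcc /usr/local/cuda/bin/nvcc; do
  if [ -x "$candidate" ]; then
    found="$candidate"
    break
  fi
done
if [ -n "$found" ]; then
  echo "export CUDACXX=$found"
else
  echo "no CUDA installation found under /usr/local"
fi
```

The printed `export` line can then be run (or added to your shell profile) before building `node-llama-cpp` from source.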

0 commit comments