guide/src/guide/getting_started.md (+14 −9)
@@ -5,7 +5,7 @@ This section covers how to get started writing GPU crates with `cuda_std` and `c
 ## Required Libraries
 
 Before you can use the project to write GPU crates, you will need a couple of prerequisites:
-- [The CUDA SDK](https://developer.nvidia.com/cuda-downloads), version `11.2` or higher. This is only for building
+- [The CUDA SDK](https://developer.nvidia.com/cuda-downloads), version `11.2` or higher (and the appropriate driver - [see cuda release notes](https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html)). This is only for building
 GPU crates, to execute built PTX you only need CUDA 9+.
 
 - LLVM 7.x (7.0 to 7.4), The codegen searches multiple places for LLVM:
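As a quick sanity check on the prerequisites in this hunk (not part of the diff itself, and assuming `nvcc` and `llvm-config` are on your `PATH`), something like the following can confirm the versions the codegen expects:

```bash
# Rough prerequisite check, assuming the CUDA toolkit and LLVM 7 are installed
# system-wide; adjust paths if your install lives elsewhere.
nvcc --version           # CUDA toolkit: should report release 11.2 or newer
llvm-config --version    # LLVM: should report 7.x for the codegen to pick it up
nvidia-smi               # driver check: confirms a CUDA-capable driver is loaded
```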
@@ -18,6 +18,8 @@ GPU crates, to execute built PTX you only need CUDA 9+.
 
 - You may also need to add `libnvvm` to PATH, the builder should do it for you but in case it does not work, add libnvvm to PATH, it should be somewhere like `CUDA_ROOT/nvvm/bin`,
 
+- You may wish to use or consult the bundled [Dockerfile](#docker) to assist in your local config
+
 ## rust-toolchain
 
 Currently, the Codegen only works on nightly (because it uses rustc internals), and it only works on a specific version of nightly.
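If the builder does not find `libnvvm` on its own, a shell snippet along these lines mirrors the advice above; it is only a sketch, and `CUDA_ROOT` is an assumed variable for wherever your CUDA toolkit is installed. The required nightly comes from the repository's own `rust-toolchain` file, so no manual pin should be needed:

```bash
# Sketch only: expose libnvvm to the builder if it is not found automatically.
# CUDA_ROOT is an assumption; a typical value is /usr/local/cuda.
export CUDA_ROOT=/usr/local/cuda
export PATH="$PATH:$CUDA_ROOT/nvvm/bin"

# The repository ships a rust-toolchain file pinning the required nightly;
# rustup applies it automatically when building inside the repo. Verify with:
rustup show active-toolchain
```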
@@ -194,12 +196,15 @@ Then execute it using cust.
 There is also a [Dockerfile](Dockerfile) prepared as a quickstart with all the necessary libraries for base cuda development.
 
 You can use it as follows (assuming your clone of Rust-CUDA is at the absolute path `RUST_CUDA`):
-- Ensure you have Docker setup to [use gpus](https://docs.docker.com/config/containers/resource_constraints/#gpu)
+- Ensure you have Docker setup to [use gpus](https://docs.docker.com/config/containers/resource_constraints/#gpu)
 - Build `docker build -t rust-cuda $RUST_CUDA`
-- Run `docker run -it -v $RUST_CUDA:/root/rust-cuda --entrypoint /bin/b
-ash rust-cuda`
-
-Running will drop you into the container
-s shell and you will find the project at `~/rust-cuda`
-
-Note: refer to [rust-toolchain][#rust-toolchain] to ensure you are using the correct toolchain in your project.
+- Run `docker run -it --gpus all -v $RUST_CUDA:/root/rust-cuda --entrypoint /bin/bash rust-cuda`
+- Running will drop you into the container's shell and you will find the project at `~/rust-cuda`
+- If all is well, you'll be able to `cargo run` in `~/rust-cuda/examples/cuda/cpu/add`
+
+**Notes:**
+1. refer to [rust-toolchain](#rust-toolchain) to ensure you are using the correct toolchain in your project.
+2. despite using Docker, your machine will still need to be running a compatible driver, in this case for Cuda 11.4.1 it is >=470.57.02
+3. if you have issues within the container, it can help to start by ensuring your gpu is recognized:
+   * ensure `nvidia-smi` provides meaningful output in the container
+   * NVidia provides a number of samples at https://github.com/NVIDIA/cuda-samples. In particular, you may want to try `make`ing and running the [`deviceQuery`](https://github.com/NVIDIA/cuda-samples/tree/ba04faaf7328dbcc87bfc9acaf17f951ee5ddcf3/Samples/deviceQuery) sample. If all is well you should see many details about your gpu.
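Putting the steps above together, a minimal end-to-end session might look like this; it is a sketch, assuming your clone is at `$RUST_CUDA` and the NVIDIA Container Toolkit is installed so that `--gpus all` works:

```bash
# Build the image and start a shell in it with GPU access.
docker build -t rust-cuda "$RUST_CUDA"
docker run -it --gpus all -v "$RUST_CUDA":/root/rust-cuda --entrypoint /bin/bash rust-cuda

# Inside the container: confirm the GPU is visible, then try the add example.
nvidia-smi
cd ~/rust-cuda/examples/cuda/cpu/add
cargo run
```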
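If the example fails inside the container, the CUDA samples mentioned in the notes are a useful independent cross-check; the clone location and build steps below are assumptions based on the cuda-samples repository layout rather than anything this guide prescribes:

```bash
# Build and run NVIDIA's deviceQuery sample as an independent GPU check.
git clone https://github.com/NVIDIA/cuda-samples.git
cd cuda-samples/Samples/deviceQuery
make
./deviceQuery    # should print detailed properties for every visible GPU
```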