|
1 | 1 | # EESSI Guide |
2 | 2 |
|
3 | | -## How to load EESSI |
4 | | -Loading an EESSI environment module: |
5 | | -```[bash] |
6 | | -source /cvmfs/software.eessi.io/versions/2023.06/init/lmod/bash |
7 | | -``` |
8 | | -Activating EESSI environment: |
| 3 | +## How to Load EESSI |
| 4 | +EESSI can be initialised using the following method. |
| 5 | + |
| 6 | +The EESSI environment is set up (in a non-reversible way) by sourcing the init script:
9 | 7 | ```bash
10 | 8 | source /cvmfs/software.eessi.io/versions/2023.06/init/bash |
11 | 9 | ``` |
12 | 10 |
|
13 | | -## GPU Support with EESI |
14 | | -To enable GPU support you need a site-specific build that has `cuda` enabled. For guide to do this please refer to `docs/image-build`. This is |
15 | | -because CUDA-drivers are host specific and EESSI can not ship NVIDIA drivers due to licensing + kernel specific constraints. This means that the |
16 | | -host must provide the drivers in a known location (`host_injections`). |
| 11 | +This is non-reversible because it: |
| 12 | +- Changes your `$PATH`, `$MODULEPATH`, `$LD_LIBRARY_PATH`, and other critical environment variables. |
| 13 | +- Sets EESSI-specific variables such as `EESSI_PREFIX`.
| 14 | + |
| 15 | +This is the recommended method because it: |
| 16 | +- Detects your CPU architecture and OS. |
| 17 | +- Detects and configures GPU support. |
| 18 | +- Prepares the full EESSI software stack. |
| 19 | +- Sets up Lmod (environment module system). |
| 20 | + |
| 21 | +The [EESSI docs](https://www.eessi.io/docs/using_eessi/setting_up_environment/) offer another method to load EESSI, in addition to the one above. The alternative method only initialises the Lmod module system and does not load a platform-specific setup. For these reasons, it is recommended to use the method detailed above.
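| | +
| | +For reference, the alternative Lmod-only initialisation looks like this:
| | +```bash
| | +source /cvmfs/software.eessi.io/versions/2023.06/init/lmod/bash
| | +```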
| 22 | + |
| 23 | +Successful environment setup will show `{EESSI 2023.06}` at the start of your shell prompt.
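| | +
| | +You can further verify the setup by inspecting variables set by the init script and listing the available modules (values shown are illustrative):
| | +```bash
| | +echo "$EESSI_PREFIX"           # e.g. /cvmfs/software.eessi.io/versions/2023.06
| | +echo "$EESSI_SOFTWARE_SUBDIR"  # detected CPU target, e.g. x86_64/amd/zen3
| | +module avail                   # lists the software stack for your architecture
| | +```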
| 24 | + |
| 25 | +To deactivate your EESSI environment, you can either restart your shell using `exec bash` or exit the shell with `exit`.
| 26 | + |
| 27 | +## GPU Support with EESSI |
| 28 | +To enable GPU support, you need a site-specific build that has CUDA enabled. For a guide on how to do this, please refer to [docs/image-build.md](../image-build.md). |
17 | 29 |
|
18 | 30 | ### Using GPUs |
19 | | -All CUDA-enabled software in EESSI expects CUDA drivers in a specific `host_injections` subdirectory.<br> |
20 | | -```[bash] |
21 | | -ls -l /cvmfs/software.eessi.io/versions/2023.06/software/linux/x86_64/amd/zen3/software/CUDA/12.1.1/bin/nvcc |
22 | | -``` |
23 | | -The output of this should show a symlink to the EESSI `host_injections` dir like so: |
24 | | -```[bash] |
25 | | -lrwxrwxrwx 1 cvmfs cvmfs 109 May 6 2024 /cvmfs/software.eessi.io/versions/2023.06/software/linux/x86_64/amd/zen3/software/CUDA/12.1.1/bin/nvcc |
26 | | --> /cvmfs/software.eessi.io/host_injections/2023.06/software/linux/x86_64/amd/zen3/software/CUDA/12.1.1/bin/nvcc |
27 | | -``` |
28 | | -To expose the Nvidia GPU drivers. |
| 31 | +All CUDA-enabled software in EESSI expects CUDA drivers in a specific `host_injections` directory. |
| 32 | + |
| 33 | +#### Exposing the NVIDIA GPU drivers
| 34 | +Use the `link_nvidia_host_libraries.sh` script, provided by EESSI, to symlink your GPU drivers into `host_injections`. |
29 | 35 | ```bash
30 | 36 | /cvmfs/software.eessi.io/versions/2023.06/scripts/gpu_support/nvidia/link_nvidia_host_libraries.sh |
31 | 37 | ``` |
| 38 | +Rerun this script whenever your NVIDIA GPU drivers are updated. It is also safe to rerun at any time, as the script detects whether the current driver version has already been symlinked.
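| | +
| | +To check that CUDA-enabled software now resolves through `host_injections`, inspect one of the shipped CUDA binaries: it should be a symlink into the `host_injections` tree. The path below is for an AMD Zen3 host; adjust the architecture directories for your system:
| | +```bash
| | +ls -l /cvmfs/software.eessi.io/versions/2023.06/software/linux/x86_64/amd/zen3/software/CUDA/12.1.1/bin/nvcc
| | +# -> .../host_injections/2023.06/software/linux/x86_64/amd/zen3/software/CUDA/12.1.1/bin/nvcc
| | +```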
| 39 | + |
| 40 | +### Building with GPUs |
32 | 41 |
|
33 | | -### Buidling with GPUs |
| 42 | +Run `which nvcc` to confirm that the CUDA compiler is found. |
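| | +
| | +For example, once the compiler is on your `PATH`, both of these standard commands should succeed:
| | +```bash
| | +which nvcc       # prints the path to the CUDA compiler
| | +nvcc --version   # prints the host CUDA toolkit version
| | +```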
34 | 43 |
|
35 | | -Checking `nvcc --version` and `which nvcc` to see if `CUDA` compiler is found.<br> |
36 | | -<br> |
37 | | -If `nvcc` not found run:<br> |
| 44 | +If `nvcc` is not found, add the CUDA path to your environment: |
38 | 45 | ```bash
39 | | -export PATH=/usr/local/cuda-13.0/bin:$PATH |
| 46 | +export PATH=/usr/local/cuda/bin:$PATH |
40 | 47 | ``` |
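| | +
| | +If your host installs CUDA under a versioned directory rather than the `/usr/local/cuda` symlink, use that directory instead (substitute your installed version):
| | +```bash
| | +export PATH=/usr/local/cuda-13.0/bin:$PATH   # adjust 13.0 to your CUDA version
| | +```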
41 | | -(with your specific cuda version)<br> |
42 | | -`which nvcc` should now show path to compiler.<br> |
43 | | -<br> |
44 | | -Running `which gcc` will give path `.../2023.06/compat...`<br> |
45 | | -Loading EESSI module (It is important to load a `gcc` that is compatible with the host's CUDA version.):<br> |
| 48 | + |
| 49 | +`which nvcc` should now show the path to the CUDA compiler. |
| 50 | + |
| 51 | +#### Loading the EESSI module for the GCC compiler
| 52 | + |
| 53 | +Running `which gcc` with EESSI initialised will initially show a path `.../2023.06/compat...`, which points to EESSI's compatibility-layer compiler.
| 54 | +It is important to load a `gcc` version that is compatible with the host's CUDA version: |
46 | 55 | ```bash
47 | 56 | module load GCC/12.3.0 |
48 | 57 | ``` |
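| | +
| | +If that exact version is not available, you can list the GCC modules EESSI provides (standard Lmod command):
| | +```bash
| | +module avail GCC
| | +```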
49 | | -Now running `which gcc` will give path `.../2023.06/software...`<br> |
50 | | -<br> |
51 | | -Now you can run `cmake` and `make` to compile `CUDA` using EESSI's `gcc`.<br> |
| 58 | +Running `which gcc` will now give a path `.../2023.06/software...`, which is the full compiler provided by EESSI. This is the compiler we want for CUDA builds.
| 59 | + |
| 60 | +Now you can run `cmake` and `make` to compile CUDA code using EESSI's `gcc`.
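| | +
| | +As a minimal sketch, assuming a CMake-based CUDA project in the current directory (`CMAKE_CUDA_HOST_COMPILER` is a standard CMake variable for pointing `nvcc` at a host compiler):
| | +```bash
| | +mkdir build && cd build
| | +# Configure with EESSI's gcc as the host compiler for nvcc
| | +cmake -DCMAKE_CUDA_HOST_COMPILER="$(which gcc)" ..
| | +make -j"$(nproc)"
| | +```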
52 | 61 |
|
53 | | -#### Test setup: Compile deviceQuery from CUDA-Samples |
54 | | -To test that your EESSI set up can compile `CUDA`, try compiling deviceQuery from CUDA-Samples with the following steps:<br> |
| 62 | +#### Test: Compile deviceQuery from CUDA-Samples |
| 63 | +To test that your EESSI setup can compile CUDA, try compiling `deviceQuery` from CUDA-Samples with the following steps: |
55 | 64 | ```bash
56 | 65 | git clone https://github.com/NVIDIA/cuda-samples.git |
57 | 66 | cd cuda-samples/Samples/1_Utilities/deviceQuery |
|