
Commit b483676

update daint-gpu configuration
1 parent 7dbacd0 commit b483676

1 file changed (+24 -16)

configs/cscs/daint/gpu/craype_config

Lines changed: 24 additions & 16 deletions
@@ -3,35 +3,45 @@
 # Author: Samuel Omlin, CSCS (omlins)
 #
 # Description: Definition of site specific variables and call of JUHPC.
-# Site: ALPS:todi, Swiss National Supercomputing Centre (CSCS)
+# Site: Piz Daint:gpu, Swiss National Supercomputing Centre (CSCS)
 # Base: craype


-# Load required modules (including correct CPU and GPU target modules)
+# Load required modules, including correct CPU and GPU target modules (NOTE: the same modules should be loaded when running julia - JUHPC can be used in a module build recipe...)
 module load daint-gpu
 module switch PrgEnv-cray PrgEnv-gnu
-module load cudatoolkit craype-accel-nvidia60
+module load cudatoolkit/11.2.0_3.39-2.1__gf93aa1c craype-accel-nvidia60 # Load latest available cudatoolkit
 module load cray-hdf5-parallel
 module list

 # Environment variables for HPC key packages that require system libraries (MPI.jl, CUDA.jl, HDF5.jl and ADIOS2.jl)
-export JUHPC_CUDA_HOME=$CUDA_HOME                            # Used for CUDA.jl runtime discovery (set as CUDA_HOME in the uenv view / activate script).
-export JUHPC_CUDA_RUNTIME_VERSION=$CRAY_CUDATOOLKIT_VERSION  # Used for CUDA.jl runtime version definition (set in preferences).
-export JUHPC_ROCM_HOME=                                      # Used for AMDGPU.jl runtime discovery (set as ROCM_PATH in the uenv view / activate script).
-export JUHPC_MPI_HOME=$MPICH_DIR                             # Used for MPI.jl system binary discovery (set in preferences).
-export JUHPC_MPI_VENDOR=                                     # Used for MPI.jl system binary discovery (used to set preferences).
-export JUHPC_MPI_EXEC="srun -C gpu"                          # Used for MPI.jl exec command discovery (set in preferences). Arguments are space separated, e.g. "srun -C gpu".
-export JUHPC_HDF5_HOME=$HDF5_DIR                             # Used for HDF5.jl library discovery (set in preferences).
-export JUHPC_ADIOS2_HOME=                                    # Used for ADIOS2.jl library discovery (set as JULIA_ADIOS2_PATH in the uenv view / activate script).
+export JUHPC_CUDA_HOME=$CUDA_HOME                            # Used for CUDA.jl runtime discovery (set as CUDA_HOME in the activate script).
+export JUHPC_CUDA_RUNTIME_VERSION=$CRAY_CUDATOOLKIT_VERSION  # Used for CUDA.jl runtime version definition (set in preferences).
+export JUHPC_ROCM_HOME=                                      # Used for AMDGPU.jl runtime discovery (set as ROCM_PATH in the activate script).
+export JUHPC_MPI_HOME=$MPICH_DIR                             # Used for MPI.jl system binary discovery (set in preferences).
+export JUHPC_MPI_VENDOR=                                     # Used for MPI.jl system binary discovery (used to set preferences).
+export JUHPC_MPI_EXEC="srun -C gpu"                          # Used for MPI.jl exec command discovery (set in preferences). Arguments are space separated, e.g. "srun -C gpu".
+export JUHPC_HDF5_HOME=$HDF5_DIR                             # Used for HDF5.jl library discovery (set in preferences).
+export JUHPC_ADIOS2_HOME=                                    # Used for ADIOS2.jl library discovery (set as JULIA_ADIOS2_PATH in the activate script).
+
+
+# Create site-specific post-install script (currently MPIPreferences does not provide an option to set required preloads if not automatically detected; JUHPC_MPI_VENDOR fails on Piz Daint...)
+JUHPC_POST_INSTALL_JL=./post_install.jl
+echo 'using Preferences
+set_preferences!("MPIPreferences",
+    "preloads" => ["libcuda.so", "libcudart.so"],
+    "preloads_env_switch" => "MPICH_RDMA_ENABLED_CUDA";
+    force=true
+)' > $JUHPC_POST_INSTALL_JL


 # Call JUHPC
 git clone https://github.com/omlins/JUHPC
 JUHPC=./JUHPC/src/juhpc
-JUHPC_SETUP_INSTALLDIR=$SCRATCH/${HOSTNAME%%-*}/juhpc_setup  # HPC setup installation environment variables must be expanded during installation.
-JULIAUP_INSTALLDIR="\$SCRATCH/\${HOSTNAME%%-*}/juliaup"      # User environment variables SCRATCH and HOSTNAME must not be expanded during uenv build, but during usage. Separate installation by HOSTNAME is required, because different hosts with different architectures can share the same file system (e.g., daint and eiger on ALPS).
-bash -l $JUHPC $JUHPC_SETUP_INSTALLDIR $JULIAUP_INSTALLDIR
+JUHPC_SETUP_INSTALLDIR=$SCRATCH/../julia/${HOSTNAME%%[0-9]*}-gpu/juhpc_setup  # HPC setup installation environment variables must be expanded during installation.
+JULIAUP_INSTALLDIR="\$SCRATCH/../julia/\$USER/\${HOSTNAME%%[0-9]*}-gpu/juliaup"  # User environment variables SCRATCH and HOSTNAME must not be expanded during HPC setup installation, but during usage. Separate installation by HOSTNAME is required, because different hosts with different architectures can share the same file system (e.g., daint and eiger on ALPS).
+bash -l $JUHPC $JUHPC_SETUP_INSTALLDIR $JULIAUP_INSTALLDIR $JUHPC_POST_INSTALL_JL


 # Activate the HPC setup environment variables
@@ -58,5 +68,3 @@ julia -e 'using Pkg; Pkg.add("HDF5"); using HDF5; @show HDF5.has_parallel()'
 # Test CUDA-aware MPI
 cd ~/cudaaware
 MPICH_GPU_SUPPORT_ENABLED=1 srun -Acsstaff -C'gpu' -N2 -n2 julia cudaaware.jl
-
-# julia -e 'import Pkg; Pkg.test("MPI"; test_args=["--backend=CUDA"])'
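
The driver script ~/cudaaware/cudaaware.jl invoked by the srun line above is not part of this commit. As a point of reference only, a minimal CUDA-aware MPI check could be created as sketched below; this is an assumption about what such a test typically contains (it presumes MPI.jl >= 0.20 and CUDA.jl are already added to the active Julia environment), not the actual script used at CSCS:

cat > ~/cudaaware/cudaaware.jl << 'EOF'
# Minimal CUDA-aware MPI check (illustrative): each rank hands a GPU buffer
# directly to MPI and receives one from its neighbor.
using MPI, CUDA
MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
dst  = mod(rank + 1, MPI.Comm_size(comm))
src  = mod(rank - 1, MPI.Comm_size(comm))
send_buf = CUDA.fill(Float64(rank), 4)   # device buffer passed to MPI without host staging
recv_buf = CUDA.zeros(Float64, 4)
CUDA.synchronize()
MPI.Sendrecv!(send_buf, recv_buf, comm; dest=dst, source=src)
println("rank $rank received $(Array(recv_buf))")
MPI.Finalize()
EOF

With MPICH_GPU_SUPPORT_ENABLED=1 set as in the srun command above, the Sendrecv! on CuArrays should complete directly on device buffers if the CUDA-aware MPI setup works.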
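
Independently of the test above, two quick sanity checks can be run once the generated activate script has been sourced and Julia has been installed via juliaup (steps presumably covered by the lines not shown between the two hunks). These commands are illustrative and assume CUDA.jl and MPI.jl have already been added to the environment:

julia -e 'using CUDA; CUDA.versioninfo()'   # should report the Cray-provided CUDA runtime set via JUHPC_CUDA_HOME
julia -e 'using MPI; MPI.versioninfo()'     # should report the system (Cray MPICH) binary selected via MPIPreferences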
