
Commit 2da0daf

Daniel Vickers (danieljvickers) and Ben Wilfong (wilfonba) authored
Resolve nvhpc 25 3 (#1020)
Co-authored-by: Daniel Vickers <[email protected]>
Co-authored-by: Daniel Vickers <[email protected]>
Co-authored-by: Ben Wilfong <[email protected]>
1 parent 199aaeb commit 2da0daf

6 files changed, +74 -4 lines changed

benchmarks/5eq_rk3_weno3_hllc/case.py

Lines changed: 1 addition & 1 deletion
@@ -81,7 +81,7 @@
 c_w = math.sqrt(gamw * (p0w + piw) / rho0w)
 
 # Shock Mach number of interest. Note that the post-shock properties can be defined in terms of either
-# Min or psOp0a. Just comment/uncomment appropriatelly
+# Min or psOp0a. Just comment/uncomment appropriately
 Min = 2.4
 
 ## Pos to pre shock ratios - AIR
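
The comment fixed here refers to choosing between the shock Mach number Min and the post- to pre-shock pressure ratio psOp0a when defining the post-shock state. As a minimal illustrative sketch only (assuming an ideal-gas pre-shock air state; the variable name gam_a and the values below are illustrative and not taken from the case file), the two quantities are linked by the standard normal-shock relation:

# Illustrative values (assumptions, not from the commit):
gam_a = 1.4   # ratio of specific heats of air
Min = 2.4     # shock Mach number, as set in this benchmark case

# Normal-shock (Rankine-Hugoniot) pressure jump for an ideal gas:
#   ps/p0 = 1 + 2*gamma/(gamma + 1) * (Min**2 - 1)
psOp0a = 1.0 + 2.0 * gam_a / (gam_a + 1.0) * (Min**2 - 1.0)
print(psOp0a)  # ~6.55 for Min = 2.4 and gamma = 1.4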

examples/2D_axisym_shockwatercavity/case.py

Lines changed: 1 addition & 1 deletion
@@ -59,7 +59,7 @@
 c_w = math.sqrt(gamw * (p0w + piw) / rho0w)
 
 # Shock Mach number of interest. Note that the post-shock properties can be defined in terms of either
-# Min or psOp0a. Just comment/uncomment appropriatelly
+# Min or psOp0a. Just comment/uncomment appropriately
 Min = 2.146
 
 ## Pos to pre shock ratios - AIR

examples/3D_shockdroplet/case.py

Lines changed: 1 addition & 1 deletion
@@ -59,7 +59,7 @@
 c_w = math.sqrt(gamw * (p0w + piw) / rho0w)
 
 # Shock Mach number of interest. Note that the post-shock properties can be defined in terms of either
-# Min or psOp0a. Just comment/uncomment appropriatelly
+# Min or psOp0a. Just comment/uncomment appropriately
 Min = 2.4
 
 ## Pos to pre shock ratios - AIR

examples/3D_shockdroplet_muscl/case.py

Lines changed: 1 addition & 1 deletion
@@ -59,7 +59,7 @@
 c_w = math.sqrt(gamw * (p0w + piw) / rho0w)
 
 # Shock Mach number of interest. Note that the post-shock properties can be defined in terms of either
-# Min or psOp0a. Just comment/uncomment appropriatelly
+# Min or psOp0a. Just comment/uncomment appropriately
 Min = 2.4
 
 ## Pos to pre shock ratios - AIR

toolchain/modules

Lines changed: 9 additions & 0 deletions
@@ -88,3 +88,12 @@ n-gpu CC=nvc CXX=nvc++ FC=nvfortran
 san CSCS Santis
 san-all cmake python
 san-gpu nvhpc cuda cray-mpich
+
+h hipergator
+h-gpu nvhpc/25.9
+h-gpu CUDA_HOME="/apps/compilers/cuda/12.8.1"
+h-all HPC_OMPI_DIR="/apps/mpi/cuda/12.8.1/nvhpc/25.3/openmpi/5.0.7"
+h-all HPC_OMPI_BIN="/apps/mpi/cuda/12.8.1/nvhpc/25.3/openmpi/5.0.7/bin"
+h-all OMPI_MCA_pml=ob1 OMPI_MCA_coll_hcoll_enable=0
+h-gpu PATH="/apps/mpi/cuda/12.8.1/nvhpc/25.3/openmpi/5.0.7/bin:${PATH}"
+h-gpu MFC_CUDA_CC=100 NVHPC_CUDA_HOME="/apps/compilers/cuda/12.8.1"
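
These nine lines register a new machine code, h (HiPerGator), with the toolchain's module table; as the -all/-gpu suffixes suggest, h-all entries apply to every build while h-gpu entries add the NVHPC compiler, CUDA 12.8.1, and the matching OpenMPI 5.0.7 stack. A minimal usage sketch, assuming the -c/-m loader interface that the batch template below already relies on:

# Source the HiPerGator GPU environment defined above (NVHPC + CUDA 12.8.1 + OpenMPI 5.0.7).
. ./mfc.sh load -c h -m g

# Or the CPU-only variant, which skips the h-gpu entries.
. ./mfc.sh load -c h -m c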
Lines changed: 61 additions & 0 deletions
@@ -0,0 +1,61 @@
+#!/usr/bin/env bash
+
+<%namespace name="helpers" file="helpers.mako"/>
+
+% if engine == 'batch':
+#SBATCH --nodes=${nodes}
+#SBATCH --ntasks-per-node=${tasks_per_node}
+#SBATCH --job-name="${name}"
+#SBATCH --output="${name}.out"
+#SBATCH --time=${walltime}
+#SBATCH --cpus-per-task=7
+% if gpu:
+#SBATCH --gpus-per-task=1
+#SBATCH --gpu-bind=closest
+% endif
+% if account:
+#SBATCH --account=${account}
+% endif
+% if partition:
+#SBATCH --partition=${partition}
+% else:
+#SBATCH --partition=hpg-b200
+% endif
+% if quality_of_service:
+#SBATCH --qos=${quality_of_service}
+% endif
+% if email:
+#SBATCH --mail-user=${email}
+#SBATCH --mail-type="BEGIN, END, FAIL"
+% endif
+% endif
+
+${helpers.template_prologue()}
+
+ok ":) Loading modules:\n"
+cd "${MFC_ROOT_DIR}"
+% if engine == 'batch':
+. ./mfc.sh load -c h -m ${'g' if gpu else 'c'}
+% endif
+cd - > /dev/null
+echo
+
+
+% for target in targets:
+${helpers.run_prologue(target)}
+
+% if not mpi:
+(set -x; ${profiler} "${target.get_install_binpath(case)}")
+% else:
+(set -x; ${profiler} \
+    mpirun -np ${nodes*tasks_per_node} \
+           --bind-to none \
+           "${target.get_install_binpath(case)}")
+% endif
+
+${helpers.run_epilogue(target)}
+
+echo
+% endfor
+
+${helpers.template_epilogue()}
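
For reference, a sketch of what this template renders to for a hypothetical GPU batch job (assuming nodes=2, tasks_per_node=4, a one-hour walltime, MPI enabled, and no explicit partition so the hpg-b200 default applies; the job name and values are illustrative, not taken from the commit, and the helper prologue/epilogue output is elided):

#!/usr/bin/env bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --job-name="my_case"
#SBATCH --output="my_case.out"
#SBATCH --time=01:00:00
#SBATCH --cpus-per-task=7
#SBATCH --gpus-per-task=1
#SBATCH --gpu-bind=closest
#SBATCH --partition=hpg-b200

# prologue expanded from helpers.mako, then the module load:
. ./mfc.sh load -c h -m g

# one launch per target; nodes*tasks_per_node = 8 ranks in total
mpirun -np 8 --bind-to none "<path to the target's installed binary>"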
