4 changes: 2 additions & 2 deletions docs/alps/hardware.md
@@ -100,7 +100,7 @@ These nodes have two [AMD Epyc 7742](https://en.wikichip.org/wiki/amd/epyc/7742)
 !!! note "Not all memory is available"
     The total memory available to jobs on the nodes is roughly 245 GB and 497 GB on the standard and large memory nodes respectively.
 
-    The amount of memory available to your job also depends on the number of MPI ranks per node -- each MPI rank has a memory overhead.
+    The amount of memory available to your job also depends on the number of MPI ranks per node---each MPI rank has a memory overhead.
 
 A schematic of a *standard memory node* below illustrates the CPU cores and [NUMA nodes](https://www.kernel.org/doc/html/v4.18/vm/numa.html).(1)
 {.annotate}
@@ -129,7 +129,7 @@ The Grizzly Peak blades contain two nodes, where each node has:
 * 512 GB DDR4 Memory
 * 4 NVIDIA A100 GPUs with 80 GB HBM3 memory each
     * The MCH system is the same, except the A100 have 96 GB of memory.
-* 4 NICs -- one per GPU.
+* 4 NICs---one per GPU.
 
 [](){#ref-alps-mi200-node}
 ### AMD MI250x GPU Nodes
2 changes: 1 addition & 1 deletion docs/platforms/hpcp/index.md
@@ -67,7 +67,7 @@ The Store (or Project) file system is provided as a space to store datasets, cod
 The environment variable `$STORE` can be used as a shortcut to access the Store folder of your primary project.
 
 Hard limits on the amount of data and number of files (inodes) will prevent you from writing to [Store][ref-storage-store] if your quotas are exceeded.
-You can check how much data and inodes you are consuming -- and their respective quotas -- by running the [`quota`][ref-storage-quota] command on a login node.
+You can check how much data and inodes you are consuming---and their respective quotas---by running the [`quota`][ref-storage-quota] command on a login node.
 
 !!! warning
     It is not recommended to write directly to the `$STORE` path from batch jobs.
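For anyone reproducing the check described in this hunk, a minimal sketch run from a login node; it assumes nothing beyond the `$STORE` variable and the `quota` command already named in the text:

```bash
# print the path of the Store folder for your primary project
echo "$STORE"

# report current data and inode usage, together with the corresponding quotas
quota
```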
6 changes: 3 additions & 3 deletions docs/running/slurm.md
@@ -119,7 +119,7 @@ The following sections will document how to use Slurm on different compute nodes
 To demonstrate the effects different Slurm parameters, we will use a little command line tool [affinity](https://github.com/bcumming/affinity) that prints the CPU cores and GPUs that are assigned to each MPI rank in a job, and which node they are run on.
 
 We strongly recommend using a tool like affinity to understand and test the Slurm configuration for jobs, because the behavior of Slurm is highly dependent on the system configuration.
-Parameters that worked on a different cluster -- or with a different Slurm version or configuration on the same cluster -- are not guaranteed to give the same results.
+Parameters that worked on a different cluster---or with a different Slurm version or configuration on the same cluster---are not guaranteed to give the same results.
 
 It is straightforward to build the affinity tool to experiment with Slurm configurations.
 
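As a rough sketch of building and running the affinity tool mentioned in this hunk, assuming the repository uses a standard CMake build and produces an MPI binary named `affinity.mpi` (both assumptions, not verified against the repository):

```bash
# clone and build the affinity test tool (assumes CMake and an MPI compiler are available)
git clone https://github.com/bcumming/affinity.git
cd affinity
cmake -B build
cmake --build build

# launch a few ranks through Slurm and print which cores/GPUs each rank receives
srun -n 4 ./build/affinity.mpi
```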
@@ -488,7 +488,7 @@ rank 7 @ nid002199: thread 0 -> cores [112:127]
 
 ### OpenMP
 
-In the above examples all threads on each -- we are effectively allowing the OS to schedule the threads on the available set of cores as it sees fit.
+In the above examples all threads on each---we are effectively allowing the OS to schedule the threads on the available set of cores as it sees fit.
 This often gives the best performance, however sometimes it is beneficial to bind threads to explicit cores.
 
 The OpenMP threading runtime provides additional options for controlling the pinning of threads to the cores assigned to each MPI rank.
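As a hedged illustration of the OpenMP pinning controls this hunk refers to, using the standard `OMP_PROC_BIND` and `OMP_PLACES` environment variables (the `affinity.omp` binary name is an assumption, reused from the sketch above):

```bash
# 4 MPI ranks with 8 cores each; ask the OpenMP runtime to pin one thread per core
export OMP_NUM_THREADS=8
export OMP_PROC_BIND=close   # keep each thread close to its rank's assigned cores
export OMP_PLACES=cores      # define one place per physical core
srun -n 4 -c 8 ./build/affinity.omp
```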
@@ -599,7 +599,7 @@ First ensure that *all* resources are allocated to the whole job with the following
 Next, launch your applications using `srun`, carefully subdividing resources for each job step.
 The `--exclusive` flag must be used again, but note that its meaning differs in the context of `srun`.
 Here, `--exclusive` ensures that only the resources explicitly requested for a given job step are reserved and allocated to it.
-Without this flag, Slurm reserves all resources for the job step, even if it only allocates a subset -- effectively blocking further parallel `srun` invocations from accessing unrequested but needed resources.
+Without this flag, Slurm reserves all resources for the job step, even if it only allocates a subset---effectively blocking further parallel `srun` invocations from accessing unrequested but needed resources.
 
 Be sure to background each `srun` command with `&`, so that subsequent job steps start immediately without waiting for previous ones to finish.
 A final `wait` command ensures that your submission script does not exit until all job steps complete.
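A minimal sketch of the pattern this hunk describes, with placeholder rank counts and program names (not a verified recipe for any particular cluster):

```bash
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --exclusive            # allocate all resources on the node to the job

# Two job steps sharing the node; --exclusive on srun limits each step
# to the resources it explicitly requests.
srun --exclusive -n 32 ./app_one &
srun --exclusive -n 32 ./app_two &

# Do not exit the batch script until both steps have finished.
wait
```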
2 changes: 1 addition & 1 deletion docs/software/cw/wrf.md
@@ -35,7 +35,7 @@ The [`prgenv-gnu`][ref-uenv-prgenv-gnu] uenv is suitable for building WRF.
 ```
 uenv start prgenv-gnu/24.11:v2 --view=spack
 ```
-In this example we use the latest version of `prgenv-gnu` on Eiger at the time of writing -- check the `prgenv-gnu` [guide][ref-uenv-prgenv-gnu] for the latest version.
+In this example we use the latest version of `prgenv-gnu` on Eiger at the time of writing---check the `prgenv-gnu` [guide][ref-uenv-prgenv-gnu] for the latest version.
 
 ```bash
 # build the latest version provided by the version of Spack used by prgenv-gnu
1 change: 1 addition & 0 deletions mkdocs.yml
@@ -201,6 +201,7 @@ theme:
 markdown_extensions:
 - admonition
 - def_list
+- markdown.extensions.smarty
 - pymdownx.details
 - pymdownx.superfences
 - pymdownx.superfences: