
Commit eb47bc3

Merge branch 'slurm/zen2' of github.com:bcumming/cscs-docs into slurm/zen2
2 parents 2bcea80 + 6ddff4a commit eb47bc3

File tree

2 files changed: +5, -5 lines changed


docs/alps/hardware.md

Lines changed: 2 additions & 2 deletions

```diff
@@ -82,8 +82,8 @@ Each node contains four Grace-Hopper modules and four corresponding network inte
 
 These nodes have two [AMD Epyc 7742](https://en.wikichip.org/wiki/amd/epyc/7742) 64-core CPU sockets, and are used primarily for the [Eiger][ref-cluster-eiger] system. They come in two memory configurations:
 
-* *Standard-memory*: 256 GB in 16x16 GB DDR4 Dimms.
-* *Large-memory*: 512 GB in 16x32 GB DDR4 Dimms.
+* *Standard-memory*: 256 GB in 16x16 GB DDR4 DIMMs.
+* *Large-memory*: 512 GB in 16x32 GB DDR4 DIMMs.
 
 !!! note "Not all memory is available"
     The total memory available to jobs on the nodes is roughly 245 GB and 497 GB on the standard and large memory nodes respectively.
```

docs/running/slurm.md

Lines changed: 3 additions & 3 deletions

````diff
@@ -43,7 +43,7 @@ sbatch --account=g123 ./job.sh
 ```
 
 !!! note
-    The flag `--account` and `-Cmc` that were required on the old Eiger cluster are no longer required.
+    The flags `--account` and `-Cmc` that were required on the old Eiger cluster are no longer required.
 
 ## Prioritization and scheduling
 
@@ -62,7 +62,7 @@ At CSCS, SLURM is configured to accommodate the diverse range of node types avai
 These nodes vary in architecture, including CPU-only nodes and nodes equipped with different types of GPUs.
 Because of this heterogeneity, SLURM must be tailored to ensure efficient resource allocation, job scheduling, and workload management specific to each node type.
 
-Each type of node has different resource constraints and capabilities, which SLURM takes into account when scheduling jobs.
+Each type of node has different resource constraints and capabilities, which Slurm takes into account when scheduling jobs.
 For example, CPU-only nodes may have configurations optimized for multi-threaded CPU workloads, while GPU nodes require additional parameters to allocate GPU resources efficiently.
 SLURM ensures that user jobs request and receive the appropriate resources while preventing conflicts or inefficient utilization.
 
@@ -317,7 +317,7 @@ The configuration that is optimal for your application may be different.
 [NVIDIA's Multi-Process Service (MPS)]: https://docs.nvidia.com/deploy/mps/index.html
 
 [](){#ref-slurm-amdcpu}
-## AMD CPU Nodes
+## AMD CPU nodes
 
 Alps has nodes with two AMD Epyc Rome CPU sockets per node for CPU-only workloads, most notably in the [Eiger][ref-cluster-eiger] cluster provided by the [HPC Platform][ref-platform-hpcp].
 For a detailed description of the node hardware, see the [AMD Rome node][ref-alps-zen2-node] hardware documentation.
````
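The patched note in docs/running/slurm.md states that `--account` and `-Cmc` are no longer required on the new Eiger cluster. A minimal batch script for the dual-socket AMD Rome nodes might look like the following sketch; the job name, time limit, and the choice of one rank per physical core are illustrative assumptions, not taken from this commit:

```shell
#!/bin/bash
#SBATCH --job-name=zen2-example   # hypothetical job name
#SBATCH --nodes=1                 # one AMD Rome node: 2x 64-core Epyc 7742
#SBATCH --ntasks-per-node=128     # one MPI rank per physical core (assumption)
#SBATCH --time=00:10:00

# Neither --account nor -Cmc is needed here, per the note
# patched in docs/running/slurm.md in this commit.
srun ./my_app
```

Submitted with `sbatch job.sh`; an account can still be set explicitly with `sbatch --account=g123 job.sh`, as shown in the hunk header of the first diff.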

0 commit comments
