**docs/alps/hardware.md** (2 additions, 2 deletions)

````diff
@@ -82,8 +82,8 @@ Each node contains four Grace-Hopper modules and four corresponding network inte
 
 These nodes have two [AMD Epyc 7742](https://en.wikichip.org/wiki/amd/epyc/7742) 64-core CPU sockets, and are used primarily for the [Eiger][ref-cluster-eiger] system. They come in two memory configurations:
 
-**Standard-memory**: 256 GB in 16x16 GB DDR4 Dimms.
-**Large-memory**: 512 GB in 16x32 GB DDR4 Dimms.
+**Standard-memory**: 256 GB in 16x16 GB DDR4 DIMMs.
+**Large-memory**: 512 GB in 16x32 GB DDR4 DIMMs.
 
 !!! note "Not all memory is available"
     The total memory available to jobs on the nodes is roughly 245 GB and 497 GB on the standard and large memory nodes respectively.
````
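For illustration only, a minimal sketch of how the usable-memory figures might be used when requesting resources; `--mem` is a standard Slurm option, but whether it is the recommended way to target the large-memory Eiger nodes is an assumption, not something stated in this page:

```bash
# Requesting more memory than a standard node can offer (~245 GB usable)
# means the job can only be placed on a large-memory node (~497 GB usable).
# The 300G figure is an arbitrary illustrative value.
sbatch --mem=300G ./job.sh
```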
**docs/running/slurm.md** (3 additions, 3 deletions)

````diff
@@ -43,7 +43,7 @@ sbatch --account=g123 ./job.sh
 ```
 
 !!! note
-    The flag `--account` and `-Cmc` that were required on the old Eiger cluster are no longer required.
+    The flags `--account` and `-Cmc` that were required on the old Eiger cluster are no longer required.
 
 ## Prioritization and scheduling
 
````
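As a hedged sketch of the submission pattern the note refers to, assuming a generic job script (the directives and the `my_app` executable below are illustrative placeholders, not taken from these docs):

```bash
#!/bin/bash
# job.sh -- minimal illustrative batch script
#SBATCH --job-name=example
#SBATCH --nodes=1
#SBATCH --time=00:10:00

srun ./my_app   # placeholder executable
```

Per the note, `sbatch --account=g123 ./job.sh` still works, but the `--account` and `-Cmc` flags required on the old Eiger cluster are no longer mandatory, so a plain `sbatch ./job.sh` should also be accepted.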
````diff
@@ -62,7 +62,7 @@ At CSCS, SLURM is configured to accommodate the diverse range of node types avai
 These nodes vary in architecture, including CPU-only nodes and nodes equipped with different types of GPUs.
 Because of this heterogeneity, SLURM must be tailored to ensure efficient resource allocation, job scheduling, and workload management specific to each node type.
 
-Each type of node has different resource constraints and capabilities, which SLURM takes into account when scheduling jobs.
+Each type of node has different resource constraints and capabilities, which Slurm takes into account when scheduling jobs.
 For example, CPU-only nodes may have configurations optimized for multi-threaded CPU workloads, while GPU nodes require additional parameters to allocate GPU resources efficiently.
 SLURM ensures that user jobs request and receive the appropriate resources while preventing conflicts or inefficient utilization.
 
````
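To make the point about GPU nodes needing additional allocation parameters concrete, a hedged sketch of two submissions; the task counts and the use of `--gpus-per-node` are generic Slurm usage assumed here, not values prescribed by these docs:

```bash
# CPU-only node: only the core/task layout needs to be requested.
sbatch --nodes=1 --ntasks-per-node=16 --cpus-per-task=8 ./cpu_job.sh

# GPU node: GPU resources must be requested explicitly in addition to the
# CPU layout (the exact recommended flags may differ per Alps cluster).
sbatch --nodes=1 --ntasks-per-node=4 --gpus-per-node=4 ./gpu_job.sh
```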
````diff
@@ -317,7 +317,7 @@ The configuration that is optimal for your application may be different.
 [NVIDIA's Multi-Process Service (MPS)]: https://docs.nvidia.com/deploy/mps/index.html
 
 [](){#ref-slurm-amdcpu}
-## AMD CPU Nodes
+## AMD CPU nodes
 
 Alps has nodes with two AMD Epyc Rome CPU sockets per node for CPU-only workloads, most notably in the [Eiger][ref-cluster-eiger] cluster provided by the [HPC Platform][ref-platform-hpcp].
 For a detailed description of the node hardware, see the [AMD Rome node][ref-alps-zen2-node] hardware documentation.
````
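A rough sketch of a CPU-only job on such a node (two 64-core sockets, 128 cores in total, per the hardware page); the rank/thread split shown is an assumed example layout, not a recommendation from this section:

```bash
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=16   # 16 MPI ranks per node (assumed layout)
#SBATCH --cpus-per-task=8      # 16 x 8 = 128 cores, i.e. both 64-core sockets
#SBATCH --time=01:00:00

srun ./my_cpu_app              # placeholder executable
```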