Commit d8de5c5

Link Eiger docs to the Slurm docs (#171)
1 parent 0dc0bef commit d8de5c5

File tree

3 files changed: +15 −22 lines changed


docs/clusters/daint.md

Lines changed: 1 addition & 2 deletions

@@ -10,6 +10,7 @@ Daint is the main [HPC Platform][ref-platform-hpcp] cluster that provides comput
 Daint consists of around 800-1000 [Grace-Hopper nodes][ref-alps-gh200-node].
 
 The number of nodes can vary as nodes are added or removed from other clusters on Alps.
+See the [Slurm documentation][ref-slurm-partitions-nodecount] for information on how to check the number of nodes.
 
 There are four login nodes, `daint-ln00[1-4]`.
 You will be assigned to one of the four login nodes when you ssh onto the system, from where you can edit files, compile applications and launch batch jobs.
@@ -112,8 +113,6 @@ There are four [Slurm partitions][ref-slurm-partitions] on the system:
 * the `xfer` partition is for [internal data transfer][ref-data-xfer-internal].
 * the `low` partition is a low-priority partition, which may be enabled for specific projects at specific times.
 
-
-
 | name | nodes | max nodes per job | time limit |
 | -- | -- | -- | -- |
 | `normal` | unlim | - | 24 hours |

docs/clusters/eiger.md

Lines changed: 12 additions & 19 deletions
@@ -3,11 +3,10 @@
 
 Eiger is an Alps cluster that provides compute nodes and file systems designed to meet the needs of CPU-only workloads for the [HPC Platform][ref-platform-hpcp].
 
-!!! under-construction
-    This documentation is for the updated cluster `Eiger.Alps` reachable at `eiger.alps.cscs.ch`, that has replaced the former cluster as of June 30 2025.
-    The previous [Eiger User Guide](https://confluence.cscs.ch/spaces/KB/pages/284426490/Alps+Eiger+User+Guide) is still available on the legacy Knowledge Base.
+!!! note
+    This documentation is for the updated cluster `Eiger.Alps`, reachable at `eiger.alps.cscs.ch`, which replaced the former cluster on July 1 2025.
 
-!!! change "Important changes"
+??? change "Important changes from Eiger"
     The redeployment of `eiger.cscs.ch` as `eiger.alps.cscs.ch` has introduced changes that may affect some users.
 
 ### Breaking changes
@@ -29,10 +28,10 @@ Eiger is an Alps cluster that provides compute nodes and file systems designed t
 
 ### Unimplemented features
 
-!!! under-construction "Jupyter and FirecREST is not yet available"
-    [Jupyter and FirecREST][ref-firecrest] have not been configured on `Eiger.Alps`.
+!!! under-construction "Jupyter is not yet available"
+    [Jupyter][ref-jupyter] has not yet been configured on `Eiger.Alps`.
 
-    **They will be deployed as soon as possible and this documentation will be updated accordingly**
+    **It will be deployed as soon as possible and this documentation will be updated accordingly**
 
 ### Minor changes
 
@@ -42,16 +41,10 @@ Eiger is an Alps cluster that provides compute nodes and file systems designed t
 
 ### Compute nodes
 
-Eiger consists of multicore [AMD Epyc Rome][ref-alps-zen2-node] compute nodes: please note that the total number of available compute nodes on the system might vary over time, therefore you might want to check them with the Slurm command `sinfo -s`.
-```
-PARTITION AVAIL TIMELIMIT NODES(A/I/O/T) NODELIST
-debug up 30:00 0/12/0/12 nid[002236-002247]
-xfer up 1-00:00:00 0/4/0/4 nid[002232-002235]
-prepost up 30:00 0/560/0/560 nid[001000-001023,001028-001031,001064-001127,001160-001191,001256-001267,001272-001287,001320-001447,001504-001539,001541-001543,001573-001599,001640-001767,001797-001799,001829-001831,002152-002231]
-normal* up 1-00:00:00 0/560/0/560 nid[001000-001023,001028-001031,001064-001127,001160-001191,001256-001267,001272-001287,001320-001447,001504-001539,001541-001543,001573-001599,001640-001767,001797-001799,001829-001831,002152-002231]
-low up 1-00:00:00 0/560/0/560 nid[001000-001023,001028-001031,001064-001127,001160-001191,001256-001267,001272-001287,001320-001447,001504-001539,001541-001543,001573-001599,001640-001767,001797-001799,001829-001831,002152-002231]
-```
-Additionally, there are four login nodes with hostnames `eiger-ln00[1-4]`: .
+Eiger consists of multicore [AMD Epyc Rome][ref-alps-zen2-node] compute nodes; note that the total number of available compute nodes on the system may vary over time.
+See the [Slurm documentation][ref-slurm-partitions-nodecount] for information on how to check the number of nodes.
+
+Additionally, there are four login nodes with hostnames `eiger-ln00[1-4]`.
 
 ### Storage and file systems
 
@@ -168,9 +161,9 @@ See the Slurm documentation for instructions on how to run jobs on the [AMD CPU 
 ### Jupyter and FirecREST
 
 !!! under-construction "FirecREST is not yet available"
-    [Jupyter and FirecREST][ref-firecrest] have not been configured on `Eiger.Alps`.
+    [Jupyter][ref-jupyter] has not yet been configured on `Eiger.Alps`.
 
-    **They will be deployed as soon as possible and this documentation will be updated accordingly**
+    **It will be deployed as soon as possible and this documentation will be updated accordingly**
 
 ## Maintenance and status
 

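As an editorial aside: the `sinfo -s` listing removed from eiger.md above reports node counts in the `NODES(A/I/O/T)` column, meaning allocated/idle/other/total. A minimal sketch of tallying per-partition totals from such output, assuming the five-column format shown in the removed example (the helper name is hypothetical, not part of any CSCS tooling), could be:

```python
def node_totals(sinfo_text: str) -> dict[str, int]:
    """Map partition name -> total node count from `sinfo -s` output.

    Assumes the five-column format PARTITION AVAIL TIMELIMIT
    NODES(A/I/O/T) NODELIST, as in the removed example above.
    """
    totals = {}
    for line in sinfo_text.strip().splitlines()[1:]:  # skip header row
        fields = line.split()
        partition, aiot = fields[0], fields[3]
        # The default partition is marked with a trailing "*" (e.g. "normal*");
        # the T field of A/I/O/T is the total node count.
        totals[partition.rstrip("*")] = int(aiot.split("/")[3])
    return totals

sample = """\
PARTITION AVAIL TIMELIMIT NODES(A/I/O/T) NODELIST
debug up 30:00 0/12/0/12 nid[002236-002247]
xfer up 1-00:00:00 0/4/0/4 nid[002232-002235]
"""
print(node_totals(sample))  # {'debug': 12, 'xfer': 4}
```

This only illustrates the column layout; on the cluster itself, `sinfo -s` (as the docs now say, via the linked Slurm page) is the authoritative way to check node counts.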
docs/running/slurm.md

Lines changed: 2 additions & 1 deletion
@@ -43,7 +43,7 @@ $ sbatch --account=g123 ./job.sh
 ```
 
 !!! note
-    The flags `--account` and `-Cmc` that were required on the old Eiger cluster are no longer required.
+    The flags `--account` and `-Cmc` that were required on the old [Eiger][ref-cluster-eiger] cluster are no longer required.
 
 ## Prioritization and scheduling
 
@@ -66,6 +66,7 @@ Each type of node has different resource constraints and capabilities, which Slu
 For example, CPU-only nodes may have configurations optimized for multi-threaded CPU workloads, while GPU nodes require additional parameters to allocate GPU resources efficiently.
 Slurm ensures that user jobs request and receive the appropriate resources while preventing conflicts or inefficient utilization.
 
+[](){#ref-slurm-partitions-nodecount}
 !!! example "How to check the partitions and number of nodes therein?"
     You can check the size of the system by running the following command in the terminal:
     ```console