Commit 6c24cfb

add platform docs for CWp (#61)
* add santis docs; move storage into platform docs
1 parent f97fa33 commit 6c24cfb

File tree

4 files changed: +244 -51 lines changed


docs/clusters/clariden.md

Lines changed: 2 additions & 38 deletions
@@ -16,45 +16,9 @@ The number of nodes can change when nodes are added or removed from other cluste

Most nodes are in the [`normal` slurm partition][ref-slurm-partition-normal], while a few nodes are in the [`debug` partition][ref-slurm-partition-debug].

-### File Systems and Storage
+### Storage and file systems

-There are three main file systems mounted on Clariden and Bristen.
-
-| type |mount | filesystem |
-| -- | -- | -- |
-| Home | /users/$USER | [VAST][ref-alps-vast] |
-| Scratch | `/iopstor/scratch/cscs/$USER` | [Iopstor][ref-alps-iopstor] |
-| Project | `/capstor/store/cscs/swissai/<project>` | [Capstor][ref-alps-capstor] |
-
-#### Home
-
-Every user has a home path (`$HOME`) mounted at `/users/$USER` on the [VAST][ref-alps-vast] filesystem.
-The home directory has 50 GB of capacity, and is intended for configuration, small software packages and scripts.
-
-#### Scratch
-
-Scratch filesystems provide temporary storage for high-performance I/O for executing jobs.
-Use scratch to store datasets that will be accessed by jobs, and for job output.
-Scratch is per user - each user gets separate scratch path and quota.
-
-* The environment variable `SCRATCH=/iopstor/scratch/cscs/$USER` is set automatically when you log into the system, and can be used as a shortcut to access scratch.
-
-!!! warning "scratch cleanup policy"
-    Files that have not been accessed in 30 days are automatically deleted.
-
-**Scratch is not intended for permanent storage**: transfer files back to the capstor project storage after job runs.
-
-!!! note
-    There is an additional scratch path mounted on [Capstor][ref-alps-capstor] at `/capstor/scratch/cscs/$USER`, however this is not recommended for ML workloads for performance reasons.
-
-### Project
-
-Project storage is backed up, with no cleaning policy: it provides intermediate storage space for datasets, shared code or configuration scripts that need to be accessed from different vClusters.
-Project is per project - each project gets a project folder with project-specific quota.
-
-* if you need additional storage, ask your PI to contact the CSCS service managers Fawzi or Nicholas.
-* hard limits on capacity and inodes prevent users from writing to project if the quota is reached - you can check quota and available space by running the [`quota`][ref-storage-quota] command on a login node or ela
-* it is not recommended to write directly to the project path from jobs.
+Clariden uses the [MLp filesystems and storage policies][ref-mlp-storage].

## Getting started

docs/clusters/santis.md

Lines changed: 124 additions & 0 deletions
@@ -1,3 +1,127 @@
[](){#ref-cluster-santis}
# Santis

+Santis is an Alps cluster that provides GPU accelerators and file systems designed to meet the needs of climate and weather models for the [CWp][ref-platform-cwp].
+
+## Cluster specification
+
+### Compute nodes
+
+Santis consists of around ??? [Grace-Hopper nodes][ref-alps-gh200-node].
+The number of nodes can change when nodes are added or removed from other clusters on Alps.
+
+There are four login nodes, labelled `santis-ln00[1-4]`.
+You will be assigned to one of the four login nodes when you ssh onto the system, from where you can edit files, compile applications and start simulation jobs.
+
+| node type | number of nodes | total CPU sockets | total GPUs |
+| -- | -- | -- | -- |
+| [gh200][ref-alps-gh200-node] | 1,200 | 4,800 | 4,800 |
+
+### Storage and file systems
+
+Santis uses the [CWp filesystems and storage policies][ref-cwp-storage].
+
+## Getting started
+
+### Logging into Santis
+
+To connect to Santis via SSH, first refer to the [ssh guide][ref-ssh].
+
+!!! example "`~/.ssh/config`"
+    Add the following to your [SSH configuration][ref-ssh-config] to enable you to directly connect to santis using `ssh santis`.
+    ```
+    Host santis
+        HostName santis.alps.cscs.ch
+        ProxyJump ela
+        User cscsusername
+        IdentityFile ~/.ssh/cscs-key
+        IdentitiesOnly yes
+    ```

+### Software
+
+CSCS and the user community provide software environments, delivered as [uenv][ref-uenv], on Santis.
+
+Currently, the following uenv are provided for the climate and weather community:
+
+* `icon/25.1`
+* `climana/25.1`
+
+In addition to the climate and weather uenv, uenv provided for other Alps clusters can also be used:
+
+??? example "using uenv provided for other clusters"
+    You can run uenv that were built for other Alps clusters using the `@` notation.
+    For example, to use uenv images for [daint][ref-cluster-daint]:
+    ```bash
+    # list all images available for daint
+    uenv image find @daint
+
+    # download an image for daint
+    uenv image pull namd/3.0:v3@daint
+
+    # start the uenv
+    uenv start namd/3.0:v3@daint
+    ```
+
+It is also possible to use HPC containers on Santis:
+
+* Jobs using containers can be easily set up and submitted using the [container engine][ref-container-engine], as sketched below.
+* To build images, see the [guide to building container images on Alps][ref-build-containers].

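A minimal sketch of the container engine workflow, assuming an environment definition file (EDF) stored under `~/.edf/`; the image name and mount shown here are only illustrative, and the [container engine][ref-container-engine] page remains the authoritative reference for the EDF format:

```bash
# write a hypothetical EDF describing the container environment
mkdir -p ~/.edf
cat > ~/.edf/ubuntu.toml << 'EOF'
image = "ubuntu:24.04"                                      # example image only
mounts = ["/capstor/scratch/cscs:/capstor/scratch/cscs"]    # make scratch visible inside the container
EOF

# run a job step inside the container described by the EDF
srun --environment=ubuntu cat /etc/os-release
```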
+## Running jobs on Santis
+
+### SLURM
+
+Santis uses [SLURM][ref-slurm] as the workload manager, which is used to launch and monitor distributed workloads, such as simulation runs.
+
+There are three slurm partitions on the system:
+
+* the `normal` partition is for all production workloads.
+* the `debug` partition can be used to access a small allocation for up to 30 minutes for debugging and testing purposes.
+* the `xfer` partition is for [internal data transfer][ref-data-xfer-internal] at CSCS.
+
+| name | nodes | max nodes per job | time limit |
+| -- | -- | -- | -- |
+| `normal` | 1266 | - | 24 hours |
+| `debug` | 32 | 2 | 30 minutes |
+| `xfer` | 2 | 1 | 24 hours |
+
+* nodes in the `normal` and `debug` partitions are not shared
+* nodes in the `xfer` partition can be shared
+
+See the SLURM documentation for instructions on how to run jobs on the [Grace-Hopper nodes][ref-slurm-gh200].

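For illustration, a minimal job script targeting the `debug` partition could look like the sketch below; the project account is a placeholder, and the task layout simply assumes one rank per GH200 module (four per node):

```bash
#!/bin/bash
#SBATCH --job-name=debug-test
#SBATCH --partition=debug       # use "normal" for production runs
#SBATCH --nodes=2               # debug allows at most 2 nodes per job
#SBATCH --time=00:15:00
#SBATCH --account=<project>     # placeholder: replace with your project

# one rank per GH200 module, i.e. four ranks per node
srun --ntasks-per-node=4 hostname
```

Submit the script with `sbatch` and check its state with `squeue -u $USER`.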
+??? example "how to check the number of nodes on the system"
+    You can check the size of the system by running the following command in the terminal:
+    ```terminal
+    > sinfo --format "| %20R | %10D | %10s | %10l | %10A |"
+    | PARTITION | NODES | JOB_SIZE | TIMELIMIT | NODES(A/I) |
+    | debug | 32 | 1-2 | 30:00 | 3/29 |
+    | normal | 1266 | 1-infinite | 1-00:00:00 | 812/371 |
+    | xfer | 2 | 1 | 1-00:00:00 | 1/1 |
+    ```
+    The last column shows the number of nodes that have been allocated to currently running jobs (`A`) and the number of nodes that are idle (`I`).

+### FirecREST
+
+Santis can also be accessed using [FirecREST][ref-firecrest] at the `https://api.cscs.ch/ml/firecrest/v1` API endpoint.

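As a rough sketch, a request against this endpoint could look like the example below; it assumes you have already obtained an OIDC access token for your FirecREST client (token retrieval is not shown), and the systems status route is used purely for illustration:

```bash
# $TOKEN is assumed to hold a valid access token for your FirecREST client
FIRECREST_URL="https://api.cscs.ch/ml/firecrest/v1"

# query the status of the systems reachable through FirecREST
curl -s -H "Authorization: Bearer ${TOKEN}" \
  "${FIRECREST_URL}/status/systems" | python3 -m json.tool
```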
+## Maintenance and status
+
+### Scheduled maintenance
+
+Wednesday morning 8-12 CET is reserved for periodic updates, with services potentially unavailable during this timeframe. If the queues must be drained (redeployment of node images, rebooting of compute nodes, etc.) then a Slurm reservation will be in place that will prevent jobs from running into the maintenance window.
+
+Exceptional and non-disruptive updates may happen outside this time frame and will be announced to the users mailing list, and on the [CSCS status page](https://status.cscs.ch).
+
+### Change log
+
+!!! change "2025-03-05 container engine updated"
+    Now supports better containers that go faster. Users do not need to change their workflow to take advantage of these updates.
+
+??? change "2024-10-07 old event"
+    This is an old update. Use `???` to automatically fold the update.
+
+### Known issues


docs/platforms/cwp/index.md

Lines changed: 71 additions & 2 deletions
@@ -1,5 +1,74 @@
[](){#ref-platform-cwp}
-# Climate and Weather Platform
+# Climate and weather platform
+
+The Climate and Weather Platform (CWp) provides compute, storage and support to the climate and weather modeling community in Switzerland.
+
+## Getting started
+
+### Getting access
+
+Project administrators (PIs and deputy PIs) of projects on the CWp can invite users to join their project; users must join before they can use the project's resources on Alps.

!!! todo
-    follow the template of the [MLp][ref-platform-mlp]
+    This points to the Waldur solution - whether the [UMP][ref-account-ump] or [Waldur][ref-account-waldur] docs are linked depends on which is being used when these docs go live.
+
+This is performed using the [project management tool][ref-account-waldur].
+
+Once invited to a project, you will receive an email, which you can use to create an account and configure [multi-factor authentication][ref-mfa] (MFA).
+
+## Systems
+
+Santis is the system deployed on the Alps infrastructure for the climate and weather platform.
+Its name derives from Säntis, the highest mountain in the Alpstein massif of north-eastern Switzerland.
+
+<div class="grid cards" markdown>
+- :fontawesome-solid-mountain: [__Santis__][ref-cluster-santis]
+
+    Santis is a large [Grace-Hopper][ref-alps-gh200-node] cluster.
+</div>
+
+[](){#ref-cwp-storage}
+## File systems and storage
+
+There are three main file systems mounted on the CWp system Santis.
+
+| type | mount | filesystem |
+| -- | -- | -- |
+| Home | `/users/$USER` | [VAST][ref-alps-vast] |
+| Scratch | `/capstor/scratch/cscs/$USER` | [Capstor][ref-alps-capstor] |
+| Project | `/capstor/store/cscs/userlab/<project>` | [Capstor][ref-alps-capstor] |
+
+### Home
+
+Every user has a home path (`$HOME`) mounted at `/users/$USER` on the [VAST][ref-alps-vast] filesystem.
+The home directory has 50 GB of capacity, and is intended for configuration, small software packages and scripts.
+
+### Scratch
+
+The Scratch filesystem provides temporary storage for high-performance I/O for executing jobs.
+Use scratch to store datasets that will be accessed by jobs, and for job output.
+Scratch is per user - each user gets a separate scratch path and quota.
+
+!!! info
+    A quota of 150 TB and 1 million inodes (files and folders) is applied to your scratch path.
+
+    These are implemented as soft quotas: upon reaching either limit there is a grace period of 1 week before write access to `$SCRATCH` is blocked.
+
+    You can check your quota at any time from Ela or one of the login nodes, using the [`quota` command][ref-storage-quota].
+
+!!! info
+    The environment variable `SCRATCH=/capstor/scratch/cscs/$USER` is set automatically when you log into the system, and can be used as a shortcut to access scratch.
+
+!!! warning "scratch cleanup policy"
+    Files that have not been accessed in 30 days are automatically deleted.
+
+**Scratch is not intended for permanent storage**: transfer files back to the capstor project storage after job runs.

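As a hedged sketch of moving results off scratch, a simple copy job can be submitted to the `xfer` data-transfer partition described on the [Santis][ref-cluster-santis] page; the run and project directory names below are placeholders:

```bash
#!/bin/bash
#SBATCH --job-name=copy-results
#SBATCH --partition=xfer        # data-transfer partition; nodes may be shared
#SBATCH --nodes=1
#SBATCH --time=02:00:00

# copy job output from scratch to the backed-up project folder
# <project> is a placeholder for your project's directory name
rsync -av "${SCRATCH}/my-run/" "/capstor/store/cscs/userlab/<project>/my-run/"
```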
+### Project
+
+Project storage is backed up, with no cleaning policy: it provides intermediate storage space for datasets, shared code or configuration scripts that need to be accessed from different vClusters.
+Project is per project - each project gets a project folder with project-specific quota.
+
+* hard limits on capacity and inodes prevent users from writing to project if the quota is reached - you can check quota and available space by running the [`quota`][ref-storage-quota] command on a login node or Ela.
+* it is not recommended to write directly to the project path from jobs.


docs/platforms/mlp/index.md

Lines changed: 47 additions & 11 deletions
@@ -1,14 +1,9 @@
[](){#ref-platform-mlp}
-# Machine Learning Platform
+# Machine learning platform

-!!! todo
-    A description of the MLP
-
-    * who are the users (help answer the question "is this the platform that I am on")
-    * who are the partners (SwissAI, etc)
-    * how to get apply to access MLp (if that is a thing)
+The Machine Learning Platform (MLp) provides compute, storage and expertise to the machine learning and AI community in Switzerland, with the main user being the [Swiss AI Initiative](https://www.swiss-ai.org/).

-## Getting Started
+## Getting started

### Getting access

@@ -17,14 +12,14 @@ This is performed using the [project management tool][ref-account-waldur]

Once invited to a project, you will receive an email, which you can use to create an account and configure [multi-factor authentication][ref-mfa] (MFA).

-## vClusters
+## Systems

The main cluster provided by the MLp is Clariden, a large Grace-Hopper GPU system on Alps.

<div class="grid cards" markdown>
- :fontawesome-solid-mountain: [__Clariden__][ref-cluster-clariden]

-    Clariden is the main [Grace-Hopper][ref-alps-gh200-node] cluster used for **todo**
+    Clariden is the main [Grace-Hopper][ref-alps-gh200-node] cluster.
</div>

<div class="grid cards" markdown>
@@ -33,7 +28,48 @@ The main cluster provided by the MLp is Clariden, a large Grace-Hopper GPU syste
    Bristen is a smaller system with [A100 GPU nodes][ref-alps-a100-node] for **todo**
</div>

-## Guides and Tutorials
+[](){#ref-mlp-storage}
+## File systems and storage
+
+There are three main file systems mounted on the MLp clusters Clariden and Bristen.
+
+| type | mount | filesystem |
+| -- | -- | -- |
+| Home | `/users/$USER` | [VAST][ref-alps-vast] |
+| Scratch | `/iopstor/scratch/cscs/$USER` | [Iopstor][ref-alps-iopstor] |
+| Project | `/capstor/store/cscs/swissai/<project>` | [Capstor][ref-alps-capstor] |
+
+### Home
+
+Every user has a home path (`$HOME`) mounted at `/users/$USER` on the [VAST][ref-alps-vast] filesystem.
+The home directory has 50 GB of capacity, and is intended for configuration, small software packages and scripts.
+
+### Scratch
+
+Scratch filesystems provide temporary storage for high-performance I/O for executing jobs.
+Use scratch to store datasets that will be accessed by jobs, and for job output.
+Scratch is per user - each user gets a separate scratch path and quota.
+
+* The environment variable `SCRATCH=/iopstor/scratch/cscs/$USER` is set automatically when you log into the system, and can be used as a shortcut to access scratch.
+
+!!! warning "scratch cleanup policy"
+    Files that have not been accessed in 30 days are automatically deleted.
+
+**Scratch is not intended for permanent storage**: transfer files back to the capstor project storage after job runs.
+
+!!! note
+    There is an additional scratch path mounted on [Capstor][ref-alps-capstor] at `/capstor/scratch/cscs/$USER`, however this is not recommended for ML workloads for performance reasons.
+
+### Project
+
+Project storage is backed up, with no cleaning policy: it provides intermediate storage space for datasets, shared code or configuration scripts that need to be accessed from different vClusters.
+Project is per project - each project gets a project folder with project-specific quota.
+
+* if you need additional storage, ask your PI to contact the CSCS service managers Fawzi or Nicholas.
+* hard limits on capacity and inodes prevent users from writing to project if the quota is reached - you can check quota and available space by running the [`quota`][ref-storage-quota] command on a login node or Ela.
+* it is not recommended to write directly to the project path from jobs.
+
+## Guides and tutorials

!!! todo
    links to tutorials and guides for ML workloads

0 commit comments
