
Commit cb03664

Storage docs (#121)

* update the storage/filesystem docs to cover policies (backup, snapshots, cleanup, etc.) and the different file system types (store, scratch, home)
* small tweaks to the alps/storage docs
* link these to the rest of the documentation (e.g. to the platform docs)

1 parent: 053cc38

4 files changed: +324 -118 lines changed


.github/CODEOWNERS

Lines changed: 2 additions & 0 deletions

@@ -7,3 +7,5 @@ docs/software/prgenv/linalg.md @finkandreas @msimberg
 docs/software/sciapps/cp2k.md @abussy @RMeli
 docs/software/sciapps/gromacs.md @kanduri
 docs/software/ml @boeschf
+docs/storage @mpasserini
+docs/alps/storage.md @mpasserini

docs/alps/storage.md

Lines changed: 36 additions & 11 deletions

@@ -1,13 +1,15 @@
 [](){#ref-alps-storage}
 # Alps Storage
 
+!!! under-construction
+
 Alps has different storage attached, each with characteristics suited to different workloads and use cases.
 HPC storage is managed in a separate cluster of nodes that host servers that manage the storage and the physical storage drives.
-These separate clusters are on the same Slingshot 11 network as the Alps.
+These separate storage clusters are on the same Slingshot 11 network as Alps.
 
-|              | Capstor                | Iopsstor               | Vast                |
+|              | Capstor                | Iopsstor               | VAST                |
 |--------------|------------------------|------------------------|---------------------|
-| Model        | HPE ClusterStor E1000D | HPE ClusterStor E1000F | Vast                |
+| Model        | HPE ClusterStor E1000D | HPE ClusterStor E1000F | VAST                |
 | Type         | Lustre                 | Lustre                 | NFS                 |
 | Capacity     | 129 PB raw GridRAID    | 7.2 PB raw RAID 10     | 1 PB                |
 | Number of Drives | 8,480 16 TB HDD    | 240 * 30 TB NVMe SSD   | N/A                 |
@@ -16,25 +18,48 @@ These separate clusters are on the same Slingshot 11 network as the Alps.
 | IOPs         | 1.5M                   | 8.6M read, 24M write   | 200k read, 768k write |
 | file create/s| 374k                   | 214k                   | 97k                 |
 
+
+!!! todo
+    Information about Lustre. Meta data servers, etc.
+
+    * how many meta data servers on Capstor and Iopsstor
+    * how these are distributed between store/scratch
+
+    Also discuss how Capstor and iopstor are used to provide both scratch / store / other file systems
+
+The mounts, and how they are used for Scratch, Store, and Home file systems that are mounted on clusters are documented in the [file system docs][ref-storage-fs].
+
 [](){#ref-alps-capstor}
-## capstor
+## Capstor
 
 Capstor is the largest file system, for storing large amounts of input and output data.
-It is used to provide SCRATCH and STORE for different clusters - the precise details are platform-specific.
+It is used to provide [scratch][ref-storage-scratch] and [store][ref-storage-store].
+
+!!! todo "add information about meta data services, and their distribution over scratch and store"
+
+[](){#ref-alps-capstor-scratch}
+### Scratch
+
+All users on Alps get their own scratch path on Alps, `/capstor/scratch/cscs/$USER`.
+
+[](){#ref-alps-capstor-store}
+### Store
+
+The [Store][ref-storage-store] mount point on Capstor provides stable storage with [backups][ref-storage-backups] and no [cleaning policy][ref-storage-cleanup].
+It is mounted on clusters at the `/capstor/store` mount point, with folders created for each project.
 
 [](){#ref-alps-iopsstor}
-## iopsstor
+## Iopsstor
 
 !!! todo
-    small text explaining what iopsstor is designed to be used for.
+    small text explaining what Iopsstor is designed to be used for.
 
 [](){#ref-alps-vast}
-## vast
+## VAST
 
-The Vast storage is smaller capacity system that is designed for use as home folders.
+The VAST storage is smaller capacity system that is designed for use as [Home][ref-storage-home] folders.
 
 !!! todo
-    small text explaining what iopsstor is designed to be used for.
+    small text explaining what Iopsstor is designed to be used for.
 
-The mounts, and how they are used for SCRATCH, STORE, PROJECT, HOME would be in the [storage docs][ref-storage-fs]
 
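For orientation only (not part of the commit), here is a minimal sketch of how a user-side script might resolve the two Capstor paths introduced in the diff above. It assumes the `/capstor/scratch/cscs/$USER` layout and the `/capstor/store` mount point named in the diff; the `PROJECT` environment variable and the per-project folder naming are hypothetical illustrations, not documented behaviour.

```python
import os
from pathlib import Path

# Per-user scratch path as described in the diff: /capstor/scratch/cscs/$USER
user = os.environ.get("USER", "unknown")
scratch = Path("/capstor/scratch/cscs") / user

# Store is mounted at /capstor/store with folders created per project; the
# PROJECT variable and folder naming below are assumptions for illustration.
project = os.environ.get("PROJECT", "example_project")
store = Path("/capstor/store") / project

print(f"scratch (subject to the cleanup policy): {scratch}")
print(f"store (backed up, no cleanup policy):    {store}")
```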

docs/services/jupyterlab.md

Lines changed: 1 addition & 1 deletion

@@ -9,7 +9,7 @@ The service is accessed at [jupyter-daint.cscs.ch](https://jupyter-daint.cscs.c
 
 Once logged in, you will be redirected to the JupyterHub Spawner Options form, where typical job configuration options can be selected in order to allocate resources. These options might include the type and number of compute nodes, the wall time limit, and your project account.
 
-Single-node notebooks are launched in a dedicated queue, minimizing queueing time. For these notebooks, servers should be up and running within a few minutes. The maximum waiting time for a server to be running is 5 minutes, after which the job will be cancelled and you will be redirected back to the spawner options page. If your single-node server is not spawned within 5 minutes we encourage you to [contact us](ref-get-in-touch).
+Single-node notebooks are launched in a dedicated queue, minimizing queueing time. For these notebooks, servers should be up and running within a few minutes. The maximum waiting time for a server to be running is 5 minutes, after which the job will be cancelled and you will be redirected back to the spawner options page. If your single-node server is not spawned within 5 minutes we encourage you to [contact us][ref-get-in-touch].
 
 When resources are granted the page redirects to the JupyterLab session, where you can browse, open and execute notebooks on the compute nodes. A new notebook with a Python 3 kernel can be created with the menu `new` and then `Python 3` . Under `new` it is also possible to create new text files and folders, as well as to open a terminal session on the allocated compute node.
 
