File: docs/alps/storage.md (2 additions, 0 deletions)

@@ -19,6 +19,8 @@
Capstor and Iopsstor are on the same Slingshot network as Alps, while VAST is on the CSCS Ethernet network.
See the [Lustre guide][ref-guides-storage-lustre] for some hints on how to get the best performance out of the filesystem.
The mounts, and how they are used for the Scratch, Store, and Home file systems on the clusters, are documented in the [file system docs][ref-storage-fs].
As shown in the schema above, Lustre uses *metadata* servers to store and query metadata, which is basically what is shown by `ls`: directory structure, file permissions, and modification dates.
Its performance is roughly the same on [Capstor][ref-alps-capstor] and [Iopsstor][ref-alps-iopsstor].
This data is globally synchronized, which means Lustre is not well suited to handling many small files; see the discussion on [how to handle many small files][ref-guides-storage-small-files].
The data itself is subdivided into blocks of size `<blocksize>` and is stored by Object Storage Servers (OSS) on one or more Object Storage Targets (OST).
The blocksize and the number of OSTs to use are defined by the striping settings, which are applied to a path; new files and directories inherit them from their parent directory.
The `lfs getstripe <path>` command can be used to get information on the stripe settings of a path.
For directories and empty files, `lfs setstripe --stripe-count <count> --stripe-size <size> <directory/file>` can be used to set the layout.
The simplest way to give a file the correct layout is to copy it into a directory with the correct layout.
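
For example, a minimal sketch of inspecting and setting a layout (the `$SCRATCH/mydata` path is illustrative):

```bash
# Show the current stripe settings of a path
lfs getstripe $SCRATCH/mydata

# Stripe new files in this directory over 4 OSTs with a 4 MB stripe size;
# existing files keep their old layout
lfs setstripe --stripe-count 4 --stripe-size 4M $SCRATCH/mydata

# Give an existing file the new layout by copying it into the directory
cp bigfile.dat $SCRATCH/mydata/
```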
!!! tip "A blocksize of 4MB gives good throughput, without being overly big..."
    ... so it is a good choice when reading a file sequentially or in large chunks. If you read shorter chunks in random order, it might be better to reduce the stripe size: the raw throughput will be lower, but the performance of your application might actually increase.
See the [Lustre documentation](https://doc.lustre.org/lustre_manual.xhtml#managingstripingfreespace) for more information.
Lustre also supports composite layouts, switching from one layout to another at a given size, set with `--component-end` (`-E`).
This makes it possible to create a progressive file layout (PFL) that switches `--stripe-count` (`-c`) and `--stripe-size` (`-S`), so that fewer locks are required for smaller files while the load is distributed across OSTs for larger files.
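
For example, a sketch of a PFL that keeps small files on a single OST and spreads larger ones out (the path and component boundaries are illustrative):

```bash
# First 4 MB on one OST, the next up to 64 MB over 4 OSTs,
# and everything beyond that over all available OSTs (-c -1)
lfs setstripe -E 4M -c 1 -E 64M -c 4 -E -1 -c -1 $SCRATCH/mydata
```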
[Iopsstor][ref-alps-iopsstor] uses SSDs as OSTs, so random access is fast and the performance of a single OST is high.
[Capstor][ref-alps-capstor], on the other hand, uses hard disks; it has a larger capacity, and it also has many more OSSs, so the total bandwidth is higher.
See for example the [ML filesystem guide][ref-mlp-storage-suitability].
[](){#ref-guides-storage-small-files}
## Many small files vs. HPC File Systems
Workloads that read or create many small files are not well-suited to parallel file systems, which are designed for parallel and distributed I/O.
In some cases, if enough memory is available, it might be worth unpacking or repacking the small files into an in-memory filesystem like `/dev/shm/$USER` or `/tmp`, which is *much* faster, or using a squashfs filesystem that is stored as a single large file on Lustre.
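
For example, a minimal sketch (archive and directory names are illustrative):

```bash
# Unpack a dataset of many small files into the in-memory filesystem
mkdir -p /dev/shm/$USER
tar xf dataset.tar -C /dev/shm/$USER

# ...or pack the files into a single squashfs image stored on Lustre
mksquashfs dataset/ $SCRATCH/dataset.squashfs
```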
Workloads that do not play nicely with Lustre include:
File: docs/platforms/mlp/index.md (3 additions, 1 deletion)

@@ -52,17 +52,19 @@

Use scratch to store datasets that will be accessed by jobs, and for job output.
Scratch is per user: each user gets a separate scratch path and quota.
* The environment variable `SCRATCH=/iopsstor/scratch/cscs/$USER` is set automatically when you log into the system, and can be used as a shortcut to access scratch.
* There is an additional scratch path mounted on [Capstor][ref-alps-capstor] at `/capstor/scratch/cscs/$USER`.
!!! warning "scratch cleanup policy"
    Files that have not been accessed in 30 days are automatically deleted.
**Scratch is not intended for permanent storage**: transfer files back to the Capstor project storage after your job runs.
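
For example, a minimal sketch (the Store path is illustrative; use your own project's path):

```bash
# Copy results from scratch back to the project's Store space
rsync -av $SCRATCH/results/ /capstor/store/cscs/<customer>/<project>/results/
```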
[](){#ref-mlp-storage-suitability}
!!! note "file system suitability"
    The Capstor scratch filesystem is based on HDDs and is optimized for large, sequential read and write operations.
    We recommend using Capstor for storing **checkpoint files** and other **large, contiguous outputs** generated by your training runs.
    In contrast, Iopsstor uses high-performance NVMe drives, which excel at handling **IOPS-intensive workloads** involving frequent, random access. This makes it a better choice for storing **training datasets**, especially when accessed randomly during machine learning training.
See the [Lustre guide][ref-guides-storage-lustre] for some hints on how to get the best performance out of the filesystem.
File: docs/storage/filesystems.md (2 additions, 0 deletions)

@@ -84,6 +84,7 @@
## Scratch
The Scratch file system is a fast workspace tuned for use by parallel jobs, with an emphasis on performance over reliability, hosted on the [Capstor][ref-alps-capstor] Lustre filesystem.
See the [Lustre guide][ref-guides-storage-lustre] for some hints on how to get the best performance out of the filesystem.
All users on Alps get their own Scratch path, `/capstor/scratch/cscs/$USER`, which is pointed to by the variable `$SCRATCH` on the [HPC Platform][ref-platform-hpcp] and [Climate and Weather Platform][ref-platform-cwp] clusters Eiger, Daint and Santis.
@@ -123,6 +124,7 @@
## Store
Store is a large, medium-performance storage space on the [Capstor][ref-alps-capstor] Lustre file system, used for sharing data within a project and for medium-term data storage.
See the [Lustre guide][ref-guides-storage-lustre] for some hints on how to get the best performance out of the filesystem.
Space on Store is allocated per-project, with a path created for each project.
To accommodate the different customers and projects on Alps, the project paths are organised as follows: