Lustre uses *metadata* servers to store and query metadata, which is essentially what `ls` shows: the directory structure, file permissions, modification dates, and so on.
This data is globally synchronized, which means that handling many small files is not what Lustre is best suited for, and the performance of that part is similar on both Capstor and Iopsstor. The section below discusses [how to handle many small files][ref-guides-storage-small-files].
The data itself is subdivided into blocks of size `<blocksize>` and is stored by Object Storage Servers (OSS) on one or more Object Storage Targets (OSTs).
The block size and the number of OSTs to use are defined by the striping settings. A new file or directory inherits them from its parent directory. The `lfs getstripe <path>` command can be used to inspect the current stripe settings. For directories and empty files, `lfs setstripe --stripe-count <count> --stripe-size <size> <directory/file>` can be used to set the layout. The simplest way to give a file the correct layout is to copy it into a directory that already has that layout.
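
For example, a minimal sketch of how this could look on Scratch (the directory and file names are only placeholders; pick stripe counts and sizes that match your workload):

```bash
# Inspect the current layout of a directory (or file)
lfs getstripe /capstor/scratch/cscs/$USER/mydata

# Stripe new files in this directory over 4 OSTs with a 4 MB stripe size;
# files created (or copied) into it afterwards inherit this layout.
lfs setstripe --stripe-count 4 --stripe-size 4M /capstor/scratch/cscs/$USER/mydata

# Copying a file into the directory creates it with the new layout
cp results.dat /capstor/scratch/cscs/$USER/mydata/
```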
A block size of 4 MB gives good throughput without being overly large, so it is a good choice when reading a file sequentially or in large chunks. If you read shorter chunks in random order, it can be better to reduce the stripe size: the raw throughput will be lower, but the performance of your application might actually increase.
Lustre also supports composite layouts, which switch from one layout to another at a given offset, set with `--component-end` (`-E`).
With this it is possible to create a progressive file layout that changes `--stripe-count` (`-c`) and `--stripe-size` (`-S`) with the file size, so that fewer locks are required for small files while the load is distributed across OSTs for larger files.
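
As an illustration, a sketch of such a progressive file layout (the component boundaries, counts, and sizes are only examples, not a recommendation):

```bash
# First 4 MB on a single OST (cheap for small files);
# everything beyond that (-E -1 means "up to end of file")
# striped over 8 OSTs with a 4 MB stripe size.
lfs setstripe \
  -E 4M -c 1 -S 4M \
  -E -1 -c 8 -S 4M \
  /capstor/scratch/cscs/$USER/pfl_dir
```
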
[Iopsstor][ref-alps-iopsstor] uses SSDs as OSTs, so random access is fast and the performance of a single OST is high. [Capstor][ref-alps-capstor], on the other hand, uses hard disks; it has a larger capacity, and it also has many more OSSs, so the total bandwidth is larger.
!!! Note
    ML model training normally performs better when reading from Iopsstor (random access, hard-to-predict access pattern), while checkpoints can be written to Capstor (very good for contiguous access).
[](){#ref-guides-storage-small-files}
## Many small files vs. HPC File Systems
Workloads that read or create many small files are not well-suited to parallel file systems, which are designed for parallel and distributed I/O.
In some cases, and if enough memory is available, it can be worth unpacking/repacking the small files into a local in-memory filesystem such as `/dev/shm/$USER` or `/tmp`, which are *much* faster, or using a squashfs filesystem that is stored as a single large file on Lustre, as sketched below.
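
A minimal sketch of both approaches, assuming `tar`, `mksquashfs` (squashfs-tools), and `squashfuse` are available on the node, and using placeholder file names:

```bash
# Unpack a dataset of many small files into node-local memory
# (fast, but it uses the node's RAM and disappears when the job ends)
mkdir -p /dev/shm/$USER
tar xf /capstor/scratch/cscs/$USER/dataset.tar -C /dev/shm/$USER

# Or pack the small files into a single squashfs image stored on Lustre,
# then mount it read-only without root privileges using squashfuse
mksquashfs dataset/ /capstor/scratch/cscs/$USER/dataset.squashfs
mkdir -p /tmp/$USER/dataset
squashfuse /capstor/scratch/cscs/$USER/dataset.squashfs /tmp/$USER/dataset
```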
Workloads that do not play nicely with Lustre include:

`docs/platforms/mlp/index.md`:

@@ -63,6 +63,7 @@ Scratch is per user - each user gets separate scratch path and quota.
The Capstor scratch filesystem is based on HDDs and is optimized for large, sequential read and write operations.
We recommend using Capstor for storing **checkpoint files** and other **large, contiguous outputs** generated by your training runs.
In contrast, Iopsstor uses high-performance NVMe drives, which excel at handling **IOPS-intensive workloads** involving frequent, random access. This makes it a better choice for storing **training datasets**, especially when accessed randomly during machine learning training.
See the [Lustre guide][ref-guides-storage-lustre] for some hints on how to get the best performance out of the filesystem.

`docs/storage/filesystems.md`:

@@ -84,6 +84,7 @@ Daily [snapshots][ref-storage-snapshots] for the last seven days are provided in
## Scratch
The Scratch file system is a fast workspace tuned for use by parallel jobs, with an emphasis on performance over reliability, hosted on the [Capstor][ref-alps-capstor] Lustre filesystem.
See the [Lustre guide][ref-guides-storage-lustre] for some hints on how to get the best performance out of the filesystem.
All users on Alps get their own Scratch path, `/capstor/scratch/cscs/$USER`, which is pointed to by the variable `$SCRATCH` on the [HPC Platform][ref-platform-hpcp] and [Climate and Weather Platform][ref-platform-cwp] clusters Eiger, Daint and Santis.
@@ -123,6 +124,7 @@ Please ensure that you move important data to a file system with backups, for ex
## Store
Store is a large, medium-performance, storage on the [Capstor][ref-alps-capstor] Lustre file system for sharing data within a project, and for medium term data storage.
See the [Lustre guide][ref-guides-storage-lustre] for some hints on how to get the best performance out of the filesystem.
Space on Store is allocated per-project, with a path created for each project.
To accommodate the different customers and projects on Alps, the project paths are organised as follows: