docs/storage/filesystems.md
[](){#ref-storage-fs}
# File Systems

!!! note
    The different file systems provided on the Alps platforms, and policies such as quotas and backups, are documented here.
    The file systems available on a [cluster][ref-alps-clusters], and some policy details, are determined by the cluster's [platform][ref-alps-platforms].
- :fontawesome-solid-layer-group: __Quota__

    Find out about quotas on capacity and file counts, and how to check your quota limits.
Space on Store is allocated per-project, with a path created for each project:

* the capacity and inode limits are per-project, based on the initial resource request;
* users have read and write access to the store paths for each project that they are a member of.

!!! warning "Avoid using store for jobs"
    Store is tuned for storing results and shared datasets; specifically, it has fewer metadata servers assigned to it.
    Use the Scratch file systems, which are tuned for fast parallel I/O, for storing input and output for jobs.

!!! todo
    Low level information about `/capstor/store/cscs/<customer>/<group_id>` from [KB](https://confluence.cscs.ch/spaces/KB/pages/879142656/capstor+store) can be put into a folded admonition.
Space on Store is allocated per-project, with a path created for each project:

* the capacity and inode limits are per-project, based on the initial resource request;
* users have read and write access to the store paths for each project that they are a member of.

!!! info
Excessive inode usage can overwhelm the metadata services, causing degradation across the file system.

!!! tip "Consider compressing paths to reduce inode usage"
    Consider archiving folders that you are not actively using with the `tar` command, to reduce both used capacity and the number of inodes.

    Consider compressing directories full of many small input files into SquashFS images (see the example of generating [SquashFS images][ref-guides-storage-venv]), which pack many files into a single file that can be mounted to access the contents efficiently.
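The archiving tip can be sketched in shell; the folder name and file count below are hypothetical stand-ins for an inactive results directory:

```shell
# Hypothetical example: pack an inactive results folder into one archive,
# turning many inodes (101 here: 1 directory + 100 files) into a single file.
mkdir -p results_2024
for i in $(seq 1 100); do touch "results_2024/run_$i.dat"; done  # stand-in data
tar czf results_2024.tar.gz results_2024   # one file replaces 101 inodes
rm -rf results_2024                        # free the original inodes
tar tzf results_2024.tar.gz | head -3      # verify contents without extracting
```

The original tree can be restored later with `tar xzf results_2024.tar.gz`.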
There are two types of quota:

[](){#ref-storage-quota-types}

* **Soft quota**: when exceeded, there is a grace period for transferring or deleting files before it becomes a hard quota.
* **Hard quota**: when exceeded, no more files can be written.

!!! todo
[](){#ref-storage-quota-cli}
### Checking quota

You can check your storage quotas with the `quota` command on the front-end system Ela (`ela.cscs.ch`) and on the login nodes of [Daint][ref-cluster-daint], [Santis][ref-cluster-santis], [Clariden][ref-cluster-clariden] and [Eiger][ref-cluster-eiger].
The available capacity and used capacity are shown for each file system that you have access to.
If you are in multiple projects, information for the [store][ref-storage-store] path for each project that you are a member of will be shown.
In the example above, the user is in two projects, namely `g33` and `csstaff`.
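Alongside the CSCS `quota` command, the standard `df` utility gives a rough view of the same two limits for the file system holding a given path (a generic sketch, not the CSCS-specific output):

```shell
# Capacity and inode usage for the file system that holds a given path.
df -h "$HOME"   # used and available capacity
df -i "$HOME"   # used and available inodes (i.e. file counts)
```

Note that `df` reports whole-file-system figures, not the per-user or per-project quota accounting that `quota` shows.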
!!! note "Where are snapshots available?"
    Currently, only the [home][ref-storage-home] file system provides snapshots, with snapshots of the last 7 days available in the path `$HOME/.snapshot`.

??? example "Accessing snapshots on home"
    The snapshots for [Home][ref-storage-home] are in the hidden `.snapshot` path in home (the path is not visible even to `ls -a`)
2025-05-23 16:27:40.580767016 +0200
```

In addition to the automatic deletion of old files, if occupancy exceeds 60%, the following steps are taken to maintain performance of the file system:

* **Occupancy ≥ 60%**: CSCS will ask users to take immediate action to remove unnecessary data.
* **Occupancy ≥ 80%**: CSCS will start manually removing files and folders without further notice.
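Since the automatic cleanup is based on access time, `find` can preview which of your files might be candidates. This is a hedged sketch: the 30-day window is illustrative, not the actual CSCS policy, and the demo directory stands in for a real scratch path:

```shell
# Demo directory stands in for a scratch path; $SCRATCH is used if set.
SCRATCH_DIR="${SCRATCH:-$PWD/scratch-demo}"
mkdir -p "$SCRATCH_DIR"
touch -a -d "2020-01-01" "$SCRATCH_DIR/old_result.dat"  # stale access time
touch "$SCRATCH_DIR/fresh_input.dat"                    # accessed just now
find "$SCRATCH_DIR" -type f -atime +30 -print           # lists only the stale file
```

`-atime +30` matches files whose last access is more than 30 days ago; only `old_result.dat` is listed.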
!!! info "How do I ensure that important data is not purged?"
When the [cleanup policy][ref-storage-cleanup] is applied on Lustre file systems, the files are removed, but the directories remain.

!!! todo
    FAQ question: [why did I run out of space](https://confluence.cscs.ch/spaces/KB/pages/278036496/Why+did+I+run+out+of+space+on+HOME)

!!! todo
    FAQ question: [writing with specific group access](https://confluence.cscs.ch/spaces/KB/pages/276955350/Writing+on+project+if+you+belong+to+more+than+one+group)