
Commit 994bf38

spell check; add placeholders for FAQ docs
1 parent 62e4681

1 file changed: +17 −15

docs/storage/filesystems.md

Lines changed: 17 additions & 15 deletions
````diff
@@ -1,9 +1,6 @@
 [](){#ref-storage-fs}
 # File Systems
 
-!!! todo
-    Spellcheck
-
 !!! note
     The different file systems provided on the Alps platforms and policies like quotas and backups are documented here.
     The file systems available on a [cluster][ref-alps-clusters] and the some policy details are determined by the [cluster][ref-alps-clusters]'s [platform][ref-alps-platforms].
````
````diff
@@ -41,7 +38,7 @@
 
 -   :fontawesome-solid-layer-group: __Quota__
 
-    Find out about limits to capacity and file counts, and how to your quota limits.
+    Find out about quota on capacity and file counts, and how to check your quota limits.
 
     [:octicons-arrow-right-24: Quota][ref-storage-quota]
 
````
````diff
@@ -127,13 +124,13 @@ Store is a large, medium-performance, storage on the [capstor][ref-alps-capstor]
 
 Space on Store is allocated per-project, with a path created for each project:
 
-* the capacity and inode limit is per-project, based on the initial resource request.
+* the capacity and inode limit is per-project, based on the initial resource request;
 * users have read and write access to the store paths for each project that they are a member of.
 
 !!! warning "Avoid using store for jobs"
     Store is tuned for storing results and shared datasets, specifically it has fewer meta data servers assigned to it.
 
-    Use the Scratch filesystems, which are tuned for fast parallel I/O, for storing input and output for jobs.
+    Use the Scratch file systems, which are tuned for fast parallel I/O, for storing input and output for jobs.
 
 !!! todo
     Low level information about `/capstor/store/cscs/<customer>/<group_id>` from [KB](https://confluence.cscs.ch/spaces/KB/pages/879142656/capstor+store) can be put into a folded admonition.
````
````diff
@@ -146,7 +143,7 @@ There is no [cleanup policy][ref-storage-cleanup] on store, and the contents of
 
 Space on Store is allocated per-project, with a path is created for each project:
 
-* the capacity and inode limit is per-project, based on the initial resource request.
+* the capacity and inode limit is per-project, based on the initial resource request;
 * users have read and write access to the store paths for each project that they are a member of.
 
 !!! info
````
````diff
@@ -172,7 +169,7 @@ Storage quota is a limit on available storage, that is applied to:
 Excessive inode usage can overwhelm the metadata services, causing degradation across the file system.
 
 !!! tip "Consider compressing paths to reduce inode usage"
-    Consider archiving folders that you are not actively using with the tar command to reduce used capacity and the the number of inodes.
+    Consider archiving folders that you are not actively using with the tar command to reduce used capacity and the number of inodes.
 
     Consider compressing directories full of many small input files as SquashFS images (see the following example of generating [SquashFS images][ref-guides-storage-venv] for an example) - which pack many files into a single file that can be mounted to access the contents efficiently.
 
````
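The tip edited in this hunk recommends `tar` for cutting inode counts; a minimal sketch of that workflow (the `old_results/` directory name is an invented example, not a CSCS path):

```shell
# Pack an inactive directory into one compressed archive so that
# ~101 inodes (the directory plus 100 files) become a single file.
mkdir -p old_results
for i in $(seq 1 100); do echo "sample $i" > "old_results/file_$i.txt"; done

# Create the archive
tar czf old_results.tar.gz old_results

# Verify the archive lists cleanly before deleting the original tree
tar tzf old_results.tar.gz > /dev/null && rm -rf old_results
```

The data can be unpacked again later with `tar xzf old_results.tar.gz`.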
````diff
@@ -188,7 +185,7 @@ There are two types of quota:
 
 [](){#ref-storage-quota-types}
 
-* **Soft quota** when exceeded there is a grace period for transfering or deleting files, before it will become a hard quota.
+* **Soft quota** when exceeded there is a grace period for transferring or deleting files, before it will become a hard quota.
 * **Hard quota** when exceeded no more files can be written.
 
 !!! todo
````
````diff
@@ -197,7 +194,7 @@ There are two types of quota:
 [](){#ref-storage-quota-cli}
 ### Checking quota
 
-You can check your storage quotas with the command quota on the front-end system ela (`ela.cscs.ch`) and the login nodes of [daint][ref-cluster-daint], [santis][ref-cluster-santis], [clariden][ref-cluster-clariden] and [eiger][ref-cluster-eiger].
+You can check your storage quotas with the command quota on the front-end system Ela (`ela.cscs.ch`) and the login nodes of [Daint][ref-cluster-daint], [Santis][ref-cluster-santis], [Clariden][ref-cluster-clariden] and [Eiger][ref-cluster-eiger].
 
 ```console
````
````diff
@@ -222,7 +219,7 @@ Usage data updated on: 2025-05-21 11:10:02
 +------------------------------------+--------+--------+------+---------+--------+------+-------------+----------+------+----------+-----------+------+-------------+
 ```
 
-The available capacity and used capacity is show for each filesystem that you have access to.
+The available capacity and used capacity is show for each file system that you have access to.
 If you are in multiple projects, information for the [store][ref-storage-store] path for each project that you are a member of will be shown.
 In the example above, the user is in two projects, namely `g33` and `csstaff`.
 
````
````diff
@@ -257,7 +254,7 @@ A snapshot is a full copy of a file system at a certain point in time, that can
 
 
 !!! note "Where are snapshots available?"
-    Currently, only the [home][ref-storage-home] filesystem provides snapshots, with snapshots of the last 7 days available in the path `$HOME/.snapshot`.
+    Currently, only the [home][ref-storage-home] file system provides snapshots, with snapshots of the last 7 days available in the path `$HOME/.snapshot`.
 
 ??? example "Accessing snapshots on home"
     The snapshots for [Home][ref-storage-home] are in the hidden `.snapshot` path in home (the path is not visible even to `ls -a`)
````
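The note in this hunk places snapshots under `$HOME/.snapshot`; a hedged sketch of browsing them (the snapshot directory naming varies by system, and the path exists only on file systems where snapshots are enabled):

```shell
# List the home snapshots, if this file system provides them
SNAPDIR="$HOME/.snapshot"
if [ -d "$SNAPDIR" ]; then
    ls "$SNAPDIR"    # typically one directory per snapshot
else
    echo "no .snapshot directory here"
fi
```

A lost file can then be copied back out of a snapshot, e.g. `cp "$SNAPDIR/<snapshot-name>/myfile" "$HOME/"` (the `<snapshot-name>` placeholder is illustrative).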
````diff
@@ -293,9 +290,9 @@ A daily process removes files that have not been **accessed (either read or writ
     2025-05-23 16:27:40.580767016 +0200
     ```
 
-In addition to the automatic deletion of old files, if occupancy exceeds 60% the following steps are taken to maintain performance of the filesystem:
+In addition to the automatic deletion of old files, if occupancy exceeds 60% the following steps are taken to maintain performance of the file system:
 
-* **Occupancy ≥ 60%**: CSCS will ask users to take immediate action to remove uneccesary data.
+* **Occupancy ≥ 60%**: CSCS will ask users to take immediate action to remove unnecessary data.
 * **Occupancy ≥ 80%**: CSCS will start manually removing files and folders without further notice.
 
 !!! info "How do I ensure that important data is not purged?"
````
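The cleanup criterion described here is the last access time, so it can be useful to inspect a file's atime before assuming it is at risk; a small illustration (assumes GNU coreutils `stat`; the file name is invented):

```shell
# Create a file and inspect its last access time, the attribute the
# daily cleanup process evaluates. Whether reads actually update atime
# depends on mount options such as relatime/noatime.
echo "results" > output.dat
cat output.dat > /dev/null
stat --format='%x' output.dat   # prints the last access timestamp
```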
````diff
@@ -309,4 +306,9 @@ In addition to the automatic deletion of old files, if occupancy exceeds 60% the
     When the [cleanup policy][ref-storage-cleanup] is applied on LUSTRE file systems, the files are removed, but the directories remain.
 
 !!! todo
-    review KB FAQ for storage questions
+    FAQ question: [why did I run out of space](https://confluence.cscs.ch/spaces/KB/pages/278036496/Why+did+I+run+out+of+space+on+HOME)
+
+!!! todo
+    FAQ question: [writing with specific group access](https://confluence.cscs.ch/spaces/KB/pages/276955350/Writing+on+project+if+you+belong+to+more+than+one+group)
+
````
