Commit 8cebb2b

@RMeli's review
1 parent bf02e58 commit 8cebb2b

File tree

2 files changed (+49, -42 lines)


docs/alps/storage.md

Lines changed: 9 additions & 4 deletions
@@ -72,12 +72,17 @@ Project paths are organised as follows:
 $ id $USER
 uid=22008(bobsmith) gid=32819(g152) groups=32819(g152),33119(g174),32336(vasp6)
 ```
-Here the user `bobsmith` is in three projects, with the project `g152` being their **primary project** (which can also be determined using the `id -gn $USER`).
+Here the user `bobsmith` is in three projects (`g152`, `g174` and `vasp6`), with the project `g152` being their **primary project**.
 
-* They are also in the `vasp6` group, which users who have been granted access to the [VASP][ref-uenv-vasp] application.
+!!! example "How do I find my primary project?"
+    In the terminal, use the following command to find your primary group:
+    ```console
+    $ id -gn $USER
+    g152
+    ```
 
-!!! info "The `$PROJECT` environment variable"
-    On some clusters, for example, [Eiger][ref-cluster-eiger] and [Eiger][ref-cluster-daint], the project folder for your primary project can be accessed using the `$PROJECT` environment variable.
+!!! info "The `$STORE` environment variable"
+    On some clusters, for example, [Eiger][ref-cluster-eiger] and [Daint][ref-cluster-daint], the project folder for your primary project can be accessed using the `$STORE` environment variable.
 
 [](){#ref-alps-iopsstor}
 ## Iopsstor
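The project lookup documented in the hunk above can be exercised with a short shell sketch. This is illustrative only: it queries the current user's groups on whatever machine it runs on; `bobsmith` and `g152` in the docs are example names.

```shell
# List every group (project) the current user belongs to, one per line.
id -Gn | tr ' ' '\n'

# The primary project is the user's primary group.
primary=$(id -gn)
echo "Primary project: ${primary}"
```

On the clusters, these group names also appear in the per-project Store paths, such as `/capstor/store/cscs/director2/g33` in the quota example further down this page.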

docs/storage/filesystems.md

Lines changed: 40 additions & 38 deletions
@@ -104,9 +104,11 @@ The [cleanup policy][ref-storage-cleanup] is enforced on Scratch, to ensure cont
 
 A [soft quota][ref-storage-quota-types] is enforced on the Scratch file system, with a grace period to allow data transfer.
 
-* 150 TB of disk space
-* 1 million inodes
-* grace period of two weeks
+Every user gets the following [quota][ref-storage-quota]:
+
+* 150 TB of disk space;
+* 1 million inodes;
+* and a soft quota grace period of two weeks.
 
 !!! important
     In order to prevent a degradation of the file system performance, please check your disk space and inode usage with the command [`quota`][ref-storage-quota-cli].
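As a quick sanity check against the soft quota above, the inode headroom can be computed with plain shell arithmetic. A sketch: the `used` value is the example Scratch file count taken from the `quota` table shown later on this page, not a live reading.

```shell
limit=1000000   # soft quota: 1 million inodes per user on Scratch
used=336479     # example file count from the documentation's quota table
pct=$(( used * 100 / limit ))
echo "inode usage: ${pct}% of the soft quota"
```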
@@ -122,10 +124,7 @@ Please ensure that you move important data to a file system with backups, for ex
 
 Store is a large, medium-performance storage on the [Capstor][ref-alps-capstor] Lustre file system for sharing data within a project, and for medium term data storage.
 
-Space on Store is allocated per-project, with a path created for each project:
-
-* the capacity and inode limit is per-project, based on the initial resource request;
-* users have read and write access to the Store paths for each project that they are a member of.
+Space on Store is allocated per-project, with a path created for each project.
 
 !!! info
     More information about how per-project paths are organised on Store is available in the [Capstor][ref-alps-capstor-store] documentation.
@@ -143,7 +142,7 @@ There is no [cleanup policy][ref-storage-cleanup] on Store, and the contents are
 
 Space on Store is allocated per-project, with a path created for each project:
 
-* the capacity and inode limit is per-project, based on the initial resource request;
+* the [quota][ref-storage-quota] limit is per-project, based on the initial resource request;
 * users have read and write access to the Store paths for each project that they are a member of.
 
 !!! info
@@ -194,39 +193,42 @@ There are two types of quota:
 [](){#ref-storage-quota-cli}
 ### Checking quota
 
-You can check your storage quotas with the command quota on the front-end system Ela (`ela.cscs.ch`) and the login nodes of [Daint][ref-cluster-daint], [Santis][ref-cluster-santis], [Clariden][ref-cluster-clariden] and [Eiger][ref-cluster-eiger].
-
-```console
-$ quota
-checking your quota
-
-Retrieving data ...
-
-User: user
-Usage data updated on: 2025-05-21 11:10:02
-+------------------------------------+--------+--------+------+---------+--------+------+-------------+----------+------+----------+-----------+------+-------------+
-| | User quota | Proj quota | User files | Proj files | |
-+------------------------------------+--------+--------+------+---------+--------+------+-------------+----------+------+----------+-----------+------+-------------+
-| Directory | FS | Used | % | Grace | Used | % | Quota limit | Used | % | Grace | Used | % | Files limit |
-+------------------------------------+--------+--------+------+---------+--------+------+-------------+----------+------+----------+-----------+------+-------------+
-| /iopsstor/scratch/cscs/user | lustre | 32.0G | - | - | - | - | - | 7746 | - | - | - | - | - |
-| /capstor/users/cscs/user | lustre | 3.2G | 6.4 | - | - | - | 50.0G | 14471 | 2.9 | - | - | - | 500000 |
-| /capstor/store/cscs/director2/g33 | lustre | 1.9T | 1.3 | - | - | - | 150.0T | 146254 | 14.6 | - | - | - | 1000000 |
-| /capstor/store/cscs/cscs/csstaff | lustre | 263.9T | 88.0 | - | - | - | 300.0T | 18216778 | 91.1 | - | - | - | 20000000 |
-| /capstor/scratch/cscs/user | lustre | 243.0G | 0.2 | - | - | - | 150.0T | 336479 | 33.6 | - | - | - | 1000000 |
-| /vast/users/cscs/user | vast | 11.7G | 23.3 | Unknown | - | - | 50.0G | 85014 | 17.0 | Unknown | - | - | 500000 |
-+------------------------------------+--------+--------+------+---------+--------+------+-------------+----------+------+----------+-----------+------+-------------+
-```
-
-The available capacity and used capacity is shown for each file system that you have access to.
+You can check your storage quotas with the command `quota` on the front-end system Ela (`ela.cscs.ch`) and the login nodes of [Daint][ref-cluster-daint], [Santis][ref-cluster-santis], [Clariden][ref-cluster-clariden] and [Eiger][ref-cluster-eiger].
+
+The tool shows available capacity and used capacity for each file system that you have access to.
 If you are in multiple projects, information for the [Store][ref-storage-store] path for each project that you are a member of will be shown.
-In the example above, the user is in two projects, namely `g33` and `csstaff`.
+
+??? example "Checking your quota on Ela"
+    ```console
+    $ quota
+    checking your quota
+
+    Retrieving data ...
+
+    User: user
+    Usage data updated on: 2025-05-21 11:10:02
+    +------------------------------------+--------+--------+------+---------+--------+------+-------------+----------+------+----------+-----------+------+-------------+
+    | | User quota | Proj quota | User files | Proj files | |
+    +------------------------------------+--------+--------+------+---------+--------+------+-------------+----------+------+----------+-----------+------+-------------+
+    | Directory | FS | Used | % | Grace | Used | % | Quota limit | Used | % | Grace | Used | % | Files limit |
+    +------------------------------------+--------+--------+------+---------+--------+------+-------------+----------+------+----------+-----------+------+-------------+
+    | /iopsstor/scratch/cscs/user | lustre | 32.0G | - | - | - | - | - | 7746 | - | - | - | - | - |
+    | /capstor/users/cscs/user | lustre | 3.2G | 6.4 | - | - | - | 50.0G | 14471 | 2.9 | - | - | - | 500000 |
+    | /capstor/store/cscs/director2/g33 | lustre | 1.9T | 1.3 | - | - | - | 150.0T | 146254 | 14.6 | - | - | - | 1000000 |
+    | /capstor/store/cscs/cscs/csstaff | lustre | 263.9T | 88.0 | - | - | - | 300.0T | 18216778 | 91.1 | - | - | - | 20000000 |
+    | /capstor/scratch/cscs/user | lustre | 243.0G | 0.2 | - | - | - | 150.0T | 336479 | 33.6 | - | - | - | 1000000 |
+    | /vast/users/cscs/user | vast | 11.7G | 23.3 | Unknown | - | - | 50.0G | 85014 | 17.0 | Unknown | - | - | 500000 |
+    +------------------------------------+--------+--------+------+---------+--------+------+-------------+----------+------+----------+-----------+------+-------------+
+    ```
+
+    Here the user is in two projects, namely `g33` and `csstaff`, for which the quotas for their respective paths in `/capstor/store` are reported.
 
 [](){#ref-storage-backup}
 ## Backup
 
-There are two methods for retaining backup copies of data on CSCS file systems -- [backups][ref-storage-backups] and [snapshots][ref-storage-backups] -- documented below.
+There are two methods for retaining backup copies of data on CSCS file systems, namely [backups][ref-storage-backups] and [snapshots][ref-storage-backups].
 
 [](){#ref-storage-backups}
 ### Backups
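Because the `quota` table shown in the hunk above is plain text, per-path figures can be pulled out with standard tools. The sketch below copies two sample rows from the example output into a file and extracts the directory and used-space columns; the parsing is illustrative and not part of the CSCS tooling.

```shell
# Two sample rows, taken from the example `quota` output in the docs.
cat > /tmp/quota_sample.txt <<'EOF'
| /capstor/users/cscs/user | lustre | 3.2G | 6.4 | - | - | - | 50.0G | 14471 | 2.9 | - | - | - | 500000 |
| /capstor/scratch/cscs/user | lustre | 243.0G | 0.2 | - | - | - | 150.0T | 336479 | 33.6 | - | - | - | 1000000 |
EOF

# Field 2 is the directory and field 4 the user's used space.
awk -F'|' '/^\| \// { gsub(/ /, "", $2); gsub(/ /, "", $4); print $2, $4 }' /tmp/quota_sample.txt
```

This prints one `path used` pair per data row, skipping the header and separator lines.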
@@ -295,12 +297,12 @@ In addition to the automatic deletion of old files, if occupancy exceeds 60% the
 * **Occupancy ≥ 60%**: CSCS will ask users to take immediate action to remove unnecessary data.
 * **Occupancy ≥ 80%**: CSCS will start manually removing files and folders without further notice.
 
-!!! info "How do I ensure that important data is not purged?"
+!!! info "How do I ensure that important data is not cleaned up?"
     File systems with cleanup, namely [Scratch][ref-storage-scratch], are not intended for long term storage.
     Copy the data to a file system designed for file storage that does not have a cleanup policy, for example [Store][ref-storage-store].
 
 [](){#ref-storage-troubleshooting}
-## Common Questions
+## Frequently asked questions
 
 ??? question "My files are gone, but the directories are still there"
     When the [cleanup policy][ref-storage-cleanup] is applied on LUSTRE file systems, the files are removed, but the directories remain.
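The advice in the final hunk, copying important data off Scratch before the cleanup policy removes it, amounts to a plain copy. A minimal sketch follows, with temporary directories standing in for the real paths (on the clusters you would use your actual `/capstor/scratch/...` path and your project path on Store, and typically a tool such as `rsync` for large transfers):

```shell
# Stand-ins for the real Scratch and Store paths (placeholders only).
scratch_dir=$(mktemp -d)
store_dir=$(mktemp -d)

# A result file that would otherwise be subject to the cleanup policy.
echo "important results" > "${scratch_dir}/result.dat"

# Copy it, preserving attributes, to the cleanup-free file system.
cp -a "${scratch_dir}/." "${store_dir}/"
ls "${store_dir}"
```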
