Here the user `bobsmith` is in three projects (`g152`, `g174` and `vasp6`), with the project `g152` being their **primary project**.

!!! example "How do I find my primary project?"

    In the terminal, use the following command to find your primary group:

    ```console
    $ id -gn $USER
    g152
    ```

!!! info "The `$STORE` environment variable"

    On some clusters, for example, [Eiger][ref-cluster-eiger] and [Daint][ref-cluster-daint], the project folder for your primary project can be accessed using the `$STORE` environment variable.
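Relatedly, the full list of a user's groups (and hence projects) can be shown with `id -Gn`. A minimal sketch for the example user above, whose three groups are shown in the output:

```console
$ id -Gn $USER
g152 g174 vasp6
```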
A [soft quota][ref-storage-quota-types] is enforced on the Scratch file system, with a grace period to allow data transfer.

Every user gets the following [quota][ref-storage-quota]:

* 150 TB of disk space;
* 1 million inodes;
* and a soft quota grace period of two weeks.

!!! important
    In order to prevent a degradation of the file system performance, please check your disk space and inode usage with the command [`quota`][ref-storage-quota-cli].
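On Lustre file systems, usage can also be inspected with the standard Lustre `lfs quota` tool. A sketch, assuming the Scratch mount point is under `/capstor/scratch` (the exact path may differ per cluster):

```console
$ lfs quota -h -u $USER /capstor/scratch
```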
Store is a large, medium-performance storage area on the [Capstor][ref-alps-capstor] Lustre file system for sharing data within a project, and for medium-term data storage.

Space on Store is allocated per-project, with a path created for each project.

!!! info
    More information about how per-project paths are organised on Store is available in the [Capstor][ref-alps-capstor-store] documentation.
Space on Store is allocated per-project, with a path created for each project:

* the [quota][ref-storage-quota] limit is per-project, based on the initial resource request;
* users have read and write access to the Store paths for each project that they are a member of.

!!! info
[](){#ref-storage-quota-cli}
### Checking quota

You can check your storage quotas with the command `quota` on the front-end system Ela (`ela.cscs.ch`) and the login nodes of [Daint][ref-cluster-daint], [Santis][ref-cluster-santis], [Clariden][ref-cluster-clariden] and [Eiger][ref-cluster-eiger].

The tool shows the available capacity and used capacity for each file system that you have access to.
If you are in multiple projects, information for the [Store][ref-storage-store] path for each project that you are a member of will be shown.
Here the user is in two projects, namely `g33` and `csstaff`, for which the quotas for their respective paths in `/capstor/store` are reported.

[](){#ref-storage-backup}
## Backup

There are two methods for retaining backup copies of data on CSCS file systems, namely [backups][ref-storage-backups] and [snapshots][ref-storage-backups].

[](){#ref-storage-backups}
### Backups
* **Occupancy ≥ 60%**: CSCS will ask users to take immediate action to remove unnecessary data.
* **Occupancy ≥ 80%**: CSCS will start manually removing files and folders without further notice.

!!! info "How do I ensure that important data is not cleaned up?"
    File systems with cleanup, namely [Scratch][ref-storage-scratch], are not intended for long term storage.
    Copy the data to a file system designed for file storage that does not have a cleanup policy, for example [Store][ref-storage-store].
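For example, data might be moved from Scratch to a project's Store path with a standard tool such as `rsync`. A sketch in which both paths are illustrative (the real per-project Store path is described in the Capstor documentation):

```console
$ rsync -av /capstor/scratch/cscs/$USER/results/ /capstor/store/cscs/g152/results/
```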
[](){#ref-storage-troubleshooting}
## Frequently asked questions

??? question "My files are gone, but the directories are still there"
    When the [cleanup policy][ref-storage-cleanup] is applied on Lustre file systems, the files are removed, but the directories remain.