* update the storage/filesystem docs to cover policies (backup, snapshots, cleanup, etc.) and the different file system types (store, scratch, home)
* small tweaks to the Alps storage docs
* link these to the rest of the documentation (e.g. to the platform docs)
* how many metadata servers are on Capstor and Iopsstor
* how these are distributed between store/scratch
Also discuss how Capstor and Iopsstor are used to provide the scratch, store, and other file systems.
The mounts, and how they are used for the Scratch, Store, and Home file systems mounted on the clusters, are documented in the [file system docs][ref-storage-fs].
[](){#ref-alps-capstor}
## Capstor
Capstor is the largest file system, for storing large amounts of input and output data.
It is used to provide [scratch][ref-storage-scratch] and [store][ref-storage-store].
!!! todo "add information about metadata services, and their distribution over scratch and store"
[](){#ref-alps-capstor-scratch}
### Scratch
Every user on Alps gets their own scratch path, `/capstor/scratch/cscs/$USER`.
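As a quick illustration, the path can be derived from the username in a job or setup script. This is only a sketch, assuming the `$USER` environment variable is set as usual on the clusters:

```shell
# Per-user scratch path on Capstor, as documented above
SCRATCH_DIR="/capstor/scratch/cscs/$USER"
echo "scratch: $SCRATCH_DIR"

# On a cluster you could then inspect it, e.g.:
# ls -ld "$SCRATCH_DIR"
```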
[](){#ref-alps-capstor-store}
### Store
The [Store][ref-storage-store] mount point on Capstor provides stable storage with [backups][ref-storage-backups] and no [cleaning policy][ref-storage-cleanup].
It is mounted on clusters at the `/capstor/store` mount point, with folders created for each project.
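For illustration, a sketch of how a project folder sits under that mount point; `myproject` is a hypothetical name, since the actual per-project folder names are assigned by CSCS:

```shell
# Documented mount point on the clusters
STORE_MOUNT="/capstor/store"
# Hypothetical per-project folder (real names are assigned per project)
PROJECT_DIR="$STORE_MOUNT/myproject"
echo "store: $PROJECT_DIR"
```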
[](){#ref-alps-iopsstor}
## Iopsstor
!!! todo
    small text explaining what Iopsstor is designed to be used for.
[](){#ref-alps-vast}
## VAST
The VAST storage is a smaller-capacity system that is designed for use as [Home][ref-storage-home] folders.
!!! todo
    small text explaining what VAST is designed to be used for.
`docs/services/jupyterlab.md` (1 addition, 1 deletion):

The service is accessed at [jupyter-daint.cscs.ch](https://jupyter-daint.cscs.ch).
Once logged in, you will be redirected to the JupyterHub Spawner Options form, where typical job configuration options can be selected in order to allocate resources. These options might include the type and number of compute nodes, the wall time limit, and your project account.
Single-node notebooks are launched in a dedicated queue, minimizing queueing time. For these notebooks, servers should be up and running within a few minutes. The maximum waiting time for a server to be running is 5 minutes, after which the job will be cancelled and you will be redirected back to the spawner options page. If your single-node server is not spawned within 5 minutes we encourage you to [contact us][ref-get-in-touch].
When resources are granted, the page redirects to the JupyterLab session, where you can browse, open, and execute notebooks on the compute nodes. A new notebook with a Python 3 kernel can be created via the menu `new` and then `Python 3`. Under `new` it is also possible to create new text files and folders, as well as to open a terminal session on the allocated compute node.