Commit dadaae5

Merge branch 'main' into VCUE-706_polish-ce-doc
2 parents: c4f78ac + 913db61

File tree

6 files changed: +32 −36 lines changed


docs/alps/storage.md

Lines changed: 14 additions & 16 deletions
@@ -3,9 +3,8 @@
 
 !!! under-construction
 
-Alps has different storage attached, each with characteristics suited to different workloads and use cases.
-HPC storage is managed in a separate cluster of nodes that host servers that manage the storage and the physical storage drives.
-These separate storage clusters are on the same Slingshot 11 network as Alps.
+The Alps infrastructure offers multiple storage solutions, each with characteristics suited to different workloads and use cases.
+HPC storage is provided by independent clusters, composed of servers and physical storage drives.
 
 | | Capstor | Iopsstor | VAST |
 |--------------|------------------------|------------------------|---------------------|
@@ -18,29 +17,25 @@ These separate storage clusters are on the same Slingshot 11 network as Alps.
 | IOPs | 1.5M | 8.6M read, 24M write | 200k read, 768k write |
 | file create/s| 374k | 214k | 97k |
 
-
-!!! todo
-    Information about Lustre. Meta data servers, etc.
-
-    * how many meta data servers on Capstor and Iopsstor
-    * how these are distributed between store/scratch
-
-    Also discuss how Capstor and iopstor are used to provide both scratch / store / other file systems
+Capstor and Iopsstor are on the same Slingshot network as Alps, while VAST is on the CSCS Ethernet network.
 
 The mounts, and how they are used for the Scratch, Store, and Home file systems on clusters, are documented in the [file system docs][ref-storage-fs].
 
 [](){#ref-alps-capstor}
 ## Capstor
 
-Capstor is the largest file system, for storing large amounts of input and output data.
+Capstor is the largest file system, and it is meant for storing large amounts of input and output data.
 It is used to provide [scratch][ref-storage-scratch] and [store][ref-storage-store].
 
-!!! todo "add information about meta data services, and their distribution over scratch and store"
+Capstor has 80 Object Storage Servers ([OSS](https://wiki.lustre.org/Lustre_Object_Storage_Service_(OSS))) and 6 Metadata Servers ([MDS](https://wiki.lustre.org/Lustre_Metadata_Service_(MDS))).
+Two of these Metadata Servers are dedicated to Store, and the remaining four are dedicated to Scratch.
 
 [](){#ref-alps-capstor-scratch}
 ### Scratch
 
 All users on Alps get their own scratch path, `/capstor/scratch/cscs/$USER`.
+Since Capstor OSSs are made of HDDs, Capstor is well suited for jobs that perform large sequential and parallel read/write operations.
+See the [Scratch documentation][ref-storage-scratch] for more information.
 
 [](){#ref-alps-capstor-store}
 ### Store
@@ -51,15 +46,18 @@ It is mounted on clusters at the `/capstor/store` mount point, with folders crea
 [](){#ref-alps-iopsstor}
 ## Iopsstor
 
-!!! todo
-    small text explaining what Iopsstor is designed to be used for.
+Iopsstor is a smaller file system than Capstor, but it leverages high-performance NVMe drives, which offer significantly better speed and responsiveness than traditional HDDs.
+It is primarily used as scratch space, and it is optimized for IOPS-intensive workloads.
+This makes it particularly well suited for applications that perform frequent, random read and write operations within files.
+
+Iopsstor has 20 OSSs and 2 MDSs.
 
 [](){#ref-alps-vast}
 ## VAST
 
 The VAST storage is a smaller-capacity system that is designed for use as [Home][ref-storage-home] folders.
 
 !!! todo
-    small text explaining what Iopsstor is designed to be used for.
+    small text explaining what VAST is designed to be used for.
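The per-user scratch path scheme described in the diff above is easy to compute in shell. A minimal sketch, where `user_name` is an illustrative value standing in for the `$USER` expansion used in the docs:

```shell
# Sketch: derive a user's Capstor scratch path following the
# /capstor/scratch/cscs/$USER scheme from the documentation.
# "alice" is an illustrative user name, not a real account.
user_name="alice"
scratch_path="/capstor/scratch/cscs/${user_name}"
echo "${scratch_path}"
```

On a real Alps cluster the same path is available directly via the `$SCRATCH` environment variable.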

docs/software/uenv/build.md

Lines changed: 2 additions & 2 deletions
@@ -26,8 +26,8 @@ uenv build <recipe> <label>
 uenv build $SCRATCH/recipes/myapp myapp/v3@daint%gh200
 ```
 
-The output of the above command will print a url that links to a status page, for you to follow the progress of the build.
-After a successful build, the uenv can be pulled using an address from the status page:
+The output of the above command will print a URL that links to a status page where you can follow the progress of the build.
+After a successful build, the uenv can be pulled using a name from the status page:
 
 ```bash
 uenv image pull service::myapp/v3:1669479716

docs/software/uenv/configure.md

Lines changed: 1 addition & 1 deletion
@@ -56,5 +56,5 @@ The default repo location for downloaded uenv images.
 The repo is selected according to the following process:
 
 * if the `--repo` CLI argument is given, it overrides
-* else if `color` is set in the config file, use that setting
+* else if `repo` is set in the config file, use that setting
 * else use the default value of `$SCRATCH/.uenv-images`
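The three-step precedence above can be sketched as a small shell function. This is an illustrative helper, not part of the real uenv CLI; the default string is the documented `$SCRATCH/.uenv-images` value, kept unexpanded here:

```shell
# Sketch of the repo-selection precedence described above.
# select_repo is a hypothetical helper for illustration only.
select_repo() {
    local cli_repo="$1" config_repo="$2"
    if [ -n "$cli_repo" ]; then
        echo "$cli_repo"                 # 1. --repo CLI argument wins
    elif [ -n "$config_repo" ]; then
        echo "$config_repo"              # 2. then the repo setting in the config file
    else
        echo '$SCRATCH/.uenv-images'     # 3. otherwise the documented default
    fi
}

select_repo "" ""               # prints the default: $SCRATCH/.uenv-images
select_repo "/tmp/my-repo" ""   # prints /tmp/my-repo
```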

docs/software/uenv/deploy.md

Lines changed: 8 additions & 8 deletions
@@ -2,7 +2,7 @@
 # Deploying uenv
 
 [](){#ref-uenv-deploy-versions}
-## Versioning and Labeling
+## Versioning and labeling
 
 Uenv images have a **label** of the following form:
 
@@ -57,7 +57,7 @@ The node type (microarchitecture) that the uenv is built for.
 
 ## Registries
 
-The following naming scheme is employed in the OCI container artifactory for uenv images:
+The following naming scheme is employed in the OCI container registry for uenv images:
 
 ```text
 namespace/system/uarch/name/version:tag
@@ -92,11 +92,11 @@ Specific uenv recipes are stored in `recipes/name/version/uarch/`.
 
 The `cluster` is specified when building and deploying the uenv, while the `tag` is specified when deploying the uenv.
 
-## uenv Deployment
+## uenv deployment
 
-### Deployment Rules
+### Deployment rules
 
-A recipe can be built for deployment on different vClusters, and for multiple targets.
+A recipe can be built for deployment on different clusters, and for multiple targets.
 For example:
 
 * A multicore recipe could be built for `zen2` or `zen3` nodes
@@ -154,15 +154,15 @@ uenv image copy build::<SOURCE> deploy::<DESTINATION> # (1)!
 
 1. `<DESTINATION>` must be fully qualified.
 
-!!! example "Deploy Using Image ID"
+!!! example "Deploy using image ID"
 
     Deploy a uenv from `build::` using the ID of the image:
 
    ```bash
    uenv image copy build::d2afc254383cef20 deploy::prgenv-nvfortran/24.11:v1@daint%gh200
    ```
 
-!!! example "Deploy Using Qualified Name"
+!!! example "Deploy using qualified name"
 
     Deploy a uenv using the qualified name:
 
@@ -174,7 +174,7 @@ uenv image copy build::<SOURCE> deploy::<DESTINATION> # (1)!
 
 The build image uses the CI/CD pipeline ID as the tag. You will need to choose an appropriate tag.
 
-!!! example "Deploy a uenv from One vCluster to Another"
+!!! example "Deploy a uenv from one cluster to another"
 
     You can also deploy a uenv from one vCluster to another.
    For example, if the `uenv` for `prgenv-gnu` has been deployed on `daint`,
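The `namespace/system/uarch/name/version:tag` scheme from the registry section above can be illustrated by assembling a path from its components. All concrete values below are illustrative, not a real deployed image:

```shell
# Sketch: assemble an artifact path following the documented naming scheme
# namespace/system/uarch/name/version:tag. Values are illustrative only.
namespace="deploy"      # e.g. the deploy namespace
system="daint"          # target cluster
uarch="gh200"           # node microarchitecture
name="prgenv-gnu"       # uenv name
version="24.11"
tag="v1"
artifact="${namespace}/${system}/${uarch}/${name}/${version}:${tag}"
echo "${artifact}"
```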

docs/software/uenv/index.md

Lines changed: 5 additions & 5 deletions
@@ -52,14 +52,14 @@ $ uenv --version
 
 On Alps clusters the most recent version 8.1.0 is installed.
 
-??? warning "out of date uenv version on Eiger and Balfrin"
+??? warning "Out of date uenv version on Eiger and Balfrin"
 
     The uenv tool available on Eiger and Balfrin is a different version than the one described below, and some commands will be different to those documented here.
 
     !!! note
        This note only applies to the current `eiger.cscs.ch` deployment.
 
-        The new [`eigen.alps.cscs.ch`][ref-cluster-eiger] deployment has version 8.1.0 of uenv installed.
+        The new [`eiger.alps.cscs.ch`][ref-cluster-eiger] deployment has version 8.1.0 of uenv installed.
 
     Please refer to `uenv --help` for the correct usage on these systems.
 
@@ -98,7 +98,7 @@ The available uenv images are stored in a registry, that can be queried using th
 The output above lists all of the uenv that are available on the current system ([Eiger][ref-cluster-eiger] in this case).
 The search can be refined by providing a [label][ref-uenv-labels].
 
-??? example "using labels to refine search"
+??? example "Using labels to refine search"
     ```bash
     # find all uenv with name prgenv-gnu
    uenv image find prgenv-gnu
@@ -117,7 +117,7 @@ The search can be refined by providing a [label][ref-uenv-labels].
    ```
 
 !!! info
-    All uenv commands that take a [label][ref-uenv-labels] as an arguement use the same flexible syntax [label descriptions][ref-uenv-labels-examples].
+    All uenv commands that take a [label][ref-uenv-labels] as an argument use the same flexible syntax [label descriptions][ref-uenv-labels-examples].
 
 ## Downloading uenv
 
@@ -527,7 +527,7 @@ echo "unset -f uenv" >> $HOME/.bashrc
 Before uenv can be used, you need to log out then back in again and type `which uenv` to verify that uenv has been installed in your `$HOME` path.
 
 [](){#ref-uenv-labels}
-## uenv Labels
+## uenv labels
 
 Uenv are referred to using **labels**, where a label has the following form

docs/storage/filesystems.md

Lines changed: 2 additions & 4 deletions
@@ -10,7 +10,7 @@
 
 - :fontawesome-solid-hard-drive: __File Systems__
 
-    There are three *types* of file system that provided on Alps clusters:
+    There are three *types* of file system that are provided on Alps clusters:
 
     | | [backups][ref-storage-backups] | [snapshot][ref-storage-snapshots] | [cleanup][ref-storage-cleanup] | access |
    | --------- | ---------- | ---------- | ----------- | --------- |
@@ -225,8 +225,6 @@ If you are in multiple projects, information for the [Store][ref-storage-store]
 ```console
 
 $ quota
-checking your quota
-
 Retrieving data ...
 
 User: user
@@ -332,7 +330,7 @@ In addition to the automatic deletion of old files, if occupancy exceeds 60% the
 ## Frequently asked questions
 
 ??? question "My files are gone, but the directories are still there"
-    When the [cleanup policy][ref-storage-cleanup] is applied on LUSTRE file systems, the files are removed, but the directories remain.
+    When the [cleanup policy][ref-storage-cleanup] is applied on Lustre file systems, the files are removed, but the directories remain.
 
 ??? question "What do messages like `mkdir: cannot create directory 'test': Disk quota exceeded` mean?"
    You have run out of quota on the target file system.
