Use smarty markdown plugin for proper em-dashes etc. (#258)
See https://python-markdown.github.io/extensions/smarty/ for details. As
far as I understand, specific replacements can be disabled if needed.
Double and triple hyphens in code blocks are of course not changed.
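For anyone who wants to see the substitutions locally, the extension can be exercised through Python-Markdown's command-line interface. The snippet below is only a quick sketch, not part of the change itself: it assumes the `markdown` package is installed (plus PyYAML if a YAML config file is used), and `smarty.yml` is a hypothetical file shown purely to illustrate that individual replacements can be switched off.

```bash
# Quick check of what the smarty extension does: '---' should come out as an
# em-dash entity and '--' as an en-dash entity in the generated HTML.
echo 'A dash---like this one---and a range 10--20.' | python -m markdown -x smarty

# Individual replacements can be disabled through an extension configs file.
# Hypothetical smarty.yml: keep dash/ellipsis substitution, leave quotes as typed.
cat > smarty.yml <<'EOF'
smarty:
  smart_quotes: false
EOF
echo '"Quoted" text is left as typed, but dashes---still change.' | python -m markdown -x smarty --extension_configs smarty.yml
```

If the site is built with MkDocs, enabling the extension is typically just a matter of listing `smarty` under `markdown_extensions` in the site configuration; that configuration change itself is not shown in this excerpt.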
docs/platforms/hpcp/index.md: 1 addition & 1 deletion

@@ -67,7 +67,7 @@ The Store (or Project) file system is provided as a space to store datasets, cod
 The environment variable `$STORE` can be used as a shortcut to access the Store folder of your primary project.

 Hard limits on the amount of data and number of files (inodes) will prevent you from writing to [Store][ref-storage-store] if your quotas are exceeded.
-You can check how much data and inodes you are consuming -- and their respective quotas -- by running the [`quota`][ref-storage-quota] command on a login node.
+You can check how much data and inodes you are consuming---and their respective quotas---by running the [`quota`][ref-storage-quota] command on a login node.

 !!! warning
     It is not recommended to write directly to the `$STORE` path from batch jobs.
docs/running/slurm.md: 3 additions & 3 deletions

@@ -119,7 +119,7 @@ The following sections will document how to use Slurm on different compute nodes
 To demonstrate the effects different Slurm parameters, we will use a little command line tool [affinity](https://github.com/bcumming/affinity) that prints the CPU cores and GPUs that are assigned to each MPI rank in a job, and which node they are run on.

 We strongly recommend using a tool like affinity to understand and test the Slurm configuration for jobs, because the behavior of Slurm is highly dependent on the system configuration.
-Parameters that worked on a different cluster -- or with a different Slurm version or configuration on the same cluster -- are not guaranteed to give the same results.
+Parameters that worked on a different cluster---or with a different Slurm version or configuration on the same cluster---are not guaranteed to give the same results.

 It is straightforward to build the affinity tool to experiment with Slurm configurations.

@@ ... @@
-In the above examples all threads on each -- we are effectively allowing the OS to schedule the threads on the available set of cores as it sees fit.
+In the above examples all threads on each---we are effectively allowing the OS to schedule the threads on the available set of cores as it sees fit.
 This often gives the best performance, however sometimes it is beneficial to bind threads to explicit cores.

 The OpenMP threading runtime provides additional options for controlling the pinning of threads to the cores assigned to each MPI rank.

@@ -599,7 +599,7 @@ First ensure that *all* resources are allocated to the whole job with the follow
 Next, launch your applications using `srun`, carefully subdividing resources for each job step.
 The `--exclusive` flag must be used again, but note that its meaning differs in the context of `srun`.
 Here, `--exclusive` ensures that only the resources explicitly requested for a given job step are reserved and allocated to it.
-Without this flag, Slurm reserves all resources for the job step, even if it only allocates a subset -- effectively blocking further parallel `srun` invocations from accessing unrequested but needed resources.
+Without this flag, Slurm reserves all resources for the job step, even if it only allocates a subset---effectively blocking further parallel `srun` invocations from accessing unrequested but needed resources.

 Be sure to background each `srun` command with `&`, so that subsequent job steps start immediately without waiting for previous ones to finish.
 A final `wait` command ensures that your submission script does not exit until all job steps complete.
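As a side note on the slurm.md passage above: the pattern it describes (allocate everything to the job, subdivide with `srun --exclusive`, background each step, finish with `wait`) is compact enough to sketch in a few lines. The script below is only an illustration, not taken from the docs; the application names and the task/CPU counts are placeholders.

```bash
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --exclusive          # request the node exclusively so the job owns all of its resources

# Each job step claims only a subset of the allocation; in the srun context,
# --exclusive reserves just the resources the step requests, so the two steps
# can run side by side. ./app_a, ./app_b and the counts are placeholders.
srun --exclusive --ntasks=2 --cpus-per-task=16 ./app_a &
srun --exclusive --ntasks=2 --cpus-per-task=16 ./app_b &

# Keep the batch script alive until every backgrounded job step has finished.
wait
```

Submitted with `sbatch`, both steps start immediately and the script returns only once both have completed.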
docs/software/cw/wrf.md: 1 addition & 1 deletion

@@ -35,7 +35,7 @@ The [`prgenv-gnu`][ref-uenv-prgenv-gnu] uenv is suitable for building WRF.
 ```
 uenv start prgenv-gnu/24.11:v2 --view=spack
 ```
-In this example we use the latest version of `prgenv-gnu` on Eiger at the time of writing -- check the `prgenv-gnu`[guide][ref-uenv-prgenv-gnu] for the latest version.
+In this example we use the latest version of `prgenv-gnu` on Eiger at the time of writing---check the `prgenv-gnu`[guide][ref-uenv-prgenv-gnu] for the latest version.

 ```bash
 # build the latest version provided by the version of Spack used by prgenv-gnu