Commit fae20b1

Merge pull request #1052 from hpcugent/herelinks
Replace "here"-links
2 parents: 3c71dbb + 151d666

12 files changed: +37 -24 lines changed


mkdocs/docs/HPC/FAQ.md

Lines changed: 2 additions & 2 deletions
@@ -319,8 +319,8 @@ Please send an e-mail to {{hpcinfo}} that includes:
 
 {% endif %}
 
-If the software is a Python package, you can manually install it in a virtual environment.
-More information can be found [here](./setting_up_python_virtual_environments.md).
+If the software is a Python package, you can manually
+[install it in a virtual environment](./setting_up_python_virtual_environments.md).
 Note that it is still preferred to submit a software installation request,
 as the software installed by the HPC team will be optimized for the HPC environment.
 This can lead to dramatic performance improvements.

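For context, the virtual-environment route that the new link points to comes down to a few shell commands. This is only a rough sketch; the Python module version and package name below are placeholders:

```shell
# Load a Python module first (the version shown is just an example)
module load Python/3.11.3-GCCcore-12.3.0

# Create and activate a virtual environment in your home directory
python -m venv $HOME/venvs/my_env
source $HOME/venvs/my_env/bin/activate

# Install the package that is not available as a centrally installed module (placeholder name)
pip install some-package
```
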
mkdocs/docs/HPC/alphafold.md

Lines changed: 4 additions & 4 deletions
@@ -20,8 +20,8 @@ It is therefore recommended to first familiarize yourself with AlphaFold. The fo
 - VSC webpage about AlphaFold: <https://www.vscentrum.be/alphafold>
 - Introductory course on AlphaFold by VIB: <https://elearning.vib.be/courses/alphafold>
 - "Getting Started with AlphaFold" presentation by Kenneth Hoste (HPC-UGent)
-- recording available [on YouTube](https://www.youtube.com/watch?v=jP9Qg1yBGcs)
-- slides available [here (PDF)](https://www.vscentrum.be/_files/ugd/5446c2_f19a8723f7f7460ebe990c28a53e56a2.pdf?index=true)
+- [recording available](https://www.youtube.com/watch?v=jP9Qg1yBGcs) (on YouTube)
+- [slides available](https://www.vscentrum.be/_files/ugd/5446c2_f19a8723f7f7460ebe990c28a53e56a2.pdf?index=true) (PDF)
 - see also <https://www.vscentrum.be/alphafold>
 
 
@@ -130,8 +130,8 @@ Likewise for `jackhmmer`, the core count can be controlled via `$ALPHAFOLD_JACKH
 
 ### CPU/GPU comparison
 
-The provided timings were obtained by executing the `T1050.fasta` example, as outlined in the Alphafold [README]({{readme}}).
-For the corresponding jobscripts, they are available [here](./example-jobscripts).
+The provided timings were obtained by executing the `T1050.fasta` example, as outlined in the [Alphafold README]({{readme}}).
+The [corresponding jobscripts](#example-jobscripts) are available.
 
 Using `--db_preset=full_dbs`, the following runtime data was collected:

mkdocs/docs/HPC/getting_started.md

Lines changed: 6 additions & 2 deletions
@@ -79,8 +79,12 @@ Make sure you can get to a shell access to the {{hpcinfra}} before proceeding wi
 
 Now that you can login, it is time to transfer files from your local computer to your **home directory** on the {{hpcinfra}}.
 
-Download [tensorflow_mnist.py](https://raw.githubusercontent.com/hpcugent/vsc_user_docs/main/{{exampleloc}}/tensorflow_mnist.py)
-and [run.sh](https://raw.githubusercontent.com/hpcugent/vsc_user_docs/main/{{exampleloc}}/run.sh) example scripts to your computer (from [here](https://github.com/hpcugent/vsc_user_docs/tree/main/{{exampleloc}})).
+Download the following example scripts to your computer:
+
+- [tensorflow_mnist.py](https://raw.githubusercontent.com/hpcugent/vsc_user_docs/main/{{exampleloc}}/tensorflow_mnist.py)
+- [run.sh](https://raw.githubusercontent.com/hpcugent/vsc_user_docs/main/{{exampleloc}}/run.sh)
+
+You can also find the example scripts in our git repo: [https://github.com/hpcugent/vsc_user_docs/](https://github.com/hpcugent/vsc_user_docs/tree/main/mkdocs/docs/HPC/examples/Getting_Started/tensorflow_mnist).
 
 {%- if OS == windows %}

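As an aside, on a local Linux or macOS machine the download-and-transfer step described above could look roughly like this; the VSC account name is a placeholder and `{{exampleloc}}` is the same template variable used in the links:

```shell
# Download the example scripts to your local machine
curl -O https://raw.githubusercontent.com/hpcugent/vsc_user_docs/main/{{exampleloc}}/tensorflow_mnist.py
curl -O https://raw.githubusercontent.com/hpcugent/vsc_user_docs/main/{{exampleloc}}/run.sh

# Copy them to your home directory on the HPC (replace vsc40000 with your own VSC account)
scp tensorflow_mnist.py run.sh vsc40000@login.hpc.ugent.be:
```
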
mkdocs/docs/HPC/infrastructure.md

Lines changed: 2 additions & 2 deletions
@@ -13,8 +13,8 @@ Science and Innovation (EWI).
 Log in to the HPC-UGent Tier-2 infrastructure via [https://login.hpc.ugent.be](https://login.hpc.ugent.be)
 or using SSH via `login.hpc.ugent.be`.
 
-More info on using the web portal you can find [here](web_portal.md),
-and about connection with SSH [here](connecting.md).
+Read more info on [using the web portal](web_portal.md),
+and [about making a connection with SSH](connecting.md).
 
 ## Tier-2 compute clusters

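For reference, the SSH route mentioned in this hunk is a single command; a minimal sketch, assuming a placeholder VSC account name:

```shell
# Log in to the HPC-UGent Tier-2 login nodes over SSH
# (replace vsc40000 with your own VSC account name)
ssh vsc40000@login.hpc.ugent.be
```
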
mkdocs/docs/HPC/jupyter.md

Lines changed: 3 additions & 1 deletion
@@ -89,7 +89,9 @@ $ module load SciPy-bundle/2023.11-gfbf-2023b
 ```
 This throws no errors, since this module uses a toolchain that is compatible with the toolchain used by the notebook
 
-If we use a different SciPy module that uses an incompatible toolchain, we will get a module load conflict when trying to load it (For more info on these errors, see [here](troubleshooting.md#module-conflicts)).
+If we use a different SciPy module that uses an incompatible toolchain,
+we will get a module load conflict when trying to load it
+(for more info on these errors, consult the [troubleshooting page](troubleshooting.md#module-conflicts)).
 
 ```shell
 $ module load JupyterNotebook/7.2.0-GCCcore-13.2.0

mkdocs/docs/HPC/multi_core_jobs.md

Lines changed: 1 addition & 1 deletion
@@ -47,7 +47,7 @@ MPI.
 
 !!! warning
 Just requesting more nodes and/or cores does not mean that your job will automatically run faster.
-You can find more about this [here](troubleshooting.md#job_does_not_run_faster).
+This is explained on the [troubleshooting page](troubleshooting.md#job_does_not_run_faster).
 
 ## Parallel Computing with threads

mkdocs/docs/HPC/multi_job_submission.md

Lines changed: 5 additions & 1 deletion
@@ -159,7 +159,11 @@ a parameter instance is called a work item in Worker parlance.
 ```
 module swap cluster/donphan
 ```
-We recommend using a `module swap cluster` command after submitting the jobs. Additional information about this as well as more comprehensive details concerning the 'Illegal instruction' error can be accessed [here](troubleshooting.md#multi-job-submissions-on-a-non-default-cluster).
+
+We recommend using a `module swap cluster` command after submitting the jobs.
+Additional information about this as well as more comprehensive details
+concerning the 'Illegal instruction' error can be found
+on [the troubleshooting page](troubleshooting.md#multi-job-submissions-on-a-non-default-cluster).
 
 ## The Worker framework: Job arrays
 [//]: # (sec:worker-framework-job-arrays)

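To make the recommendation above concrete, a typical sequence might look like the sketch below. The jobscript, data file, and cluster names are placeholders, and the `wsub` options are assumed from the Worker documentation:

```shell
# Swap to the target cluster and submit the parameterized job with Worker
module swap cluster/donphan
wsub -batch jobscript.pbs -data parameters.csv

# Swap back afterwards, as recommended (the default cluster name is a placeholder)
module swap cluster/doduo
```
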
mkdocs/docs/HPC/only/gent/2023/donphan-gallade.md

Lines changed: 4 additions & 2 deletions
@@ -15,7 +15,7 @@ For software installation requests, please use the [request form](https://www.ug
 
 `donphan` is the new debug/interactive cluster.
 
-It replaces `slaking`, which will be retired on **Monday 22 May 2023**.
+It replaces `slaking`, which was retired on **Monday 22 May 2023**.
 
 It is primarily intended for interactive use: interactive shell sessions, using GUI applications through the
 [HPC-UGent web portal](../../../web_portal.md), etc.
@@ -135,4 +135,6 @@ a `gallade` workernode has 128 cores (so ~7.3 GiB per core on average), while a
 (so ~20.5 GiB per core on average).
 
 It is important to take this aspect into account when submitting jobs to `gallade`, especially when requesting
-all cores via `ppn=all`. You may need to explictly request more memory (see also [here](../../../fine_tuning_job_specifications#pbs_mem)).
+all cores via `ppn=all`.
+You may need to explicitly request more memory by
+[setting the memory parameter](../../../fine_tuning_job_specifications#pbs_mem).

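As a sketch of what setting the memory parameter can look like in a jobscript header, assuming placeholder values that should be tuned to the actual job and cluster:

```shell
#!/bin/bash
#PBS -l nodes=1:ppn=all
#PBS -l mem=500gb
#PBS -l walltime=12:00:00

# rest of the jobscript goes here
```
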
mkdocs/docs/HPC/running_batch_jobs.md

Lines changed: 3 additions & 1 deletion
@@ -833,7 +833,9 @@ The output of the various commands interacting with jobs (`qsub`,
 It is possible to submit jobs from a job to a cluster different than the one your job is running on.
 This could come in handy if, for example, the tool used to submit jobs only works on a particular cluster
 (or only on the login nodes), but the jobs can be run on several clusters.
-An example of this is the `wsub` command of `worker`, see also [here](troubleshooting.md#multi-job-submissions-on-a-non-default-cluster).
+An example of this is the `wsub` command of `worker`.
+More info on these commands can be found in the document on [multi job submission](multi_job_submission.md)
+or on the [troubleshooting page](troubleshooting.md#multi-job-submissions-on-a-non-default-cluster).
 
 To submit jobs to the `{{othercluster}}` cluster, you can change only what is needed in your session environment
 to submit jobs to that particular cluster by using `module swap env/slurm/{{othercluster}}` instead of using

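To make the env-swap route above concrete, a minimal sketch (the cluster name comes from the `{{othercluster}}` template variable, the jobscript name is a placeholder):

```shell
# Point only the job-submission environment at the other cluster
module swap env/slurm/{{othercluster}}

# Jobs submitted from now on in this session go to that cluster
qsub other_job.sh
```
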
mkdocs/docs/HPC/setting_up_python_virtual_environments.md

Lines changed: 2 additions & 1 deletion
@@ -363,7 +363,8 @@ $ python
 Illegal instruction (core dumped)
 ```
 
-we are presented with the illegal instruction error. More info on this [here](troubleshooting.md#illegal-instruction-error)
+we are presented with the illegal instruction error.
+More info on this can be found on the [troubleshooting page](troubleshooting.md#illegal-instruction-error).
 
 ### Error: GLIBC not found
