Commit d1c0e91

Merge pull request hpcugent#1042 from boegel/fix_links_infrastructure
fix broken internal links on infrastructure page
2 parents 0b0629c + 867e353 commit d1c0e91

File tree

1 file changed: +7 −8 lines changed


mkdocs/docs/HPC/infrastructure.md

Lines changed: 7 additions & 8 deletions
```diff
@@ -13,8 +13,8 @@ Science and Innovation (EWI).
 Log in to the HPC-UGent Tier-2 infrastructure via [https://login.hpc.ugent.be](https://login.hpc.ugent.be)
 or using SSH via `login.hpc.ugent.be`.
 
-more info on using the web portal you can find [here](web_portal),
-and about connection with SSH [here](connecting).
+More info on using the web portal you can find [here](web_portal.md),
+and about connection with SSH [here](connecting.md).
 
 ## Tier-2 compute clusters
 
@@ -29,11 +29,9 @@ For basic information on using these clusters, see our
 | ***cluster name*** | ***# nodes*** | ***Processor architecture*** | ***Usable memory/node*** | ***Local diskspace/node*** | ***Interconnect*** | ***Operating system*** |
 | --- | --- | --- | --- | --- | --- | --- |
 | doduo (default cluster) | 128 | 2x 48-core AMD EPYC 7552 (Rome @ 2.2 GHz) | 250 GiB | 180GB SSD | HDR-100 InfiniBand | RHEL 9 |
-| gallade (*) | 16 | 2x 64-core AMD EPYC 7773X (Milan-X @ 2.2 GHz) | 940 GiB | 1.5 TB NVME | HDR-100 InfiniBand | RHEL 9 |
+| gallade | 16 | 2x 64-core AMD EPYC 7773X (Milan-X @ 2.2 GHz) | 940 GiB | 1.5 TB NVME | HDR-100 InfiniBand | RHEL 9 |
 | shinx | 48 | 2x 96-core AMD EPYC 9654 (Genoa @ 2.4 GHz) | 370 GiB | 500GB NVME | NDR-200 InfiniBand | RHEL 9 |
 
-(*) also see this [extra information](./only/gent/2023/donphan-gallade#gallade-large-memory-cluster)
-
 ### Interactive debug cluster
 
 
@@ -42,17 +40,18 @@ where you should always be able to get a job running quickly,
 **without waiting in the queue**.
 
 Intended usage is mainly for interactive work,
-either via an interactive job or using the [HPC-UGent web portal](web_portal).
+either via an interactive job or using the [HPC-UGent web portal](web_portal.md).
 
 This cluster is heavily over-provisioned, so jobs may
 run slower if the cluster is used more heavily.
 
 Strict limits are in place per user:
+
 * max. 5 jobs in queue
 * max. 3 jobs running
 * max. of 8 cores and 27GB of memory in total for running jobs
 
-For more information, see our [documentation](interactive_gent).
+For more information, see our [documentation](interactive_debug.md).
 
 | ***cluster name*** | ***# nodes*** | ***Processor architecture*** | ***Usable memory/node*** | ***Local diskspace/node*** | ***Interconnect*** | ***Operating system*** |
 | --- | --- | --- | --- | --- | --- | --- |
@@ -88,7 +87,7 @@ For more information on using these clusters, see our documentation.
 
 ^ Storage space for a group of users (Virtual Organisation or VO for short) can be
 increased significantly on request. For more information, see our
-[documentation](running_jobs_with_input_output_data#virtual-organisations).
+[documentation](running_jobs_with_input_output_data.md#virtual-organisations).
 
 ## Infrastructure status
 
```

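The fix in this commit amounts to appending `.md` to a handful of relative link targets so MkDocs can resolve them, while preserving any `#anchor` suffix. A minimal sketch of that rewrite as a `sed` filter (the `fix_links` function name and the pattern are illustrative assumptions, not the command actually used for this commit):

```shell
# Hypothetical helper: append ".md" to the bare relative link targets
# touched by this commit, keeping any "#anchor" suffix intact.
# External URLs are untouched because the pattern only matches these
# specific page names.
fix_links() {
  sed -E 's/\]\((web_portal|connecting|interactive_debug|running_jobs_with_input_output_data)(#[^)]*)?\)/](\1.md\2)/g'
}

echo 'see [here](web_portal) and our [documentation](running_jobs_with_input_output_data#virtual-organisations)' | fix_links
# → see [here](web_portal.md) and our [documentation](running_jobs_with_input_output_data.md#virtual-organisations)
```

In a MkDocs project, running `mkdocs build --strict` turns warnings about unrecognized relative links into build failures, which helps catch this kind of regression before it lands.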