
Commit 8fb0f37

Merge branch 'main' into rewrite-linux-tutorial
1 parent a540e97 commit 8fb0f37

28 files changed, +281 -340 lines changed

intro-HPC/examples/Compiling-and-testing-your-software-on-the-HPC/mpihello.pbs

Lines changed: 1 addition & 1 deletion

@@ -12,6 +12,6 @@ cd $PBS_O_WORKDIR
 # load the environment
 
 module purge
-module load intel
+module load foss
 
 mpirun ./mpihello
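Since the toolchain switches from `intel` to `foss`, the `mpihello` binary itself must be rebuilt with the matching MPI compiler wrapper before resubmitting: a binary linked against Intel MPI will generally not launch correctly under Open MPI's `mpirun`. A minimal sketch, run on a login node and assuming the source file is named `mpihello.c`:

```shell
# load the same toolchain the jobscript now uses
module purge
module load foss
# rebuild the MPI binary with the Open MPI compiler wrapper
mpicc -o mpihello mpihello.c
# submit the updated jobscript
qsub mpihello.pbs
```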

intro-HPC/examples/HPC-UGent-GPU-clusters/TensorFlow_GPU.sh

Lines changed: 1 addition & 1 deletion

@@ -2,7 +2,7 @@
 #PBS -l walltime=5:0:0
 #PBS -l nodes=1:ppn=quarter:gpus=1
 
-module load TensorFlow/2.6.0-foss-2021a-CUDA-11.3.1
+module load TensorFlow/2.11.0-foss-2022a-CUDA-11.7.0
 
 cd $PBS_O_WORKDIR
 python example.py
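Pinned module versions like this one drift as clusters are updated, so it is worth checking what is actually installed before hard-coding a version. A short sketch using the standard module commands:

```shell
# list the TensorFlow builds installed on the current cluster
module avail TensorFlow/
# inspect what a specific build pulls in (CUDA, cuDNN, Python, ...)
module show TensorFlow/2.11.0-foss-2022a-CUDA-11.7.0
```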

intro-HPC/examples/Job-script-examples/multi_core.sh

Lines changed: 1 addition & 1 deletion

@@ -2,7 +2,7 @@
 #PBS -N mpi_hello ## job name
 #PBS -l nodes=2:ppn=all ## 2 nodes, all cores per node
 #PBS -l walltime=2:00:00 ## max. 2h of wall time
-module load intel/2017b
+module load foss/2023a
 module load vsc-mympirun ## We don't use a version here, this is on purpose
 # go to working directory, compile and run MPI hello world
 cd $PBS_O_WORKDIR
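The hunk ends just before the compile-and-run step. For reference, a sketch of how that step typically looks with `vsc-mympirun`, assuming the source file is named `mpi_hello.c`:

```shell
# mpicc from foss/2023a wraps GCC and Open MPI
mpicc -o mpi_hello mpi_hello.c
# mympirun reads the PBS node/core allocation itself,
# so no explicit process count is needed
mympirun ./mpi_hello
```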

intro-HPC/examples/Job-script-examples/single_core.sh

Lines changed: 1 addition & 1 deletion

@@ -2,7 +2,7 @@
 #PBS -N count_example ## job name
 #PBS -l nodes=1:ppn=1 ## single-node job, single core
 #PBS -l walltime=2:00:00 ## max. 2h of wall time
-module load Python/3.6.4-intel-2018a
+module load Python/3.11.3-GCCcore-12.3.0
 # copy input data from location where job was submitted from
 cp $PBS_O_WORKDIR/input.txt $TMPDIR
 # go to temporary working directory (on local disk) & run
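The pattern shown here (stage input into `$TMPDIR`, work on local disk) needs an explicit copy-back step, since `$TMPDIR` is cleaned up when the job ends. A sketch of how the rest of such a script could look, with `count.py` and `output.txt` as hypothetical names:

```shell
cd $TMPDIR
# hypothetical compute step reading the staged input
python $PBS_O_WORKDIR/count.py input.txt > output.txt
# copy results back before the job ends; $TMPDIR is wiped afterwards
cp output.txt $PBS_O_WORKDIR/
```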

intro-HPC/examples/MATLAB/jobscript.sh

Lines changed: 1 addition & 1 deletion

@@ -7,7 +7,7 @@
 #
 
 # make sure the MATLAB version matches with the one used to compile the MATLAB program!
-module load MATLAB/2018a
+module load MATLAB/2022b-r5
 
 # use temporary directory (not $HOME) for (mostly useless) MATLAB log files
 # subdir in $TMPDIR (if defined, or /tmp otherwise)

intro-HPC/examples/Multi-core-jobs-Parallel-Computing/mpi_hello.pbs

Lines changed: 1 addition & 1 deletion

@@ -11,6 +11,6 @@ cd $PBS_O_WORKDIR
 
 # load the environment
 
-module load intel
+module load foss
 
 mpirun ./mpi_hello

intro-HPC/examples/OpenFOAM/OpenFOAM_damBreak.sh

Lines changed: 2 additions & 2 deletions

@@ -2,7 +2,7 @@
 #PBS -l walltime=1:0:0
 #PBS -l nodes=1:ppn=4
 # check for more recent OpenFOAM modules with 'module avail OpenFOAM'
-module load OpenFOAM/6-intel-2018a
+module load OpenFOAM/11-foss-2023a
 source $FOAM_BASH
 # purposely not specifying a particular version to use most recent mympirun
 module load vsc-mympirun
@@ -15,7 +15,7 @@ export MYMPIRUN_VARIABLESPREFIX=WM_PROJECT,FOAM,MPI
 export WORKDIR=$VSC_SCRATCH_NODE/$PBS_JOBID # for single-node jobs
 mkdir -p $WORKDIR
 # damBreak tutorial, see also https://cfd.direct/openfoam/user-guide/dambreak
-cp -r $FOAM_TUTORIALS/multiphase/interFoam/laminar/damBreak/damBreak $WORKDIR
+cp -r $FOAM_TUTORIALS/incompressibleVoF/damBreakLaminar/damBreak $WORKDIR
 cd $WORKDIR/damBreak
 echo "working directory: $PWD"
 # pre-processing: generate mesh
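Note that the tutorial path changes along with the module: as the diff shows, the damBreak case no longer lives under `multiphase/interFoam/laminar` in the newer OpenFOAM release. A quick sketch for verifying the new location before relying on it:

```shell
# check where the damBreak tutorial lives in the loaded OpenFOAM version
module load OpenFOAM/11-foss-2023a
source $FOAM_BASH
ls $FOAM_TUTORIALS/incompressibleVoF/ | grep -i dambreak
```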

intro-HPC/examples/Program-examples/04_MPI_C/mpihello.pbs

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -13,6 +13,6 @@ cd $PBS_O_WORKDIR
1313
# load the environment
1414

1515
module purge
16-
module load intel
16+
module load foss
1717

1818
mpirun ./mpihello

mkdocs/docs/HPC/FAQ.md

Lines changed: 50 additions & 1 deletion
@@ -74,7 +74,7 @@ It is possible to use the modules without specifying a version or toolchain. How
 this will probably cause incompatible modules to be loaded. Don't do it if you use multiple modules.
 Even if it works now, as more modules get installed on the HPC, your job can suddenly break.
 
-## Troubleshooting jobs
+## Troubleshooting
 
 ### My modules don't work together
 
@@ -226,6 +226,29 @@ information, see .
 
 {% endif %}
 
+
+### Why do I get a "No space left on device" error, while I still have storage space left?
+
+When trying to create files, errors like this can occur:
+
+```shell
+No space left on device
+```
+
+The error "`No space left on device`" can mean two different things:
+
+- all available *storage quota* on the file system in question has been used;
+- the *inode limit* has been reached on that file system.
+
+An *inode* can be seen as a "file slot", meaning that when the limit is reached, no more additional files can be created.
+There is a standard inode limit in place that will be increased if needed.
+The number of inodes used per file system can be checked on [the VSC account page](https://account.vscentrum.be).
+
+Possible solutions to this problem include cleaning up unused files and directories or
+[compressing directories with a lot of files into zip- or tar-files](linux-tutorial/manipulating_files_and_directories.md#zipping-gzipgunzip-zipunzip).
+
+If the problem persists, feel free to [contact support](FAQ.md#i-have-another-questionproblem).
+
 ## Other
 
 ### Can I share my account with someone else?
@@ -350,6 +373,32 @@ See also: [Your UGent home drive and shares](running_jobs_with_input_output_data
 {% endif %}
 
 
+### My home directory is (almost) full, and I don't know why
+
+Your home directory might be full without looking like it, due to hidden files.
+Hidden files and subdirectories have a name starting with a dot and do not show up when running `ls`.
+If you want to check where the storage in your home directory is used, you can make use of the [`du` command](running_jobs_with_input_output_data.md#check-your-quota) to find out what the largest files and subdirectories are:
+
+```shell
+du -h --max-depth 1 $VSC_HOME | egrep '[0-9]{3}M|[0-9]G'
+```
+
+The `du` command returns the size of every file and subdirectory in the `$VSC_HOME` directory. This output is then piped into [`egrep`](linux-tutorial/beyond_the_basics.md#searching-file-contents-grep) to filter the lines down to the ones that matter most.
+
+The `egrep` command only lets through entries that match the regular expression `[0-9]{3}M|[0-9]G`, which corresponds to files and directories that consume 100 MB or more.
+
+
+### How can I get more storage space?
+
+
+[By default](running_jobs_with_input_output_data.md#quota) you get 3 GB of storage space for your home directory and 25 GB in your personal directories on both the data (`$VSC_DATA`) and scratch (`$VSC_SCRATCH`) filesystems.
+It is not possible to expand the storage quota for these personal directories.
+
+You can get more storage space through a [Virtual Organisation (VO)](running_jobs_with_input_output_data.md#virtual-organisations),
+which will give you access to [additional directories](running_jobs_with_input_output_data.md#vo-directories) in a subdirectory specific to that VO (`$VSC_DATA_VO` and `$VSC_SCRATCH_VO`).
+The moderators of a VO can [request more storage](running_jobs_with_input_output_data.md#requesting-more-storage-space) for their VO.
+
+
 ### Why can't I use the `sudo` command?
 
 When you attempt to use sudo, you will be prompted for a password.
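As a companion to the new inode FAQ entry above: the cleanup it recommends, packing directories with many small files into a single archive, can be sketched as follows (`dataset/` is a hypothetical directory name):

```shell
# count how many files (inodes) a directory tree currently uses
find $VSC_DATA -type f | wc -l
# pack a many-small-files directory into one archive,
# then remove the originals to free the inodes
tar -czf dataset.tar.gz dataset/ && rm -r dataset/
```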

mkdocs/docs/HPC/MATLAB.md

Lines changed: 4 additions & 4 deletions
@@ -31,7 +31,7 @@ license, licenses would quickly run out.
 
 Compiling MATLAB code can only be done from the login nodes, because
 only login nodes can access the MATLAB license server, workernodes on
-clusters can not.
+clusters cannot.
 
 To access the MATLAB compiler, the `MATLAB` module should be loaded
 first. Make sure you are using the same `MATLAB` version to compile and
@@ -93,7 +93,7 @@ with:
 <pre><code>$ <b>export _JAVA_OPTIONS="-Xmx64M"</b>
 </code></pre>
 
-The MATLAB compiler spawns multiple Java processes, and because of the
+The MATLAB compiler spawns multiple Java processes. Because of the
 default memory limits that are in effect on the login nodes, this might
 lead to a crash of the compiler if it's trying to create to many Java
 processes. If we lower the heap size, more Java processes will be able
@@ -122,7 +122,7 @@ controlled via the `parpool` function: `parpool(16)` will use 16
 workers. It's best to specify the amount of workers, because otherwise
 you might not harness the full compute power available (if you have too
 few workers), or you might negatively impact performance (if you have
-too much workers). By default, MATLAB uses a fixed number of workers
+too many workers). By default, MATLAB uses a fixed number of workers
 (12).
 
 You should use a number of workers that is equal to the number of cores
@@ -163,7 +163,7 @@ You should remove the directory at the end of your job script:
 ## Cache location
 
 When running, MATLAB will use a cache for performance reasons. This
-location and size of this cache can be changed trough the
+location and size of this cache can be changed through the
 `MCR_CACHE_ROOT` and `MCR_CACHE_SIZE` environment variables.
 
 The snippet below would set the maximum cache size to 1024MB and the
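Pulling the compilation-related pieces of this commit together (the newer `MATLAB/2022b-r5` module and the Java heap limit), a compile session on a login node could look roughly like the sketch below; `example.m` and the `mcc -m` invocation are illustrative assumptions, not taken from the repo:

```shell
# on a login node (worker nodes cannot reach the license server)
module load MATLAB/2022b-r5
# keep the Java heap small so the compiler's Java processes
# fit within the login-node memory limits
export _JAVA_OPTIONS="-Xmx64M"
# compile a hypothetical MATLAB script into a standalone binary
mcc -m example.m
```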
