The system underwent a significant software and hardware restructuring. All file system paths (indeed the file system itself) have changed, and there is a new batch scheduler and partition structure, so the config and the examples here are now heavily outdated.
Until the NESH material is updated, you might experiment with a SLURMCluster() specification yourself (without specifying the NESH config!), or use a Dask distributed LocalCluster (which, however, limits your calculations to a single compute node). For the latter, please open a JupyterLab session on a compute node with a script like jupyterlab-slurm.sh, e.g.
$ sbatch --cpus-per-task=32 --mem=180GB --time=01:00:00 jupyterlab-slurm.sh
$ sbatch --cpus-per-task=16 --mem=90GB --time=01:00:00 jupyterlab-slurm.sh
to request, e.g., a full or half compute node for one hour.
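Once the JupyterLab session is running on the allocated node, a LocalCluster can be started along the lines of the following minimal sketch. The worker/thread counts and memory limit here are illustrative assumptions, not NESH-specific values; they should match whatever you requested via sbatch:

```python
from dask.distributed import Client, LocalCluster
import dask.array as da

# Illustrative sizing (adjust to your sbatch request):
# n_workers * threads_per_worker should match --cpus-per-task,
# and memory_limit is *per worker*, so total = n_workers * memory_limit.
cluster = LocalCluster(n_workers=4, threads_per_worker=4, memory_limit="20GB")
client = Client(cluster)

# Quick sanity check that the workers are up and computing.
x = da.ones((1_000, 1_000), chunks=(250, 250))
total = x.sum().compute()
print(total)

client.close()
cluster.close()
```

A SLURMCluster from dask-jobqueue would instead submit its own batch jobs and can span several nodes, but without an up-to-date NESH config you would have to pass the new partition names and paths to its constructor yourself.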