17 | 17 | :header-rows: 1 |
18 | 18 | :delim: ! |
19 | 19 |
20 | | - Command ! Option ! Description |
21 | | - ``sbatch``/``srun``/etc ! ``-t``, ``--time=``\ *HH:MM:SS* ! **time limit** |
22 | | - ! ``-t, --time=``\ *DD-HH* ! **time limit, days-hours** |
23 | | - ! ``-p, --partition=``\ *PARTITION*! **job partition. Usually leave off and things are auto-detected.** |
24 | | - ! ``--mem-per-cpu=``\ *N* ! **request n MB of memory per core** |
25 | | - ! ``--mem=``\ *N* ! **request n MB memory per node** |
26 | | - ! ``-c``, ``--cpus-per-task=``\ *N* ! **Allocate *n* CPU's for each task. For multithreaded jobs. (compare ``--ntasks``: ``-c`` means the number of cores for each process started.)** |
27 | | - ! ``-N``, ``--nodes=``\ *N-M* ! allocate minimum of n, maximum of m nodes. |
28 | | - ! ``-n``, ``--ntasks=``\ *N* ! allocate resources for and start *n* tasks (one task=one process started, it is up to you to make them communicate. However the main script runs only on first node, the sub-processes run with "srun" are run this many times.) |
29 | | - ! ``-J``, ``--job-name=``\ *NAME* ! short job name |
30 | | - ! ``-o`` *OUTPUTFILE* ! print output into file *output* |
31 | | - ! ``-e`` *ERRORFILE* ! print errors into file *error* |
| 20 | + Command ! Option ! Description |
| 21 | + ``sbatch``/``srun``/etc ! ``-t``, ``--time=HH:MM:SS`` ! **time limit** |
| 22 | + ! ``-t``, ``--time=DD-HH`` ! **time limit, days-hours** |
| 23 | + ! ``-p PARTITION``, ``--partition=PARTITION`` ! **job partition. Usually leave this off: the right partition is detected automatically.**
| 24 | + ! ``--mem-per-cpu=N`` ! **request N MB of memory per core** |
| 25 | + ! ``--mem=N`` ! **request N MB of memory per node**
| 26 | + ! ``-c``, ``--cpus-per-task=N`` ! **Allocate N CPUs for each task. For multithreaded jobs. (Compare ``--ntasks``: ``-c`` sets the number of cores for each started process.)**
| 27 | + ! ``-N``, ``--nodes=N-M`` ! allocate minimum of N, maximum of M nodes. |
| 28 | + ! ``-n``, ``--ntasks=N`` ! allocate resources for and start N tasks (one task = one started process; it is up to you to make them communicate. The main script runs only on the first node; sub-processes started with ``srun`` run this many times.)
| 29 | + ! ``--gpus=1`` ! request a GPU, or ``--gpus=N`` for multiple |
| 30 | + ! ``--gres=min-vram:NNg`` ! request GPUs with at least ``NN`` GB of VRAM. To combine with other ``--gres`` options, use ``--gres=min-vram:NNg,min-cuda-cc:NN``.
| 31 | + ! ``--gres=min-cuda-cc:NN`` ! request GPUs with CUDA compute capability of at least N.N. See above for combining with other GRES. |
| 32 | + ! ``-J``, ``--job-name=NAME`` ! short job name |
| 33 | + ! ``-o OUTPUTFILE`` ! write standard output to *OUTPUTFILE*
| 34 | + ! ``-e ERRORFILE`` ! write standard error to *ERRORFILE*
32 | 35 | ! ``--exclusive`` ! allocate exclusive access to nodes. For large parallel jobs. |
33 | | - ! ``--constraint=``\ *FEATURE* ! request *feature* (see ``slurm features`` for the current list of configured features, or Arch under the :ref:`hardware list <hardware-list>`). Multiple with ``--constraint="hsw|skl"``. |
| 36 | + ! ``--constraint=FEATURE`` ! request *feature* (see ``slurm features`` for the current list of configured features, or Arch under the :ref:`hardware list <hardware-list>`). Multiple with ``--constraint="hsw|skl"``. |
34 | 37 | ! ``--constraint=localdisk`` ! request nodes that have local disks |
35 | 38 | ! ``--tmp=nnnG`` ! Request ``nnn`` GB of :doc:`local disk storage space </triton/usage/localstorage>` |
36 | | - ! ``--array=``\ *0-5,7,10-15* ! Run job multiple times, use variable ``$SLURM_ARRAY_TASK_ID`` to adjust parameters. |
37 | | - ! ``--gpus=1`` ! request a GPU, or ``--gpus=N`` for multiple |
38 | | - ! ``--mail-type=``\ *TYPE* ! notify of events: ``BEGIN``, ``END``, ``FAIL``, ``ALL``, ``REQUEUE`` (not on triton) or ``ALL.`` MUST BE used with ``--mail-user=`` only |
39 | | - ! ``--mail-user=``\ *first.last@aalto.fi* ! Aalto email to send the notification about the job. External email addresses doesn't work. |
40 | | - ``srun`` ! ``-N`` *N_NODES* hostname ! Print allocated nodes (from within script) |
| 39 | + ! ``--array=0-5,7,10-15`` ! Run job multiple times; use variable ``$SLURM_ARRAY_TASK_ID`` to adjust parameters (see the array-job sketch after the table).
| 40 | + ! ``--mail-type=TYPE`` ! notify of events: ``BEGIN``, ``END``, ``FAIL``, ``REQUEUE`` (not on triton), or ``ALL``. Must be used together with ``--mail-user=``.
| 41 | + ! ``--mail-user=first.last@aalto.fi`` ! Aalto email address to send job notifications to. External email addresses don't work.
| 42 | + ``srun`` ! ``-N N_NODES hostname`` ! Print the allocated nodes (from within a script)
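
For orientation, here is a minimal sketch of a batch script combining several of the options above. The program name ``my_program`` is a placeholder, not something from this table:

.. code-block:: bash

   #!/bin/bash
   #SBATCH --time=0-04            # time limit in days-hours format (4 hours)
   #SBATCH --mem-per-cpu=500      # 500 MB of memory per core
   #SBATCH --cpus-per-task=4      # four cores for one multithreaded task
   #SBATCH --job-name=example
   #SBATCH --output=example.out   # standard output goes here

   srun my_program                # my_program is a placeholder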
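And a sketch of an array job requesting one GPU per array element; ``./train`` and the data-file naming scheme are hypothetical:

.. code-block:: bash

   #!/bin/bash
   #SBATCH --time=01:00:00
   #SBATCH --gpus=1               # one GPU per array element
   #SBATCH --array=0-5            # six independent jobs, IDs 0..5

   # Each array element sees its own $SLURM_ARRAY_TASK_ID and can use it
   # to pick its input (hypothetical naming scheme):
   srun ./train data_${SLURM_ARRAY_TASK_ID}.txt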