
Commit 4d40f68

More local disks updates
1 parent f8a1359 commit 4d40f68

File tree

4 files changed: +11 −8 lines

triton/ref/slurm.rst

Lines changed: 1 addition & 0 deletions
```diff
@@ -32,6 +32,7 @@
 ! ``--exclusive`` ! allocate exclusive access to nodes. For large parallel jobs.
 ! ``--constraint=``\ *FEATURE* ! request *feature* (see ``slurm features`` for the current list of configured features, or Arch under the :ref:`hardware list <hardware-list>`). Multiple with ``--constraint="hsw|skl"``.
 ! ``--constraint=localdisk`` ! request nodes that have local disks
+! ``--tmp=nnnG`` ! Request ``nnn`` GB of :doc:`local disk storage space </triton/usage/localstorage>`
 ! ``--array=``\ *0-5,7,10-15* ! Run job multiple times, use variable ``$SLURM_ARRAY_TASK_ID`` to adjust parameters.
 ! ``--gres=gpu`` ! request a GPU, or ``--gres=gpu:``\ *n* for multiple
 ! ``--mail-type=``\ *TYPE* ! notify of events: ``BEGIN``, ``END``, ``FAIL``, ``ALL``, ``REQUEUE`` (not on triton) or ``ALL.`` MUST BE used with ``--mail-user=`` only
```
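The two local-disk rows in the table (``--constraint=localdisk`` and the newly added ``--tmp=nnnG``) can be combined in one batch script. A minimal sketch; the resource sizes and time limit are illustrative placeholders, not recommendations:

```shell
#!/bin/bash
# Sketch of a job script using the local-disk options from the table above.
#SBATCH --constraint=localdisk   # only nodes that have a physical local disk
#SBATCH --tmp=100G               # node must have >= 100 GB of local /tmp space
#SBATCH --mem=4G
#SBATCH --time=01:00:00

# Work in a private subdirectory of the node-local /tmp.
# (SLURM_JOB_ID is set by Slurm inside a job; the fallback is for local testing.)
WORKDIR=$(mktemp -d /tmp/"${SLURM_JOB_ID:-local}".XXXXXX)
echo "working in $WORKDIR"

# ... run the actual computation here, writing scratch files to $WORKDIR ...

# Clean up: local /tmp is wiped after the job anyway, but being explicit
# avoids filling the disk for other jobs still running on the same node.
rm -rf "$WORKDIR"
```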

triton/ref/storage.rst

Lines changed: 2 additions & 2 deletions
```diff
@@ -6,5 +6,5 @@
 Home | ``$HOME`` or ``/home/USERNAME/`` | hard quota 10GB | Nightly | all nodes | Small user specific files, no calculation data.
 Work | ``$WRKDIR`` or ``/scratch/work/USERNAME/`` | 200GB and 1 million files | x | all nodes | Personal working space for every user. Calculation data etc. Quota can be increased on request.
 Scratch | ``/scratch/DEPT/PROJECT/`` | on request | x | all nodes | Department/group specific project directories.
-Local temp | ``/tmp/`` (nodes with disks only) | local disk size | x | single-node | (Usually fastest) place for single-node calculation data. Removed once user's jobs are finished on the node. Request with ``--constraint=localdisk``.
-ramfs | ``/dev/shm/`` (and ``/tmp/`` on diskless nodes) | limited by memory | x | single-node | Very fast but small in-memory filesystem
+:doc:`Local temp (disk) </triton/usage/localstorage>` | ``/tmp/`` (nodes with disks only) | local disk size | x | single-node | (Usually fastest) place for single-node calculation data. Removed once user's jobs are finished on the node. Request with ``--tmp=nnnG`` or ``--constraint=localdisk``.
+:doc:`Local temp (ramfs) </triton/usage/localstorage>` | ``/dev/shm/`` (and ``/tmp/`` on diskless nodes) | limited by memory | x | single-node | Very fast but small in-memory filesystem
```

triton/tut/storage.rst

Lines changed: 7 additions & 5 deletions
```diff
@@ -152,11 +152,9 @@ Local disks
 with single-node jobs and is cleaned up after job is finished. Not
 all nodes have them: some don't have any disks, and ``/tmp`` is also
 in-memory ramfs. If you want to ensure you have local storage,
-submit your job with ``--constraint=localdisk``.
+submit your job with ``--constraint=localdisk`` and/or ``--tmp=nnnG``.
 
-See the :doc:`Compute
-node local drives <../usage/localstorage>` page for further details and script
-examples.
+See :doc:`../usage/localstorage`
 
 .. _ramfs-description:
 
@@ -170,7 +168,11 @@ temporary files that don't need to last long. Note that this is no
 different than just holding the data in memory, if you can hold in
 memory that's better.
 
-ramfs counts against the memory of your job or user session.
+ramfs counts against the memory of your job or user session, so your
+``--mem`` value must be increased based on your storage space.
+
+See :doc:`../usage/localstorage` (which also covers ramfs).
+
 
 Other Aalto data storage locations
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
```
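The ramfs hunk above says files under ``/dev/shm`` count against the job's ``--mem``. A small sketch of what that means in practice; the directory naming and the 64 MB scratch file are illustrative, and this assumes a Linux node where ``/dev/shm`` is mounted:

```shell
# Sketch: using in-memory /dev/shm as fast scratch space.
SCRATCH=$(mktemp -d /dev/shm/"${USER:-user}".XXXXXX)

# Files here live in RAM, so their size counts against the job's memory:
# write a 64 MB scratch file as a stand-in for real intermediate data.
dd if=/dev/zero of="$SCRATCH/scratch.dat" bs=1M count=64 status=none

# Rule of thumb from the text above: --mem must cover the computation's
# working memory PLUS everything stored under /dev/shm (here, ~64 MB extra).
du -sh "$SCRATCH"

# Clean up so the memory is released for other jobs on the node.
rm -rf "$SCRATCH"
```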

triton/usage/localstorage.rst

Lines changed: 1 addition & 1 deletion
```diff
@@ -61,7 +61,7 @@ double check from within the cluster, you can verify node info with
 tens of TB.
 
 You have to use ``--constraint=localdisk`` to ensure that you get a
-hard disk. You can use ``--tmp=NNNG`` (for example ``--tmp=100G``) to
+hard disk. You can use ``--tmp=nnnG`` (for example ``--tmp=100G``) to
 request a node with at least that much temporary space. But,
 ``--tmp`` doesn't allocate this space just for you: it's shared among
 all users, including those which didn't request storage space. So,
```
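Since the text notes that ``--tmp`` only filters nodes and does not reserve the space for you, a job can defensively check what is actually free when it starts. A sketch; the 1 MB threshold is a placeholder so the snippet runs anywhere, and a real job would substitute its actual requirement:

```shell
# Sketch: --tmp does not reserve space, so check the real free space at
# job start before writing large scratch files.
avail_kb=$(df -Pk /tmp | awk 'NR==2 {print $4}')
echo "free space under /tmp: ${avail_kb} kB"

# Bail out early if there is clearly not enough room.
need_kb=$((1 * 1024))   # 1 MB placeholder; use your job's real requirement
if [ "$avail_kb" -lt "$need_kb" ]; then
    echo "not enough local disk space, aborting" >&2
    exit 1
fi
```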
