
Commit 39fb64a

Merge pull request #1204 from astronomy-commons/lsdb_1084
Add memory_limit=None section
2 parents 169c3d4 + 6c13073

File tree: 1 file changed, +16 -1 lines changed


docs/tutorials/dask-cluster-tips.rst

Lines changed: 16 additions & 1 deletion
@@ -97,6 +97,22 @@ memory allocation may need to be increased accordingly.
 
     new_catalog = catalog.map_partitions(my_func)
 
+How you configure Dask memory limits also depends on where you are running.
+On managed platforms where the scheduler enforces a strict memory cap for the
+entire allocation (for example, a SLURM job with a fixed memory request),
+setting ``memory_limit=None`` on the client can be a practical strategy. This
+removes per-worker caps so a single worker can temporarily use more memory to
+gather results near the end of a workflow, while the overall job is still
+bounded by the scheduler. On unmanaged shared nodes, avoid this setting because
+it can allow a single job to overrun the machine and disrupt other users.
+
+.. code-block:: python
+
+    from dask.distributed import Client
+
+    # Use only on managed allocations with a strict total memory limit.
+    client = Client(n_workers=8, threads_per_worker=1, memory_limit=None)
+
 
 
 Multiple Node Cluster
@@ -225,4 +241,3 @@ Understanding common Dask errors and warnings
 .............................................
 
 :doc:`Dask Messages Guide </tutorials/dask-messages-guide>`
-
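A note on the warning in the added paragraph: on an unmanaged shared node, the safer counterpart to ``memory_limit=None`` is an explicit per-worker cap. A minimal sketch, assuming eight single-threaded workers and an illustrative 4 GiB budget per worker (neither figure comes from the patch):

.. code-block:: python

    from dask.distributed import Client

    # On a shared node with no scheduler-enforced cap, keep an explicit
    # per-worker limit so workers spill to disk and pause before exhausting
    # the machine. The 4 GiB figure is an illustrative assumption.
    client = Client(n_workers=8, threads_per_worker=1, memory_limit="4GiB")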
