[TBD] Review batching for GPU jobs #2856

@tcompa

Description

For GPU jobs, we should turn off batching (as in "multiply the CPU and memory requirements by the number of concurrent tasks") and only support internal queues (as in "pack 5 identical tasks into the same SLURM job, where they will run sequentially").

Note: in principle we could also drop the srun xx & srun yyy & wait pattern and move to simply listing the commands to be run. But not having sruns any more would be somewhat annoying, because we rely on them when we parse and analyze Fractal SLURM usage.
TBD.
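As a rough illustration of the two packing strategies (hypothetical task names; `echo` stands in for the actual `srun <task>` invocations that a real SLURM batch script would contain):

```shell
#!/bin/bash
# Batching pattern (to be turned OFF for GPU jobs):
# tasks run concurrently inside one SLURM job, so the job's
# CPU/memory request is multiplied by the number of tasks.
echo "run task 1" &
echo "run task 2" &
wait  # block until all concurrent tasks have finished

# Internal-queue pattern (kept for GPU jobs):
# the same tasks packed into one SLURM job but run sequentially,
# so a single task's resource request is sufficient.
echo "run task 1"
echo "run task 2"
```

The per-line srun calls would survive in both variants, which is what keeps the usage-parsing mentioned above working.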
