Hi everyone,
First of all, I wanted to congratulate the developers at large. This software fills that small but fundamental gap that most of us have always filled with endless and unsustainable bash scripts.
I work in HPC and would like more control over the number of cores each task in a process uses, for example to study scalability. I couldn't find anything like this, so if the feature does not already exist I would gladly add it, perhaps with some help.
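To give a concrete picture, this is the kind of throwaway Slurm script I end up maintaining by hand for a scalability study (purely illustrative; run_task.sh and the core counts are placeholders, not part of any tool):

#!/bin/bash
# Manual scalability sweep: run the same task with an increasing number of
# cores per task and keep a log for each run. This is exactly the sort of
# unsustainable one-off script the requested feature would replace.
for NCORES in 1 2 4 8 16; do
    # -n 1: one task; -c "$NCORES": cores per task (Slurm's --cpus-per-task)
    srun -n 1 -c "$NCORES" ./run_task.sh > "scaling_${NCORES}cores.log" 2>&1
done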
The same holds for Singularity with MPI parallelism: currently the script under the 'script:' section is run as
singularity exec $script
whereas it should be run as
srun -n $TASKS singularity exec $script
If anyone has found a workaround, please let me know.
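For reference, the closest workaround I can picture is a thin wrapper placed ahead of the real binary in PATH, so that every singularity call is launched through srun (untested sketch; the wrapper idea, the path to the real binary, and the TASKS variable are my own placeholders, not something the tool provides):

#!/bin/bash
# singularity (wrapper) -- hypothetical, not part of any tool.
# Put ahead of the real binary in PATH so that
#   singularity exec $script
# effectively becomes
#   srun -n $TASKS singularity exec $script
exec srun -n "${TASKS:?TASKS is not set}" /usr/bin/singularity "$@"

I have not verified that this interacts cleanly with how the job script is generated, so a native option would still be much nicer.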
Best,
Nicolas