Welcome to the 2026 Slurm Workshop! This 4-hour session is designed to help you move beyond basic job submission and master efficient, scalable computing on the University Cluster.
- Viswanathan Satheesh
- Rick Masonbrink
- Sharu Paul Sharma
If you have any questions/suggestions: gifhelp@iastate.edu
- Understand how to measure and optimize job efficiency (`seff`).
- Learn when to use GNU Parallel vs. Slurm Job Arrays.
- Master the syntax for running thousands of jobs without crashing the scheduler.
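To ground these objectives, here is a minimal sketch of the kind of batch script covered in Part 2. The job name, resource requests, and log filename are placeholders, not prescribed workshop values; adjust them to your cluster's partitions and limits.

```shell
#!/usr/bin/env bash
#SBATCH --job-name=demo          # placeholder job name
#SBATCH --time=00:10:00          # wall-time request (HH:MM:SS)
#SBATCH --cpus-per-task=4        # CPU cores for this task
#SBATCH --mem=8G                 # memory request
#SBATCH --output=demo_%j.log     # %j expands to the Slurm job ID

# Replace this with a single real analysis step.
echo "Running on $(hostname) with ${SLURM_CPUS_PER_TASK:-4} CPUs"
```

After the job completes, `seff <jobid>` reports CPU and memory efficiency, which tells you whether the `--cpus-per-task` and `--mem` requests above were right-sized.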
```mermaid
graph LR

root((Slurm Workshop)) --> P1[Part 1: Scripts]
root --> P2[Part 2: Batch Jobs]
root --> P3[Part 3: Arrays]
root --> P4[Part 4: Strategy]
P1 --> W[Workspace Setup]
P1 --> B[Benchmarking]
P1 --> G[GNU Parallel]
P2 --> S[SLURM Basics]
P2 --> J[Job Submission]
P2 --> E[Efficiency / seff]
P3 --> A[Job Arrays]
P3 --> T[Task Logs]
P3 --> F[Failure Recovery]
P4 --> D[Decision Matrix]
P4 --> R[Real-World Scenarios]
```
The workshop material is split into two halves, optimized for a 4-hour session:
- Part 1: Building and Benchmarking Scripts - Setting up the workspace, analyzing running times, and migrating from sequential loops to GNU Parallel.
- Part 2: Introduction to SLURM & Batch Jobs - Core concepts, submitting `sbatch` scripts, assessing resource efficiency with `seff`, and tracking down errors in failed jobs.
- Part 3: Scaling with SLURM Arrays - Replacing manual loops with Job Arrays, mapping inputs to array indices, recording separate logs, and isolating failed tasks.
- Part 4: Strategy, Flowcharts & Wrap-up - Using the decision matrix to pick the right strategy for your pipeline—whether that's GNU Parallel, SLURM arrays, or task grouping.
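As a taste of Part 3's index-to-input mapping, the sketch below shows the common pattern of using `SLURM_ARRAY_TASK_ID` to select one line from a file list. The filename `samples.txt` and its contents are hypothetical; outside a real array job we set the index manually, whereas inside one Slurm exports it automatically.

```shell
#!/usr/bin/env bash
# Hypothetical input list: one sample name per line.
printf 'sampleA\nsampleB\nsampleC\n' > samples.txt

# Inside an array job submitted with --array=1-3, Slurm sets this
# variable; here we default it to 2 so the sketch runs standalone.
SLURM_ARRAY_TASK_ID=${SLURM_ARRAY_TASK_ID:-2}

# Pick the line of samples.txt matching this task's index.
sample=$(sed -n "${SLURM_ARRAY_TASK_ID}p" samples.txt)
echo "Task ${SLURM_ARRAY_TASK_ID} processes: ${sample}"
# prints: Task 2 processes: sampleB
```

Because each array task derives its input purely from its index, thousands of tasks can share one script with no per-task editing, which is the core advantage over hand-written loops.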
- Connect to the cluster via VSCode OnDemand.
- Follow the setup and preamble instructions at the very beginning of Parts 1 and 2 to initialize your workspace and start the workshop!