Commit 483720d

Merge pull request #762 from AaltoSciComp/gpu_update
Updating GPU table
2 parents db79c6c + 5ca3e78

1 file changed: +14, −10 lines

triton/ref/gpu.rst

@@ -3,14 +3,18 @@
 :header-rows: 1

 Card | Slurm partition (``--partition=``) | Slurm feature name (``--constraint=``) | Slurm gres name (``--gres=gpu:NAME:n``) | total amount | nodes | architecture | compute threads per GPU | memory per card | CUDA compute capability
-Tesla K80\* | *Not available* | ``kepler`` | ``teslak80`` | 12 | gpu[20-22] | Kepler | 2x2496 | 2x12GB | 3.7
-Tesla P100 | ``gpu-p100-16g`` | ``pascal`` | ``teslap100`` | 20 | gpu[23-27] | Pascal | 3854 | 16GB | 6.0
-Tesla V100 | ``gpu-v100-32g`` | ``volta`` | ``v100`` | 40 | gpu[1-10] | Volta | 5120 | 32GB | 7.0
-Tesla V100 | ``gpu-v100-32g`` | ``volta`` | ``v100`` | 40 | gpu[28-37] | Volta | 5120 | 32GB | 7.0
-Tesla V100 | ``gpu-v100-16g`` | ``volta`` | ``v100`` | 16 | dgx[1-2] | Volta | 5120 | 16GB | 7.0
-Tesla V100 | ``gpu-v100-32g`` | ``volta`` | ``v100`` | 16 | dgx[3-7] | Volta | 5120 | 32GB | 7.0
-Tesla A100 | ``gpu-a100-80g`` | ``ampere`` | ``a100`` | 56 | gpu[11-17,38-44] | Ampere | 7936 | 80GB | 8.0
-Tesla H100 | ``gpu-h100-80g`` | ``hopper`` | ``h100`` | 16 | gpu[45-48] | Hopper | 16896 | 80GB | 9.0
-Tesla H200 | ``gpu-h200-18g-ia`` | ``hopper`` | ``h200-18g`` | 56 | gpu[49] | Hopper | | 18GB | 9.0
-Tesla H200 | ``gpu-h200-141g`` | ``hopper`` | ``h200`` | 16 | gpu[50-51] | Hopper | | 141GB | 9.0
+NVIDIA K80\* | *Not available* | ``kepler`` | ``teslak80`` | 12 | gpu[20-22] | Kepler | 2x2496 | 2x12GB | 3.7
+NVIDIA P100 | ``gpu-p100-16g`` | ``pascal`` | ``teslap100`` | 20 | gpu[23-27] | Pascal | 3854 | 16GB | 6.0
+NVIDIA V100 | ``gpu-v100-32g`` | ``volta`` | ``v100`` | 40 | gpu[1-10] | Volta | 5120 | 32GB | 7.0
+NVIDIA V100 | ``gpu-v100-32g`` | ``volta`` | ``v100`` | 40 | gpu[28-37] | Volta | 5120 | 32GB | 7.0
+NVIDIA V100 | ``gpu-v100-16g`` | ``volta`` | ``v100`` | 16 | dgx[1-2] | Volta | 5120 | 16GB | 7.0
+NVIDIA V100 | ``gpu-v100-32g`` | ``volta`` | ``v100`` | 16 | dgx[3-7] | Volta | 5120 | 32GB | 7.0
+NVIDIA A100 | ``gpu-a100-80g`` | ``ampere`` | ``a100`` | 56 | gpu[11-17,38-44] | Ampere | 7936 | 80GB | 8.0
+NVIDIA H100 | ``gpu-h100-80g`` | ``hopper`` | ``h100`` | 16 | gpu[45-48] | Hopper | 16896 | 80GB | 9.0
+NVIDIA H200 | ``gpu-h200-18g-ia`` | ``hopper`` | ``h200-18g`` | 56 | gpu[49] | Hopper | | 18GB | 9.0
+NVIDIA H200(*) | ``gpu-h200-141g-ellis``, ``gpu-h200-141g-short`` | ``hopper`` | ``h200`` | 16 | gpu[50-51] | Hopper | | 141GB | 9.0
 AMD MI100 (testing) | *Not yet installed* | ``mi100`` | Use ``-p gpu-amd`` only, no ``--gres`` | | gpuamd[1] |
+
+(*) These GPUs have a priority queue for the ellis project, since they were procured for this project.
+Any job submitted to the short queue might be preempted if a job requiring the resources comes in from the ellis queue.
+They are not allocated automatically unless you specifically request a job on their partition.
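As a usage sketch, the partition, feature, and gres names documented in the table plug into a Slurm batch script like this (a hypothetical minimal example, not part of the commit; the resource names are taken from the table, and the time limit is arbitrary):

```shell
#!/bin/bash
# Hypothetical job script: request one V100 GPU using names from the table.
#SBATCH --partition=gpu-v100-32g   # "Slurm partition" column
#SBATCH --gres=gpu:v100:1          # "Slurm gres name" column, one GPU
#SBATCH --constraint=volta         # optional: "Slurm feature name" column
#SBATCH --time=00:10:00

# Show the allocated GPU (assumes NVIDIA driver tools on the node).
nvidia-smi
```

Note that ``--constraint`` is redundant here since the partition already pins the card type; it matters mainly when submitting to multiple partitions at once.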
