
Commit 21fec65

pytorchbot, atalman, and msaroufim authored
Use linux.g4dn.4xlarge.nvidia.gpu for cuda 12.4 legacy driver tests (pytorch#164172)
Use linux.g4dn.4xlarge.nvidia.gpu for cuda 12.4 legacy driver tests (pytorch#163956)

Workaround for pytorch#163658. The workflow passes on the 12.8 build, which uses linux.g4dn.4xlarge.nvidia.gpu, but it is failing on the 12.6 builds that use linux.4xlarge.nvidia.gpu: https://github.com/pytorch/pytorch/actions/runs/17953843505/job/51080623612#step:13:470

Pull Request resolved: pytorch#163956
Approved by: https://github.com/malfet

(cherry picked from commit 349c960)

Co-authored-by: atalman <[email protected]>
Co-authored-by: Mark Saroufim <[email protected]>
1 parent 22d46b5 commit 21fec65
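Note on the runner swap (an illustrative aside, not part of the commit): AWS g4dn instances carry an NVIDIA T4 GPU, whose compute capability is 7.5, which is what the new cuda-arch-list: 7.5 entry in periodic.yml targets. A minimal sanity check one could run on such a runner, assuming only that torch is installed:

# Illustrative sanity check, not part of this commit: confirm the runner's GPU
# matches the sm_75 (T4) target that the legacy_nvidia_driver jobs now build for.
import torch

assert torch.cuda.is_available(), "expected a CUDA-capable runner"
name = torch.cuda.get_device_name(0)
major, minor = torch.cuda.get_device_capability(0)
print(f"GPU: {name}, compute capability {major}.{minor}")
assert (major, minor) == (7, 5), f"expected a T4-class GPU, got {name}"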

2 files changed: +7 −8 lines


.github/workflows/periodic.yml

Lines changed: 6 additions & 5 deletions
@@ -59,13 +59,14 @@ jobs:
     runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
     build-environment: linux-jammy-cuda12.4-py3.10-gcc11
     docker-image-name: ci-image:pytorch-linux-jammy-cuda12.4-cudnn9-py3-gcc11
+    cuda-arch-list: 7.5
     test-matrix: |
       { include: [
-        { config: "legacy_nvidia_driver", shard: 1, num_shards: 5, runner: "${{ needs.get-label-type.outputs.label-type }}linux.4xlarge.nvidia.gpu" },
-        { config: "legacy_nvidia_driver", shard: 2, num_shards: 5, runner: "${{ needs.get-label-type.outputs.label-type }}linux.4xlarge.nvidia.gpu" },
-        { config: "legacy_nvidia_driver", shard: 3, num_shards: 5, runner: "${{ needs.get-label-type.outputs.label-type }}linux.4xlarge.nvidia.gpu" },
-        { config: "legacy_nvidia_driver", shard: 4, num_shards: 5, runner: "${{ needs.get-label-type.outputs.label-type }}linux.4xlarge.nvidia.gpu" },
-        { config: "legacy_nvidia_driver", shard: 5, num_shards: 5, runner: "${{ needs.get-label-type.outputs.label-type }}linux.4xlarge.nvidia.gpu" },
+        { config: "legacy_nvidia_driver", shard: 1, num_shards: 5, runner: "${{ needs.get-label-type.outputs.label-type }}linux.g4dn.4xlarge.nvidia.gpu" },
+        { config: "legacy_nvidia_driver", shard: 2, num_shards: 5, runner: "${{ needs.get-label-type.outputs.label-type }}linux.g4dn.4xlarge.nvidia.gpu" },
+        { config: "legacy_nvidia_driver", shard: 3, num_shards: 5, runner: "${{ needs.get-label-type.outputs.label-type }}linux.g4dn.4xlarge.nvidia.gpu" },
+        { config: "legacy_nvidia_driver", shard: 4, num_shards: 5, runner: "${{ needs.get-label-type.outputs.label-type }}linux.g4dn.4xlarge.nvidia.gpu" },
+        { config: "legacy_nvidia_driver", shard: 5, num_shards: 5, runner: "${{ needs.get-label-type.outputs.label-type }}linux.g4dn.4xlarge.nvidia.gpu" },
       ]}
     secrets: inherit
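The test-matrix block above is an include-style matrix: each entry becomes one shard of the legacy_nvidia_driver job on the named runner. A minimal sketch of that expansion, with a hypothetical label prefix standing in for the get-label-type output:

# Minimal sketch of how an include-style test matrix expands into per-shard jobs.
# "lf." is a hypothetical stand-in for ${{ needs.get-label-type.outputs.label-type }}.
label_prefix = "lf."
runner = f"{label_prefix}linux.g4dn.4xlarge.nvidia.gpu"

test_matrix = {
    "include": [
        {"config": "legacy_nvidia_driver", "shard": s, "num_shards": 5, "runner": runner}
        for s in range(1, 6)
    ]
}

for job in test_matrix["include"]:
    print(f"{job['config']} {job['shard']}/{job['num_shards']} -> {job['runner']}")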

test/test_cuda.py

Lines changed: 1 addition & 3 deletions
@@ -7045,9 +7045,7 @@ def test_graph_external_wait_and_record(self):
         """
         from torch.cuda import _compile_kernel

-        spin_wait_kernel = _compile_kernel(
-            kernel_source, "wait_for_cpu", compute_capability="70"
-        )
+        spin_wait_kernel = _compile_kernel(kernel_source, "wait_for_cpu")

         x = torch.ones(4, device="cuda")
         x_cpu = torch.zeros(x.shape, device="cpu").pin_memory()
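With the hardcoded compute_capability="70" removed, _compile_kernel presumably targets the current device's architecture instead of always building for sm_70. A hedged sketch of doing that derivation explicitly; the kernel_source below is a hypothetical stand-in for the one defined earlier in the test:

# Illustrative sketch only. kernel_source is a hypothetical stand-in; the real
# test defines its own source. Deriving the capability string from the current
# device mirrors what omitting compute_capability is assumed to do.
import torch
from torch.cuda import _compile_kernel  # private helper, as used in the test above

kernel_source = """
extern "C" __global__ void wait_for_cpu(volatile int* flag) {
    while (*flag == 0) { }  // spin until the CPU side sets the flag
}
"""

major, minor = torch.cuda.get_device_capability()
spin_wait_kernel = _compile_kernel(
    kernel_source, "wait_for_cpu", compute_capability=f"{major}{minor}"
)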

0 commit comments
