Commit da6e705

[Test][XPU] Added gpu cache cleaning for XPU devices (#917)
## Summary

Enable GPU cache cleaning for XPU devices.

## Details

I discovered this issue while working on intel/intel-xpu-backend-for-triton#5292. Testing on XPU was hanging indefinitely, but the hang was not caused by any single test; it occurred only when many tests were run sequentially. In my local experiments on an Intel Data Center GPU Max 1100, this patch resolves the hang. The change is small and unlikely to cause problems for CUDA.

## Testing Done

`pytest test` on an Intel Data Center GPU Max 1100

- Hardware Type: Intel Data Center GPU Max 1100
- [x] run `make test` to ensure correctness
- [x] run `make checkstyle` to ensure code style
- [x] run `make test-convergence` to ensure convergence
1 parent c7111b4 commit da6e705

File tree

1 file changed: +5 −2 lines

test/conftest.py

Lines changed: 5 additions & 2 deletions

```diff
@@ -3,6 +3,9 @@


 @pytest.fixture(autouse=True)
-def clear_cuda_cache():
+def clear_gpu_cache():
     yield
-    torch.cuda.empty_cache()
+    if torch.cuda.is_available():
+        torch.cuda.empty_cache()
+    elif torch.xpu.is_available():
+        torch.xpu.empty_cache()
```
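The pattern in this fixture can be sketched as a small device-agnostic helper: check each backend's `is_available()` and call that backend's `empty_cache()`. The helper name `empty_gpu_cache` and the `SimpleNamespace` stub below are hypothetical, used here so the sketch runs without `torch` or a GPU; in real code you would pass the `torch` module itself.

```python
# Minimal sketch of the commit's branching logic, assuming a torch-like
# module that exposes .cuda and .xpu namespaces with is_available() and
# empty_cache(). FakeTorch-style stubs stand in for a real XPU machine.
from types import SimpleNamespace


def empty_gpu_cache(torch_mod):
    """Release cached allocator memory on whichever GPU backend is present."""
    if torch_mod.cuda.is_available():
        torch_mod.cuda.empty_cache()
    elif torch_mod.xpu.is_available():
        torch_mod.xpu.empty_cache()


# Stub simulating an XPU-only host: CUDA absent, XPU present.
calls = []
fake_torch = SimpleNamespace(
    cuda=SimpleNamespace(is_available=lambda: False,
                         empty_cache=lambda: calls.append("cuda")),
    xpu=SimpleNamespace(is_available=lambda: True,
                        empty_cache=lambda: calls.append("xpu")),
)
empty_gpu_cache(fake_torch)
print(calls)  # only the XPU branch fires on this stub
```

Because the fixture is `autouse=True` and yields before cleanup, the cache release runs after every test in the suite, which is what keeps long sequential runs from accumulating device memory.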

0 commit comments
