Commit 164858a

rraminenBLOrange-AMD authored and committed
Fix setting of memory fraction in test_garbage_collect_expandable (pytorch#164000)

Fixes pytorch#160598
Fixes pytorch#160551
Fixes pytorch#160507

This PR fixes a bug in the `test_garbage_collect_expandable` unit test where the `finally` block incorrectly re-read the current per-process memory fraction instead of restoring the original value. Without the fix, other tests in the `test/test_cuda.py` suite were affected and failed with OOM errors on ROCm. The change ensures proper cleanup and isolation of test state, maintaining test correctness and avoiding side effects such as the OOM error below.

For example, `test_autocast_checkpointing` failed on ROCm (https://github.com/pytorch/pytorch/actions/runs/17982223758/job/51153974194) with:

`torch.OutOfMemoryError: HIP out of memory. Tried to allocate 76.00 MiB. GPU 0 has a total capacity of 255.69 GiB of which 252.97 GiB is free. 1.20 GiB allowed; Of the allocated memory 1.14 GiB is allocated by PyTorch, with 17.00 MiB allocated in private pools (e.g., HIP Graphs), and 18.63 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)`

Pull Request resolved: pytorch#164000
Approved by: https://github.com/jeffdaily
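The leak described above comes from a save/restore pattern where the `finally` block re-reads the current value instead of writing back the saved one. A minimal sketch of the difference, using a hypothetical `Config` object standing in for the allocator's per-process memory fraction (not PyTorch API):

```python
class Config:
    """Hypothetical stand-in for a global setting such as the
    per-process memory fraction."""
    fraction = 1.0


def run_test_buggy():
    orig = Config.fraction       # save the original value
    try:
        Config.fraction = 0.5    # the test lowers the limit
    finally:
        # BUG: re-reads the *current* value instead of restoring `orig`,
        # so the lowered limit leaks into subsequent tests.
        orig = Config.fraction


def run_test_fixed():
    orig = Config.fraction       # save the original value
    try:
        Config.fraction = 0.5    # the test lowers the limit
    finally:
        Config.fraction = orig   # restore the saved value


run_test_buggy()
buggy_leak = Config.fraction     # lowered limit leaked: still 0.5

Config.fraction = 1.0            # reset for the second run
run_test_fixed()
fixed_value = Config.fraction    # properly restored: 1.0
```

With the buggy cleanup, every test that runs afterwards sees the reduced limit, which is how unrelated tests in the suite can hit OOM.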
1 parent 2cd73af commit 164858a

File tree

1 file changed: +1 −1


test/test_cuda.py

Lines changed: 1 addition & 1 deletion

```diff
@@ -4305,7 +4305,7 @@ def alloc(n):
                 # expandable_segment blocks can be in the free list when this is called.
                 alloc(80)
             finally:
-                orig = torch.cuda.get_per_process_memory_fraction(0)
+                torch.cuda.memory.set_per_process_memory_fraction(orig)

     def test_allocator_settings(self):
         def power2_div(size, div_factor):
```
