
Commit 8a6e9a8

cyyever authored and janeyx99 committed
Let PYTORCH_NO_CUDA_MEMORY_CACHING have an effect only when its value is 1 (pytorch#145905)
Fixes pytorch#145661
Pull Request resolved: pytorch#145905
Approved by: https://github.com/eqy, https://github.com/janeyx99
Co-authored-by: Jane (Yuan) Xu <[email protected]>
1 parent 58cc669 commit 8a6e9a8

File tree

1 file changed: +4 −4 lines changed


c10/cuda/CUDACachingAllocator.cpp

Lines changed: 4 additions & 4 deletions
@@ -3262,10 +3262,10 @@ class DeviceCachingAllocator {
   static bool forceUncachedAllocator() {
     // Allow either CUDA or HIP name for env var for maximum user comfort
     // the CUDA env var avoids being hipified in cuda_to_hip_mappings.py
-    static bool has_cuda_env =
-        c10::utils::has_env("PYTORCH_NO_CUDA_MEMORY_CACHING");
-    static bool has_rocm_env =
-        c10::utils::has_env("PYTORCH_NO_HIP_MEMORY_CACHING");
+    static auto has_cuda_env =
+        c10::utils::check_env("PYTORCH_NO_CUDA_MEMORY_CACHING") == true;
+    static auto has_rocm_env =
+        c10::utils::check_env("PYTORCH_NO_HIP_MEMORY_CACHING") == true;
     static bool force_uncached = has_cuda_env || has_rocm_env;
     return force_uncached;
   }
