Commit 8c9ef4c

Make CUDA and GPU checks optional, to enable compilation on non-GPU systems
Signed-off-by: Santi Villalba <sdvillal@gmail.com>
1 parent da854d7 commit 8c9ef4c

2 files changed, 15 insertions(+), 12 deletions(-)
docs/_tutorials/ds4sci_evoformerattention.md

Lines changed: 1 addition & 0 deletions
```diff
@@ -27,6 +27,7 @@ export CUTLASS_PATH=/path/to/cutlass
 The kernels will be compiled when `DS4Sci_EvoformerAttention` is called for the first time.
 
 `DS4Sci_EvoformerAttention` requires GPUs with compute capability 7.0 or higher (NVIDIA V100 or later GPUs) and the minimum CUDA version is 11.3. It is recommended to use CUDA 11.7 or later for better performance. Also, the performance of the backward kernel on V100 is not as good as that on A100 for now.
+The extension checks both requirements and fails if either is not met. To disable the check, for example when cross-compiling on a system without GPUs, you can set the environment variable ```DS_IGNORE_CUDA_DETECTION=TRUE```.
 
 ### 3.2 Unit test and benchmark
 
```
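For quick reference, a minimal sketch of the new opt-out from the user side (an editor's illustration, not part of the commit): the builder change below only checks whether the variable is set to a non-empty value in the environment of the process that triggers compilation.

```python
import os

# Pre-building on a host without GPUs: set the variable before DeepSpeed's
# op builders run their compatibility checks. Shell equivalent of the
# tutorial text above: export DS_IGNORE_CUDA_DETECTION=TRUE
os.environ["DS_IGNORE_CUDA_DETECTION"] = "TRUE"
```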

op_builder/evoformer_attn.py

Lines changed: 14 additions & 12 deletions
```diff
@@ -57,19 +57,21 @@ def is_compatible(self, verbose=False):
                 self.warning("Please use CUTLASS version >= 3.1.0")
                 return False
 
+        # Check CUDA and GPU capabilities
         cuda_okay = True
-        if not self.is_rocm_pytorch() and torch.cuda.is_available():  #ignore-cuda
-            sys_cuda_major, _ = installed_cuda_version()
-            torch_cuda_major = int(torch.version.cuda.split('.')[0])
-            cuda_capability = torch.cuda.get_device_properties(0).major  #ignore-cuda
-            if cuda_capability < 7:
-                if verbose:
-                    self.warning("Please use a GPU with compute capability >= 7.0")
-                cuda_okay = False
-            if torch_cuda_major < 11 or sys_cuda_major < 11:
-                if verbose:
-                    self.warning("Please use CUDA 11+")
-                cuda_okay = False
+        if not os.environ.get("DS_IGNORE_CUDA_DETECTION"):
+            if not self.is_rocm_pytorch() and torch.cuda.is_available():  #ignore-cuda
+                sys_cuda_major, _ = installed_cuda_version()
+                torch_cuda_major = int(torch.version.cuda.split('.')[0])
+                cuda_capability = torch.cuda.get_device_properties(0).major  #ignore-cuda
+                if cuda_capability < 7:
+                    if verbose:
+                        self.warning("Please use a GPU with compute capability >= 7.0")
+                    cuda_okay = False
+                if torch_cuda_major < 11 or sys_cuda_major < 11:
+                    if verbose:
+                        self.warning("Please use CUDA 11+")
+                    cuda_okay = False
         return super().is_compatible(verbose) and cuda_okay
 
     def include_paths(self):
```
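To make the new control flow easy to follow outside the builder class, here is a self-contained sketch of the same gating (an editor's illustration, not DeepSpeed's API: `cuda_checks_pass` and its explicit parameters stand in for `self.warning`, `self.is_rocm_pytorch()`, and the torch/CUDA probes used above).

```python
import os

def cuda_checks_pass(cuda_available, rocm, sys_cuda_major, torch_cuda_major,
                     cuda_capability, verbose=True):
    """Stand-alone sketch of the gated check introduced by this commit."""
    def warn(msg):
        if verbose:
            print(msg)

    cuda_okay = True
    # Any non-empty value of DS_IGNORE_CUDA_DETECTION skips the detection,
    # mirroring `if not os.environ.get(...)` in the diff above.
    if not os.environ.get("DS_IGNORE_CUDA_DETECTION"):
        if not rocm and cuda_available:
            if cuda_capability < 7:
                warn("Please use a GPU with compute capability >= 7.0")
                cuda_okay = False
            if torch_cuda_major < 11 or sys_cuda_major < 11:
                warn("Please use CUDA 11+")
                cuda_okay = False
    return cuda_okay

# Inputs that would normally fail (capability 6, CUDA 10) pass once the
# variable is set, which is the behaviour this commit enables:
os.environ["DS_IGNORE_CUDA_DETECTION"] = "TRUE"
assert cuda_checks_pass(cuda_available=True, rocm=False, sys_cuda_major=10,
                        torch_cuda_major=10, cuda_capability=6)
```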

0 commit comments

Comments
 (0)