Commit 6ec4f0c

Changed CUDA_INSTALL variable to BNB_CUDA_INSTALL.

1 parent 8cdec88

4 files changed: +16 −17 lines

CHANGELOG.md (2 additions, 1 deletion)

```diff
@@ -264,12 +264,13 @@ Deprecated:
 
 Features:
 - Added precompiled CUDA 11.8 binaries to support H100 GPUs without compilation #571
-- CUDA SETUP now no longer looks for libcuda and libcudart and relies PyTorch CUDA libraries. To manually override this behavior see: how_to_use_nonpytorch_cuda.md.
+- CUDA SETUP now no longer looks for libcuda and libcudart and relies PyTorch CUDA libraries. To manually override this behavior see: how_to_use_nonpytorch_cuda.md. Thank you @rapsealk
 
 Bug fixes:
 - Fixed a bug where the default type of absmax was undefined which leads to errors if the default type is different than torch.float32. # 553
 - Fixed a missing scipy dependency in requirements.txt. #544
 - Fixed a bug, where a view operation could cause an error in 8-bit layers.
+- Fixed a bug where CPU bitsandbytes would during the import. #593 Thank you @bilelomrani
 
 Documentation:
 - Improved documentation for GPUs that do not support 8-bit matmul. #529
```
bitsandbytes/cuda_setup/main.py (10 additions, 10 deletions)

```diff
@@ -101,17 +101,17 @@ def initialize(self):
 
     def manual_override(self):
         if torch.cuda.is_available():
-            if 'CUDA_VERSION' in os.environ:
-                if len(os.environ['CUDA_VERSION']) > 0:
+            if 'BNB_CUDA_VERSION' in os.environ:
+                if len(os.environ['BNB_CUDA_VERSION']) > 0:
                     warn((f'\n\n{"="*80}\n'
-                          'WARNING: Manual override via CUDA_VERSION env variable detected!\n'
-                          'CUDA_VERSION=XXX can be used to load a bitsandbytes version that is different from the PyTorch CUDA version.\n'
-                          'If this was unintended set the CUDA_VERSION variable to an empty string: export CUDA_VERSION=\n'
+                          'WARNING: Manual override via BNB_CUDA_VERSION env variable detected!\n'
+                          'BNB_CUDA_VERSION=XXX can be used to load a bitsandbytes version that is different from the PyTorch CUDA version.\n'
+                          'If this was unintended set the BNB_CUDA_VERSION variable to an empty string: export BNB_CUDA_VERSION=\n'
                           'If you use the manual override make sure the right libcudart.so is in your LD_LIBRARY_PATH\n'
                           'For example by adding the following to your .bashrc: export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:<path_to_cuda_dir/lib64\n'
-                          f'Loading CUDA version: CUDA_VERSION={os.environ["CUDA_VERSION"]}'
+                          f'Loading CUDA version: BNB_CUDA_VERSION={os.environ["BNB_CUDA_VERSION"]}'
                           f'\n{"="*80}\n\n'))
-                    self.binary_name = self.binary_name[:-6] + f'{os.environ["CUDA_VERSION"]}.so'
+                    self.binary_name = self.binary_name[:-6] + f'{os.environ["BNB_CUDA_VERSION"]}.so'
 
     def run_cuda_setup(self):
         self.initialized = True
@@ -237,10 +237,10 @@ def warn_in_case_of_duplicates(results_paths: Set[Path]) -> None:
             f"Found duplicate {CUDA_RUNTIME_LIBS} files: {results_paths}.. "
             "We select the PyTorch default libcudart.so, which is {torch.version.cuda},"
             "but this might missmatch with the CUDA version that is needed for bitsandbytes."
-            "To override this behavior set the CUDA_VERSION=<version string, e.g. 122> environmental variable"
+            "To override this behavior set the BNB_CUDA_VERSION=<version string, e.g. 122> environmental variable"
             "For example, if you want to use the CUDA version 122"
-            "CUDA_VERSION=122 python ..."
-            "OR set the environmental variable in your .bashrc: export CUDA_VERSION=122"
+            "BNB_CUDA_VERSION=122 python ..."
+            "OR set the environmental variable in your .bashrc: export BNB_CUDA_VERSION=122"
             "In the case of a manual override, make sure you set the LD_LIBRARY_PATH, e.g."
             "export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-11.2")
         CUDASetup.get_instance().add_log_entry(warning_msg, is_warning=True)
```
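The core of `manual_override` is a suffix swap: the loaded binary is named `libbitsandbytes_cudaXXX.so`, so the last six characters (`XXX.so`) are replaced with the requested version. A minimal standalone sketch of that logic (the helper name `override_binary_name` is hypothetical, not part of bitsandbytes):

```python
import os

def override_binary_name(binary_name: str) -> str:
    """Sketch of the override: if BNB_CUDA_VERSION is set and non-empty,
    swap the trailing 'XXX.so' (six characters) of a name such as
    'libbitsandbytes_cuda117.so' for the requested version."""
    version = os.environ.get('BNB_CUDA_VERSION', '')
    if len(version) > 0:
        # '117.so' is 6 characters, so binary_name[:-6] keeps the prefix
        # 'libbitsandbytes_cuda' and the new version string is appended.
        return binary_name[:-6] + f'{version}.so'
    return binary_name
```

Note that the real method only runs when `torch.cuda.is_available()` is true; the sketch skips that check so it can be exercised without a GPU.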

how_to_use_nonpytorch_cuda.md (3 additions, 5 deletions)

````diff
@@ -24,21 +24,19 @@ wget https://raw.githubusercontent.com/TimDettmers/bitsandbytes/main/cuda_instal
 bash cuda install 117 ~/local 1
 ```
 
-## Setting the environmental variables CUDA_HOME, CUDA_VERSION, and LD_LIBRARY_PATH
+## Setting the environmental variables BNB_CUDA_VERSION, and LD_LIBRARY_PATH
 
 To manually override the PyTorch installed CUDA version you need to set to variable, like so:
 
 ```bash
-export CUDA_HOME=<PATH>
-export CUDA_VERSION=<VERSION>
+export BNB_CUDA_VERSION=<VERSION>
 export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:<PATH>
 ```
 
 For example, to use the local install path from above:
 
 ```bash
-export CUDA_HOME=/home/tim/local/cuda-11.7
-export CUDA_VERSION=117
+export BNB_CUDA_VERSION=117
 export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/tim/local/cuda-11.7
 ```
````
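The warning text in main.py also shows a per-invocation form (`BNB_CUDA_VERSION=122 python ...`) that avoids touching `.bashrc`. A small sketch of that pattern, using an environment-variable echo instead of an actual bitsandbytes import (the version string `122` is just an example):

```shell
# Set the override only for this one command; the variable is visible to
# the child python process but does not persist in the calling shell.
BNB_CUDA_VERSION=122 python3 -c 'import os; print(os.environ["BNB_CUDA_VERSION"])'
```

For a persistent override, use the `export` lines from how_to_use_nonpytorch_cuda.md above, and remember to also extend `LD_LIBRARY_PATH` so the matching `libcudart.so` can be found.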
setup.py (1 addition, 1 deletion)

```diff
@@ -18,7 +18,7 @@ def read(fname):
 
 setup(
     name=f"bitsandbytes",
-    version=f"0.40.1",
+    version=f"0.40.1.post1",
     author="Tim Dettmers",
     author_email="[email protected]",
     description="k-bit optimizers and matrix multiplication routines.",
```
0 commit comments