> Support for Maxwell GPUs is deprecated and will be removed in a future release. For the best results, a Turing generation device or newer is recommended.
@@ -41,7 +41,8 @@ The currently distributed `bitsandbytes` packages are built with the following c
-> Don't hesitate to compile from source! The process is pretty straight forward and resilient. This might be needed for older CUDA Toolkit versions or Linux distributions, or other less common configurations/
+> Don't hesitate to compile from source! The process is pretty straightforward and resilient. This might be needed for older CUDA Toolkit versions or Linux distributions, or other less common configurations.
For Linux and Windows systems, compiling from source allows you to customize the build configurations. See below for detailed platform-specific instructions (see the `CMakeLists.txt` if you want to check the specifics and explore some additional options):
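To make that concrete, a build and editable install typically look something like the following. This is a minimal sketch, not verbatim project instructions: `COMPUTE_BACKEND` is the option exposed in `CMakeLists.txt`, and `cuda` here assumes you are targeting NVIDIA GPUs.

```bash
# Configure a CUDA build in the current source tree, then compile and install.
# See CMakeLists.txt for the full set of supported options and backends.
cmake -DCOMPUTE_BACKEND=cuda -S .
make
pip install -e .  # `-e` only if you are developing bitsandbytes itself
```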
@@ -68,7 +69,7 @@ For example, to install a compiler and CMake on Ubuntu:
```
apt-get install -y build-essential cmake
```
-You should also install CUDA Toolkit by following the [NVIDIA CUDA Installation Guide for Linux](https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html) guide from NVIDIA. The current minimum supported CUDA Toolkit version that we test with is **11.8**.
+You should also install CUDA Toolkit by following the [NVIDIA CUDA Installation Guide for Linux](https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html) guide. The current minimum supported CUDA Toolkit version that we test with is **11.8**.
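Once the toolkit is installed, a quick sanity check is to confirm that `nvcc` is on your `PATH` and reports the version you expect:

```bash
# Should print the CUDA Toolkit release, e.g. 11.8 or newer
nvcc --version
```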
@@ -78,12 +79,12 @@ pip install -e . # `-e` for "editable" install, when developing BNB (otherwise
```
> [!TIP]
-> If you have multiple versions of CUDA installed or installed it in a non-standard location, please refer to CMake CUDA documentation for how to configure the CUDA compiler.
+> If you have multiple versions of the CUDA Toolkit installed or it is in a non-standard location, please refer to CMake CUDA documentation for how to configure the CUDA compiler.
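For example, one way to pin the build to a particular toolkit is CMake's standard `CMAKE_CUDA_COMPILER` variable. The path below is purely illustrative; substitute the location of the `nvcc` you actually want to use:

```bash
# Point CMake at a specific nvcc when several CUDA Toolkits are installed
# (example path; adjust to your own installation).
cmake -DCOMPUTE_BACKEND=cuda -DCMAKE_CUDA_COMPILER=/usr/local/cuda-11.8/bin/nvcc -S .
```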
</hfoption>
<hfoption id="Windows">
-Windows systems require Visual Studio with C++ support as well as an installation of the CUDA SDK.
+Compilation from source on Windows systems requires Visual Studio with C++ support as well as an installation of the CUDA Toolkit.
To compile from source, you need CMake >= **3.22.1** and Python >= **3.9** installed. You should also install CUDA Toolkit by following the [CUDA Installation Guide for Windows](https://docs.nvidia.com/cuda/cuda-installation-guide-microsoft-windows/index.html) guide from NVIDIA. The current minimum supported CUDA Toolkit version that we test with is **11.8**.
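With those prerequisites in place, a Windows build typically mirrors the Linux flow via CMake's Visual Studio generator. This is a sketch under those assumptions, not verbatim project instructions; run it from a Developer Command Prompt so the MSVC toolchain is found:

```bash
# Illustrative Windows build, e.g. from an "x64 Native Tools Command Prompt for VS"
cmake -DCOMPUTE_BACKEND=cuda -S .
cmake --build . --config Release
pip install -e .
```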
@@ -106,7 +107,7 @@ If you would like to use new features even before they are officially released a
<hfoptions id="OS">
<hfoption id="Linux">

-```
+```bash
# Note: if you don't want to reinstall our dependencies, append the `--no-deps` flag!
> This functionality existed as an early technical preview and is not recommended for production use. We are in the process of upstreaming improved support for AMD and Intel hardware into the main project.
We provide an early preview of support for AMD and Intel hardware as part of a development branch.
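If you want to try it, the usual pip pattern for installing straight from a development branch looks like the sketch below. The `<org>` placeholder stands in for the GitHub organization hosting the repository; the branch name is the `multi-backend-refactor` branch referenced later in this guide:

```bash
# Install directly from the development branch (replace <org> with the
# actual GitHub organization hosting the bitsandbytes repository).
pip install git+https://github.com/<org>/bitsandbytes.git@multi-backend-refactor
```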
@@ -149,7 +150,7 @@ For each supported backend, follow the respective instructions below:
To use this preview version of `bitsandbytes` with `transformers`, be sure to install:
@@ -243,7 +244,7 @@ It does not need compile CPP codes, all required ops are in [intel_extension_for
The commands below are for Linux. For installation on Windows, adapt them following the same pattern as described in [the section above on compiling from source under the Windows tab](#cuda-compile).
pip install -e . # `-e` for "editable" install, when developing BNB (otherwise leave that out)
@@ -255,7 +256,7 @@ pip install -e . # `-e` for "editable" install, when developing BNB (otherwise
Please refer to [the official Ascend installation instructions](https://www.hiascend.com/document/detail/zh/Pytorch/60RC3/configandinstg/instg/insg_0001.html) for guidance on how to install the necessary `torch_npu` dependency.
-```
+```bash
# Install bitsandbytes from source
# Clone the bitsandbytes repo; the Ascend NPU backend is currently enabled on the multi-backend-refactor branch