Welcome to the installation guide for the `bitsandbytes` library! This document provides step-by-step instructions to install `bitsandbytes` across various platforms and hardware configurations.
We provide official support for NVIDIA GPUs, CPUs, Intel XPUs, and Intel Gaudi platforms. We also have experimental support for AMD ROCm, which is currently available through our preview wheels or by compiling from source.
## Minimum Requirements[[requirements]]

These are the minimum requirements for `bitsandbytes` across all platforms. Please be aware that some compute platforms may impose stricter requirements.
* Python >= 3.9
* PyTorch >= 2.3
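
As a quick sanity check of these requirements, you can query both versions from a shell (assuming `python` points at the environment you plan to install into):

```bash
python --version                                     # expect 3.9 or newer
python -c "import torch; print(torch.__version__)"   # expect 2.3 or newer
```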
## NVIDIA CUDA[[cuda]]
`bitsandbytes` is currently supported on NVIDIA GPUs with [Compute Capability](https://developer.nvidia.com/cuda-gpus) 6.0+.
The library can be built using CUDA Toolkit versions as old as **11.8**.
The Linux build has a minimum glibc version of 2.24.
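
If you are unsure whether a machine meets these requirements, the following sketch checks the compute capability through PyTorch and the glibc version on Linux (assuming a CUDA-enabled PyTorch is already installed):

```bash
# Compute capability of GPU 0 as a (major, minor) tuple; expect (6, 0) or higher
python -c "import torch; print(torch.cuda.get_device_capability(0))"
# glibc version; expect 2.24 or newer
ldd --version | head -n 1
```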
Use `pip` or `uv` to install the latest release:
```bash
pip install bitsandbytes
```
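
To verify that the installed wheel found a compatible CUDA runtime, you can run the library's built-in diagnostics:

```bash
python -m bitsandbytes   # prints environment diagnostics and runs a short self-check
```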
### Compile from Source[[cuda-compile]]
> [!TIP]
> Don't hesitate to compile from source! The process is pretty straightforward and resilient. This might be needed for older CUDA Toolkit versions or Linux distributions, or other less common configurations.
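
As a rough sketch, the typical Linux flow looks like the following (assuming `git`, CMake, and the CUDA Toolkit are installed):

```bash
git clone https://github.com/bitsandbytes-foundation/bitsandbytes.git && cd bitsandbytes/
cmake -DCOMPUTE_BACKEND=cuda -S .   # generate build files for the CUDA backend
make                                # compile the native library
pip install -e .                    # install the Python package on top of it
```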
Big thanks to [wkpark](https://github.com/wkpark), [Jamezo97](https://github.com/Jamezo97), [rickardp](https://github.com/rickardp), [akx](https://github.com/akx) for their amazing contributions to make `bitsandbytes` compatible with Windows.
</hfoption>
</hfoptions>
## Intel XPU[[xpu]]
* A compatible PyTorch version with Intel XPU support is required. The current minimum is **PyTorch 2.6.0**. It is recommended to use the latest stable release. See [Getting Started on Intel GPU](https://docs.pytorch.org/docs/stable/notes/get_start_xpu.html) for guidance.
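
To confirm that your PyTorch build exposes the XPU device, a one-liner such as the following should print `True`:

```bash
python -c "import torch; print(torch.xpu.is_available())"
```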
### Installation via PyPI[[xpu-pip]]
This is the most straightforward and recommended installation option.
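
Assuming the standard PyPI package (the same one used for the other platforms), installation is simply:

```bash
pip install bitsandbytes
```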
## Intel Gaudi[[gaudi]]

* A compatible PyTorch version with Intel Gaudi support is required. The current minimum is **Gaudi v1.21** with **PyTorch 2.6.0**. It is recommended to use the latest stable release. See the Gaudi software [installation guide](https://docs.habana.ai/en/latest/Installation_Guide/index.html) for guidance.
Use `pip` or `uv` to install the latest release:
```bash
pip install bitsandbytes
```
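
To check that the Gaudi device is visible to PyTorch, you can query the Habana bridge; this sketch assumes the `habana_frameworks` package from the Gaudi software stack is installed:

```bash
python -c "import habana_frameworks.torch.hpu as hthpu; print(hthpu.is_available())"
```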
## CPU[[cpu]]
### Installation from PyPI[[cpu-pip]]
This is the most straightforward and recommended installation option.
The currently distributed `bitsandbytes` packages are prebuilt for a range of platform configurations. The Linux build has a minimum glibc version of 2.24.

Use `pip` or `uv` to install the latest release:
```bash
pip install bitsandbytes
```
### Compile from Source[[cpu-compile]]
To compile from source, simply install the package from a checkout of the repository using `pip`. The package will be built for CPU only at this time.
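
A minimal sketch of that flow (assuming `git` and a C++ toolchain are available):

```bash
git clone https://github.com/bitsandbytes-foundation/bitsandbytes.git && cd bitsandbytes/
pip install .   # builds and installs the CPU-only native library
```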
## AMD ROCm[[rocm]]

* A compatible PyTorch version with AMD ROCm support is required. It is recommended to use the latest stable release. See [PyTorch on ROCm](https://rocm.docs.amd.com/projects/install-on-linux/en/latest/install/3rd-party/pytorch-install.html) for guidance.
* ROCm support is currently only available in our preview wheels or when building from source.
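
A ROCm build of PyTorch reports itself through `torch.version.hip` while still exposing the familiar `torch.cuda` API, so a quick sanity check looks like this:

```bash
# Prints the HIP version and whether a ROCm device is visible
python -c "import torch; print(torch.version.hip, torch.cuda.is_available())"
```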
### Preview Wheels from `main`[[rocm-preview]]
```bash
pip install -e .  # `-e` for "editable" install, when developing BNB (otherwise leave that out)
```
## Preview Wheels[[preview-wheels]]
If you would like to use new features even before they are officially released and help us test them, feel free to install the wheel directly from our CI (*the wheel links will remain stable!*):
<hfoptions id="OS">
<hfoption id="Linux">
```bash
# Note: if you don't want to reinstall our dependencies, append the `--no-deps` flag!
```
</hfoption>
<hfoption id="Windows">
```bash
# Note: if you don't want to reinstall our dependencies, append the `--no-deps` flag!
```

</hfoption>
</hfoptions>