Commit 90f5419

Documentation updates for v0.48.0 (#1770)
* Update installation docs
* Update links
* Fix cuda min glibc in doc
* Update header levels
* Update AMD section
* typo
1 parent b8d1c26 commit 90f5419

File tree

1 file changed (+101, -121 lines)


docs/source/installation.mdx

Lines changed: 101 additions & 121 deletions
@@ -1,20 +1,34 @@
 # Installation Guide
 
-Welcome to the installation guide for the `bitsandbytes` library! This document provides step-by-step instructions to install `bitsandbytes` across various platforms and hardware configurations. The library primarily supports CUDA-based GPUs, but the team is actively working on enabling support for additional backends like CPU, AMD ROCm, Intel XPU, and Gaudi HPU.
+Welcome to the installation guide for the `bitsandbytes` library! This document provides step-by-step instructions to install `bitsandbytes` across various platforms and hardware configurations.
+
+We provide official support for NVIDIA GPUs, CPUs, Intel XPUs, and Intel Gaudi platforms. We also have experimental support for
+additional platforms such as AMD ROCm.
 
 ## Table of Contents
 
-- [CUDA](#cuda)
+- [System Requirements](#requirements)
+- [NVIDIA CUDA](#cuda)
   - [Installation via PyPI](#cuda-pip)
   - [Compile from Source](#cuda-compile)
-  - [Preview Wheels from `main`](#cuda-preview)
-- [Multi-Backend Preview](#multi-backend)
-  - [Supported Backends](#multi-backend-supported-backends)
-  - [Pre-requisites](#multi-backend-pre-requisites)
-  - [Installation](#multi-backend-pip)
-  - [Compile from Source](#multi-backend-compile)
+- [Intel XPU](#xpu)
+  - [Installation via PyPI](#xpu-pip)
+- [Intel Gaudi](#gaudi)
+  - [Installation via PyPI](#gaudi-pip)
+- [CPU](#cpu)
+  - [Installation via PyPI](#cpu-pip)
+  - [Compile from Source](#cpu-compile)
+- [AMD ROCm (Preview)](#rocm)
+- [Preview Wheels](#preview-wheels)
+
+## System Requirements[[requirements]]
+
+These are the minimum requirements for `bitsandbytes` across all platforms. Please be aware that some compute platforms may impose stricter requirements.
 
-## CUDA[[cuda]]
+* Python >= 3.9
+* PyTorch >= 2.3
+
+## NVIDIA CUDA[[cuda]]
 
 `bitsandbytes` is currently supported on NVIDIA GPUs with [Compute Capability](https://developer.nvidia.com/cuda-gpus) 6.0+.
 The library can be built using CUDA Toolkit versions as old as **11.8**.
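An editorial aside on the new System Requirements bullets above: the two documented floors (Python >= 3.9, PyTorch >= 2.3) reduce to a tuple comparison. A minimal sketch; `parse_version` and `meets_requirements` are illustrative helper names, not part of bitsandbytes:

```python
import sys

# Documented platform-wide minimums (Python >= 3.9, PyTorch >= 2.3).
MIN_PYTHON = (3, 9)
MIN_TORCH = (2, 3)

def parse_version(version: str) -> tuple:
    """Reduce a version string like '2.3.1+cu121' to a comparable (major, minor) tuple."""
    core = version.split("+")[0]  # drop local build tags such as '+cu121'
    return tuple(int(part) for part in core.split(".")[:2])

def meets_requirements(python_version: tuple, torch_version: str) -> bool:
    """Check an interpreter version and a torch version string against the floors."""
    return python_version >= MIN_PYTHON and parse_version(torch_version) >= MIN_TORCH

# Example: check the running interpreter against a given torch version string.
print(meets_requirements(sys.version_info[:2], "2.3.1+cu121"))
```

In practice one would pass `torch.__version__` as the second argument; a literal string is used here so the sketch runs without PyTorch installed.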
@@ -25,11 +39,13 @@ The library can be built using CUDA Toolkit versions as old as **11.8**.
 | 8-bit optimizers/quantization | 6.0+ | Pascal (GTX 10X0 series, P100) or newer GPUs |
 | NF4/FP4 quantization | 6.0+ | Pascal (GTX 10X0 series, P100) or newer GPUs |
 
+
 > [!WARNING]
 > Support for Maxwell GPUs is deprecated and will be removed in a future release.
 > Maxwell support is not included in PyPI distributions from `v0.48.0` on and must be built from source.
 > For the best results, a Turing generation device or newer is recommended.
+
 ### Installation via PyPI[[cuda-pip]]
 
 This is the most straightforward and recommended installation option.
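The capability floor and the Maxwell deprecation discussed above reduce to a comparison on the (major, minor) compute capability. A sketch under that reading; the helper names are illustrative, not bitsandbytes APIs:

```python
# Compute Capability 6.0+ is required; Maxwell (CC 5.x) is deprecated and
# absent from PyPI wheels as of v0.48.0.
MIN_CAPABILITY = (6, 0)

def sm_target(capability: tuple) -> str:
    """Render a compute capability such as (8, 6) as its sm tag, e.g. 'sm86'."""
    major, minor = capability
    return f"sm{major}{minor}"

def is_supported_by_pypi_wheels(capability: tuple) -> bool:
    """True when the device meets the documented 6.0+ floor."""
    return capability >= MIN_CAPABILITY

# Example: a GTX 16xx/RTX 20xx-class device (Turing, CC 7.5).
print(sm_target((7, 5)), is_supported_by_pypi_wheels((7, 5)))
```

On a real system the capability would come from `torch.cuda.get_device_capability()`; a literal tuple keeps the sketch self-contained.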
@@ -40,20 +56,22 @@ The currently distributed `bitsandbytes` packages are built with the following c
 |--------------------|------------------|----------------------|--------------
 | **Linux x86-64** | 11.8 - 12.6 | GCC 11.2 | sm60, sm70, sm75, sm80, sm86, sm89, sm90
 | **Linux x86-64** | 12.8 - 12.9 | GCC 11.2 | sm70, sm75, sm80, sm86, sm89, sm90, sm100, sm120
-| **Linux x86-64** | 13.0 | GCC 11.2 | sm75, sm80, sm86, sm89, sm90, sm100, sm120
+| **Linux x86-64** | 13.0 | GCC 11.2 | sm75, sm80, sm86, sm89, sm90, sm100, sm110, sm120
 | **Linux aarch64** | 11.8 - 12.6 | GCC 11.2 | sm75, sm80, sm90
 | **Linux aarch64** | 12.8 - 13.0 | GCC 11.2 | sm75, sm80, sm90, sm100, sm120
 | **Windows x86-64** | 11.8 - 12.6 | MSVC 19.43+ (VS2022) | sm50, sm60, sm75, sm80, sm86, sm89, sm90
 | **Windows x86-64** | 12.8 - 12.9 | MSVC 19.43+ (VS2022) | sm70, sm75, sm80, sm86, sm89, sm90, sm100, sm120
 | **Windows x86-64** | 13.0 | MSVC 19.43+ (VS2022) | sm75, sm80, sm86, sm89, sm90, sm100, sm120
 
-Use `pip` or `uv` to install:
+The Linux build has a minimum glibc version of 2.24.
+
+Use `pip` or `uv` to install the latest release:
 
 ```bash
 pip install bitsandbytes
 ```
 
-### Compile from source[[cuda-compile]]
+### Compile from Source[[cuda-compile]]
 
 > [!TIP]
 > Don't hesitate to compile from source! The process is pretty straightforward and resilient. This might be needed for older CUDA Toolkit versions or Linux distributions, or other less common configurations.
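The glibc floor added in this hunk (2.24 for the CUDA wheels; the XPU wheel later in the diff requires 2.34) can be checked with the standard library. A hedged sketch; `glibc_at_least` is a hypothetical helper, not a bitsandbytes API:

```python
import platform

def glibc_at_least(required: tuple, version: str) -> bool:
    """Compare a glibc version string like '2.35' against a (major, minor) floor."""
    major, minor = (int(part) for part in version.split(".")[:2])
    return (major, minor) >= required

# platform.libc_ver() returns ('glibc', '2.35') on glibc systems and
# ('', '') elsewhere (e.g. musl, Windows), so guard before comparing.
lib, ver = platform.libc_ver()
if lib == "glibc":
    print("meets CUDA wheel floor (2.24):", glibc_at_least((2, 24), ver))
```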
@@ -102,131 +120,102 @@ Big thanks to [wkpark](https://github.com/wkpark), [Jamezo97](https://github.com
 </hfoption>
 </hfoptions>
 
-### Preview Wheels from `main`[[cuda-preview]]
+## Intel XPU[[xpu]]
 
-If you would like to use new features even before they are officially released and help us test them, feel free to install the wheel directly from our CI (*the wheel links will remain stable!*):
+* A compatible PyTorch version with Intel XPU support is required. The current minimum is **PyTorch 2.6.0**. It is recommended to use the latest stable release. See [Getting Started on Intel GPU](https://docs.pytorch.org/docs/stable/notes/get_start_xpu.html) for guidance.
 
-<hfoptions id="OS">
-<hfoption id="Linux">
+### Installation via PyPI[[xpu-pip]]
 
-```bash
-# Note: if you don't want to reinstall our dependencies, append the `--no-deps` flag!
+This is the most straightforward and recommended installation option.
 
-# x86_64 (most users)
-pip install --force-reinstall https://github.com/bitsandbytes-foundation/bitsandbytes/releases/download/continuous-release_main/bitsandbytes-1.33.7.preview-py3-none-manylinux_2_24_x86_64.whl
+The currently distributed `bitsandbytes` packages are built with the following configurations:
 
-# ARM/aarch64
-pip install --force-reinstall https://github.com/bitsandbytes-foundation/bitsandbytes/releases/download/continuous-release_main/bitsandbytes-1.33.7.preview-py3-none-manylinux_2_24_aarch64.whl
-```
+| **OS**             | **oneAPI Toolkit** | **Kernel Implementation** |
+|--------------------|--------------------|---------------------------|
+| **Linux x86-64**   | 2025.1.3           | SYCL + Triton             |
+| **Windows x86-64** | N/A                | SYCL                      |
 
-</hfoption>
-<hfoption id="Windows">
+The Linux build has a minimum glibc version of 2.34.
+
+Use `pip` or `uv` to install the latest release:
 
 ```bash
-# Note: if you don't want to reinstall our dependencies, append the `--no-deps` flag!
-pip install --force-reinstall https://github.com/bitsandbytes-foundation/bitsandbytes/releases/download/continuous-release_main/bitsandbytes-1.33.7.preview-py3-none-win_amd64.whl
+pip install bitsandbytes
 ```
-</hfoption>
-</hfoptions>
 
+## Intel Gaudi[[gaudi]]
 
-## Multi-Backend Preview[[multi-backend]]
+* A compatible PyTorch version with Intel Gaudi support is required. The current minimum is **Gaudi v1.21** with **PyTorch 2.6.0**. It is recommended to use the latest stable release. See the Gaudi software [installation guide](https://docs.habana.ai/en/latest/Installation_Guide/index.html) for guidance.
 
-> [!WARNING]
-> This functionality existed as an early technical preview and is not recommended for production use. We are in the process of upstreaming improved support for AMD and Intel hardware into the main project.
 
-We provide an early preview of support for AMD and Intel hardware as part of a development branch.
+### Installation from PyPI[[gaudi-pip]]
 
-### Supported Backends[[multi-backend-supported-backends]]
-
-| **Backend** | **Supported Versions** | **Python versions** | **Architecture Support** | **Status** |
-|-------------|------------------------|---------------------|--------------------------|------------|
-| **AMD ROCm** | 6.1+ | 3.10+ | minimum CDNA - `gfx90a`, RDNA - `gfx1100` | Alpha |
-| **Intel CPU** | v2.4.0+ | 3.10+ | Intel CPU | Alpha |
-| **Intel GPU** | v2.7.0+ | 3.10+ | Intel GPU | Experimental |
-| **Ascend NPU** | 2.1.0+ (`torch_npu`) | 3.10+ | Ascend NPU | Experimental |
-
-For each supported backend, follow the respective instructions below:
-
-### Pre-requisites[[multi-backend-pre-requisites]]
-
-To use this preview version of `bitsandbytes` with `transformers`, be sure to install:
+Use `pip` or `uv` to install the latest release:
 
 ```bash
-pip install "transformers>=4.45.1"
+pip install bitsandbytes
 ```
 
-<hfoptions id="backend">
-<hfoption id="AMD ROCm">
+## CPU[[cpu]]
 
-> [!WARNING]
-> Pre-compiled binaries are only built for ROCm versions `6.1.2`/`6.2.4`/`6.3.2` and `gfx90a`, `gfx942`, `gfx1100` GPU architectures. [Find the pip install instructions here](#multi-backend-pip).
->
-> Other supported versions that don't come with pre-compiled binaries [can be compiled for with these instructions](#multi-backend-compile).
->
-> **Windows is not supported for the ROCm backend**
+### Installation from PyPI[[cpu-pip]]
 
-> [!TIP]
-> If you would like to install ROCm and PyTorch on bare metal, skip the Docker steps and refer to ROCm's official guides at [ROCm installation overview](https://rocm.docs.amd.com/projects/install-on-linux/en/latest/tutorial/install-overview.html#rocm-install-overview) and [Installing PyTorch for ROCm](https://rocm.docs.amd.com/projects/install-on-linux/en/latest/how-to/3rd-party/pytorch-install.html#using-wheels-package) (Step 3 of wheels build for quick installation). Special note: please make sure to get the respective ROCm-specific PyTorch wheel for the installed ROCm version, e.g. `https://download.pytorch.org/whl/nightly/rocm6.2/`!
+This is the most straightforward and recommended installation option.
 
-```bash
-# Create a docker container with the ROCm image, which includes ROCm libraries
-docker pull rocm/dev-ubuntu-22.04:6.3.4-complete
-docker run -it --device=/dev/kfd --device=/dev/dri --group-add video rocm/dev-ubuntu-22.04:6.3.4-complete
-apt-get update && apt-get install -y git && cd home
+The currently distributed `bitsandbytes` packages are built with the following configurations:
 
-# Install pytorch compatible with above ROCm version
-pip install torch --index-url https://download.pytorch.org/whl/rocm6.3/
-```
+| **OS**             | **Host Compiler**    | **Hardware Minimum** |
+|--------------------|----------------------|----------------------|
+| **Linux x86-64**   | GCC 11.4             | AVX2                 |
+| **Linux aarch64**  | GCC 11.4             |                      |
+| **Windows x86-64** | MSVC 19.43+ (VS2022) | AVX2                 |
 
-</hfoption>
-<hfoption id="Intel XPU">
+The Linux build has a minimum glibc version of 2.24.
 
-* A compatible PyTorch version with Intel XPU support is required. It is recommended to use the latest stable release. See [Getting Started on Intel GPU](https://docs.pytorch.org/docs/stable/notes/get_start_xpu.html) for guidance.
+Use `pip` or `uv` to install the latest release:
 
-</hfoption>
-</hfoptions>
+```bash
+pip install bitsandbytes
+```
 
-### Installation
+### Compile from Source[[cpu-compile]]
 
-You can install the pre-built wheels for each backend, or compile from source for custom configurations.
+To compile from source, simply install the package from source using `pip`. The package will be built for CPU only at this time.
 
-#### Pre-built Wheel Installation (recommended)[[multi-backend-pip]]
+```bash
+git clone https://github.com/bitsandbytes-foundation/bitsandbytes.git && cd bitsandbytes/
+pip install -e .
+```
 
-<hfoptions id="platform">
-<hfoption id="Linux">
-This wheel provides support for ROCm and Intel XPU platforms.
+## AMD ROCm (Preview)[[rocm]]
 
-```
-# Note, if you don't want to reinstall our dependencies, append the `--no-deps` flag!
-pip install --force-reinstall 'https://github.com/bitsandbytes-foundation/bitsandbytes/releases/download/continuous-release_multi-backend-refactor/bitsandbytes-0.44.1.dev0-py3-none-manylinux_2_24_x86_64.whl'
-```
+* A compatible PyTorch version with AMD ROCm support is required. It is recommended to use the latest stable release. See [PyTorch on ROCm](https://rocm.docs.amd.com/projects/install-on-linux/en/latest/install/3rd-party/pytorch-install.html) for guidance.
+* ROCm support is currently only available in our preview wheels or when building from source.
 
-</hfoption>
-<hfoption id="Windows">
-This wheel provides support for the Intel XPU platform.
+### Preview Wheels from `main`[[rocm-preview]]
 
-```bash
-# Note, if you don't want to reinstall our dependencies, append the `--no-deps` flag!
-pip install --force-reinstall 'https://github.com/bitsandbytes-foundation/bitsandbytes/releases/download/continuous-release_multi-backend-refactor/bitsandbytes-0.44.1.dev0-py3-none-win_amd64.whl'
-```
+The currently distributed preview `bitsandbytes` wheels are built with the following configurations:
 
-</hfoption>
-</hfoptions>
+| **OS**             | **ROCm** | **Targets**               |
+|--------------------|----------|---------------------------|
+| **Linux x86-64**   | 6.1.2    | gfx90a / gfx942 / gfx1100 |
+| **Linux x86-64**   | 6.2.4    | gfx90a / gfx942 / gfx1100 |
+| **Linux x86-64**   | 6.3.4    | gfx90a / gfx942 / gfx1100 |
+| **Linux x86-64**   | 6.4.4    | gfx90a / gfx942 / gfx1100 |
+| **Linux x86-64**   | 7.0.0    | gfx90a / gfx942 / gfx1100 |
 
-#### Compile from Source[[multi-backend-compile]]
+**Windows is not currently supported.**
 
-<hfoptions id="backend">
-<hfoption id="AMD ROCm">
+Please see [Preview Wheels](#preview-wheels) for installation instructions.
 
-#### AMD GPU
+### Compile from Source[[rocm-compile]]
 
-bitsandbytes is supported from ROCm 6.1 - ROCm 6.4.
+bitsandbytes can be compiled for ROCm 6.1 through ROCm 7.0.
 
 ```bash
 # Install bitsandbytes from source
-# Clone bitsandbytes repo, ROCm backend is currently enabled on multi-backend-refactor branch
-git clone -b multi-backend-refactor https://github.com/bitsandbytes-foundation/bitsandbytes.git && cd bitsandbytes/
+# Clone bitsandbytes repo
+git clone https://github.com/bitsandbytes-foundation/bitsandbytes.git && cd bitsandbytes/
 
 # Compile & install
 apt-get install -y build-essential cmake # install build tools dependencies, unless present
@@ -235,38 +224,29 @@ make
 pip install -e . # `-e` for "editable" install, when developing BNB (otherwise leave that out)
 ```
 
-</hfoption>
-<hfoption id="Intel CPU + GPU">
+## Preview Wheels[[preview-wheels]]
 
-#### Intel CPU + GPU(XPU)
+If you would like to use new features even before they are officially released and help us test them, feel free to install the wheel directly from our CI (*the wheel links will remain stable!*):
 
-CPU needs to build CPU C++ codes, while XPU needs to build sycl codes.
-Run `export bnb_device=xpu` if you are using xpu, run `export bnb_device=cpu` if you are using cpu.
-```
-git clone https://github.com/bitsandbytes-foundation/bitsandbytes.git && cd bitsandbytes/
-cmake -DCOMPUTE_BACKEND=$bnb_device -S .
-make
-pip install -e .
-```
+<hfoptions id="OS">
+<hfoption id="Linux">
 
+```bash
+# Note: if you don't want to reinstall our dependencies, append the `--no-deps` flag!
 
-</hfoption>
-<hfoption id="Ascend NPU">
+# x86_64 (most users)
+pip install --force-reinstall https://github.com/bitsandbytes-foundation/bitsandbytes/releases/download/continuous-release_main/bitsandbytes-1.33.7.preview-py3-none-manylinux_2_24_x86_64.whl
 
-#### Ascend NPU
+# ARM/aarch64
+pip install --force-reinstall https://github.com/bitsandbytes-foundation/bitsandbytes/releases/download/continuous-release_main/bitsandbytes-1.33.7.preview-py3-none-manylinux_2_24_aarch64.whl
+```
 
-Please refer to [the official Ascend installations instructions](https://www.hiascend.com/document/detail/zh/Pytorch/60RC3/configandinstg/instg/insg_0001.html) for guidance on how to install the necessary `torch_npu` dependency.
+</hfoption>
+<hfoption id="Windows">
 
 ```bash
-# Install bitsandbytes from source
-# Clone bitsandbytes repo, Ascend NPU backend is currently enabled on multi-backend-refactor branch
-git clone -b multi-backend-refactor https://github.com/bitsandbytes-foundation/bitsandbytes.git && cd bitsandbytes/
-
-# Compile & install
-apt-get install -y build-essential cmake # install build tools dependencies, unless present
-cmake -DCOMPUTE_BACKEND=npu -S .
-make
-pip install -e . # `-e` for "editable" install, when developing BNB (otherwise leave that out)
+# Note: if you don't want to reinstall our dependencies, append the `--no-deps` flag!
+pip install --force-reinstall https://github.com/bitsandbytes-foundation/bitsandbytes/releases/download/continuous-release_main/bitsandbytes-1.33.7.preview-py3-none-win_amd64.whl
 ```
 </hfoption>
 </hfoptions>
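The three preview-wheel URLs in the final hunk differ only in their platform tag, so choosing the right one can be automated. An illustrative sketch using the URLs exactly as listed in the docs; the helper is not part of bitsandbytes:

```python
import platform

# Common prefix of the CI preview wheels; the fixed '1.33.7.preview' version
# tag mirrors the placeholder used by the continuous-release_main workflow.
BASE = ("https://github.com/bitsandbytes-foundation/bitsandbytes/releases/"
        "download/continuous-release_main/bitsandbytes-1.33.7.preview-py3-none-")

def preview_wheel_url(system: str, machine: str) -> str:
    """Pick the CI preview wheel for (system, machine), e.g. ('Linux', 'x86_64')."""
    if system == "Windows":
        return BASE + "win_amd64.whl"
    if machine in ("aarch64", "arm64"):
        return BASE + "manylinux_2_24_aarch64.whl"
    return BASE + "manylinux_2_24_x86_64.whl"

# Example: resolve the wheel for the current machine.
print(preview_wheel_url(platform.system(), platform.machine()))
```

The resulting URL would then be passed to `pip install --force-reinstall`, as shown in the Preview Wheels section of the diff.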
