`docs/source/algorithms.mdx` (1 addition, 1 deletion)
```diff
@@ -5,7 +5,7 @@ This is an overview of the `bnb.functional` API in `bitsandbytes` that we think
 
 ## Using Int8 Matrix Multiplication
 
-For straight Int8 matrix multiplication with mixed precision decomposition you can use ``bnb.matmul(...)``. To enable mixed precision decomposition, use the threshold parameter:
+For straight Int8 matrix multiplication without mixed precision decomposition you can use ``bnb.matmul(...)``. To enable mixed precision decomposition, use the threshold parameter:
```
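The corrected sentence distinguishes plain Int8 matmul from mixed precision decomposition, where input dimensions containing outliers above a threshold are multiplied at full precision while the rest take the int8 path. As an illustration of that idea only, here is a plain-Python sketch; the function names `quantize_int8` and `matmul_mixed` are hypothetical and this is not the bitsandbytes implementation or API:

```python
# Hypothetical sketch of mixed precision decomposition (NOT the bitsandbytes
# API): dimensions with an outlier above `threshold` use full precision,
# the remaining dimensions are quantized to int8, multiplied, dequantized.

def quantize_int8(xs):
    """Symmetric int8 quantization of a vector; returns (ints, scale)."""
    scale = max(abs(x) for x in xs) / 127 or 1.0
    return [round(x / scale) for x in xs], scale

def matmul_mixed(A, B, threshold=6.0):
    """A: m x k, B: k x n, as lists of lists of floats."""
    m, k, n = len(A), len(A[0]), len(B[0])
    # Split the shared dimension k into outlier and regular indices.
    outlier = [j for j in range(k) if any(abs(A[i][j]) > threshold for i in range(m))]
    regular = [j for j in range(k) if j not in outlier]
    out = [[0.0] * n for _ in range(m)]
    for i in range(m):
        # Full-precision path for outlier dimensions.
        for j in range(n):
            out[i][j] += sum(A[i][d] * B[d][j] for d in outlier)
        # Int8 path for the rest: quantize both operands, multiply, dequantize.
        qa, sa = quantize_int8([A[i][d] for d in regular]) if regular else ([], 1.0)
        for j in range(n):
            qb, sb = quantize_int8([B[d][j] for d in regular]) if regular else ([], 1.0)
            out[i][j] += sum(x * y for x, y in zip(qa, qb)) * sa * sb
    return out

# The outlier column (100.0) bypasses quantization, so the result stays exact
# where int8 rounding would otherwise lose precision.
print(matmul_mixed([[1.0, 100.0]], [[2.0], [3.0]], threshold=6.0))
```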
`docs/source/installation.mdx` (16 additions, 15 deletions)
```diff
@@ -19,29 +19,30 @@ Welcome to the installation guide for the `bitsandbytes` library! This document
 
 ## CUDA[[cuda]]
 
-`bitsandbytes` is currently only supported on CUDA GPUs for CUDA versions **11.0 - 12.5**. However, there's an ongoing multi-backend effort under development, which is currently in alpha. If you're interested in providing feedback or testing, check out [the multi-backend section below](#multi-backend).
+`bitsandbytes` is currently only supported on CUDA GPUs for CUDA versions **11.0 - 12.6**. However, there's an ongoing multi-backend effort under development, which is currently in alpha. If you're interested in providing feedback or testing, check out [the multi-backend section below](#multi-backend).
 
 ### Supported CUDA Configurations[[cuda-pip]]
 
-The latest version of `bitsandbytes` builds on the following configurations:
+The latest version of the distributed `bitsandbytes` package is built with the following configurations:
```

````diff
 > `bitsandbytes >= 0.39.1` no longer includes Kepler binaries in pip installations. This requires [manual compilation using](#cuda-compile) the `cuda11x_nomatmul_kepler` configuration.
-
-To install from PyPI.
+
+> `bitsandbytes >= 0.45.0` no longer supports Kepler GPUs.
+>
+> Support for Maxwell GPUs is deprecated and will be removed in a future release. For the best results, a Turing generation device or newer is recommended.
 
 ```bash
 pip install bitsandbytes
````
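Since the hunk above documents pip installation on CUDA systems, it can help to confirm the CUDA tooling is visible before installing. A stdlib-only sketch (illustrative; the helper `cuda_tooling_present` is hypothetical and not part of bitsandbytes):

```python
# Hypothetical stdlib-only pre-install check (not a bitsandbytes tool):
# report whether common CUDA executables are discoverable on PATH.
import shutil

def cuda_tooling_present():
    """Map each expected CUDA-related executable to whether it is on PATH."""
    return {tool: shutil.which(tool) is not None for tool in ("nvcc", "nvidia-smi")}

print(cuda_tooling_present())
```

If both entries come back `False`, the pip wheel may still work on a machine with only the NVIDIA driver installed, but compiling from source (covered below) will not.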
```diff
@@ -79,7 +80,7 @@ For Linux and Windows systems, compiling from source allows you to customize the
 <hfoptions id="source">
 <hfoption id="Linux">
 
-To compile from source, you need CMake >= **3.22.1** and Python >= **3.8** installed. Make sure you have a compiler installed to compile C++ (`gcc`, `make`, headers, etc.).
+To compile from source, you need CMake >= **3.22.1** and Python >= **3.9** installed. Make sure you have a compiler installed to compile C++ (`gcc`, `make`, headers, etc.).
 
 For example, to install a compiler and CMake on Ubuntu:
```
```diff
@@ -115,7 +116,7 @@ pip install -e . # `-e` for "editable" install, when developing BNB (otherwise
 
 Windows systems require Visual Studio with C++ support as well as an installation of the CUDA SDK.
 
-To compile from source, you need CMake >= **3.22.1** and Python >= **3.8** installed. You should also install CUDA Toolkit by following the [CUDA Installation Guide for Windows](https://docs.nvidia.com/cuda/cuda-installation-guide-microsoft-windows/index.html) guide from NVIDIA.
+To compile from source, you need CMake >= **3.22.1** and Python >= **3.9** installed. You should also install CUDA Toolkit by following the [CUDA Installation Guide for Windows](https://docs.nvidia.com/cuda/cuda-installation-guide-microsoft-windows/index.html) guide from NVIDIA.
 
 Refer to the following table if you're using another CUDA Toolkit version.
```
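Both source-build hunks raise the minimum Python to 3.9 while keeping CMake >= 3.22.1. A small pre-flight check of those two requirements (illustrative sketch; the helpers `meets_python_requirement` and `cmake_version` are hypothetical, not part of bitsandbytes):

```python
# Hypothetical pre-flight check for the documented source-build minimums:
# Python >= 3.9 is always verifiable; CMake >= 3.22.1 only if cmake is on PATH.
import re
import shutil
import subprocess
import sys

def meets_python_requirement(minimum=(3, 9)):
    """True when the running interpreter satisfies the given minimum version."""
    return sys.version_info[:2] >= minimum

def cmake_version():
    """Installed CMake version as a (major, minor, patch) tuple, or None."""
    if shutil.which("cmake") is None:
        return None
    out = subprocess.run(["cmake", "--version"], capture_output=True, text=True).stdout
    # The first output line typically reads: "cmake version 3.22.1"
    match = re.search(r"cmake version (\d+)\.(\d+)\.(\d+)", out)
    return tuple(int(part) for part in match.groups()) if match else None

print(meets_python_requirement(), cmake_version())
```

Tuple comparison makes the version checks concise: `(3, 10) >= (3, 9)` is `True`, and a CMake result like `(3, 22, 1)` can be compared the same way against the minimum.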