
Commit bad6833

Refresh content for README.md (#1620)

* Refresh content for README.md
* Update accelerator support chart
* Add HPU to accelerator table in README
* Update README for Intel XPU

1 parent 27d916b · commit bad6833

File tree: 1 file changed (+175 −7 lines)

README.md

<p align="center"><img src="https://avatars.githubusercontent.com/u/175231607?s=200&v=4" alt=""></p>

<h1 align="center">bitsandbytes</h1>
<p align="center">
  <a href="https://github.com/bitsandbytes-foundation/bitsandbytes/blob/main/LICENSE">
    <img alt="License" src="https://img.shields.io/github/license/bitsandbytes-foundation/bitsandbytes.svg?color=blue">
  </a>
  <a href="https://pepy.tech/project/bitsandbytes">
    <img alt="Downloads" src="https://static.pepy.tech/badge/bitsandbytes/month">
  </a>
  <a href="https://github.com/bitsandbytes-foundation/bitsandbytes/actions/workflows/tests.yml">
    <img alt="Nightly Unit Tests" src="https://img.shields.io/github/actions/workflow/status/bitsandbytes-foundation/bitsandbytes/tests.yml?logo=github&label=Nightly%20Tests">
  </a>
  <a href="https://github.com/bitsandbytes-foundation/bitsandbytes/releases">
    <img alt="GitHub Release" src="https://img.shields.io/github/v/release/bitsandbytes-foundation/bitsandbytes">
  </a>
  <a href="https://pypi.org/project/bitsandbytes/">
    <img alt="PyPI - Python Version" src="https://img.shields.io/pypi/pyversions/bitsandbytes">
  </a>
</p>

`bitsandbytes` enables accessible large language models via k-bit quantization for PyTorch. We provide three main features for dramatically reducing memory consumption for inference and training:

* 8-bit optimizers use block-wise quantization to maintain 32-bit performance at a small fraction of the memory cost (see the optimizer sketch after this list).
* LLM.int8(), or 8-bit quantization, enables large language model inference with only half the required memory and without any performance degradation. This method uses vector-wise quantization to quantize most features to 8 bits, while outliers are treated separately with 16-bit matrix multiplication.
* QLoRA, or 4-bit quantization, enables large language model training with several memory-saving techniques that don't compromise performance. This method quantizes a model to 4 bits and inserts a small set of trainable low-rank adaptation (LoRA) weights to allow training.
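
As a quick illustration of the first feature, here is a minimal sketch (not part of the original README) of swapping a 32-bit PyTorch optimizer for its 8-bit counterpart; the model and hyperparameters are placeholders, and a CUDA GPU is assumed:

```python
import torch
import bitsandbytes as bnb

# Stand-in for a real model.
model = torch.nn.Linear(1024, 1024).cuda()

# Drop-in replacement for torch.optim.AdamW; optimizer state is stored
# in 8-bit via block-wise quantization.
optimizer = bnb.optim.AdamW8bit(model.parameters(), lr=1e-4)

loss = model(torch.randn(8, 1024, device="cuda")).sum()
loss.backward()
optimizer.step()
optimizer.zero_grad()
```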

The library includes quantization primitives for 8-bit & 4-bit operations through `bitsandbytes.nn.Linear8bitLt` and `bitsandbytes.nn.Linear4bit`, and 8-bit optimizers through the `bitsandbytes.optim` module.
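
A hedged sketch of these layers in isolation (dimensions and dtypes are illustrative, and a CUDA GPU is assumed; weights are quantized when a layer is moved to the GPU):

```python
import torch
import bitsandbytes as bnb

x = torch.randn(4, 1024, dtype=torch.float16, device="cuda")

# LLM.int8(): 8-bit linear layer with mixed-precision handling of outliers.
int8_layer = bnb.nn.Linear8bitLt(1024, 1024, has_fp16_weights=False, threshold=6.0).cuda()
y_int8 = int8_layer(x)

# QLoRA-style 4-bit (NF4) linear layer with bfloat16 compute.
nf4_layer = bnb.nn.Linear4bit(1024, 1024, compute_dtype=torch.bfloat16, quant_type="nf4").cuda()
y_nf4 = nf4_layer(x.to(torch.bfloat16))
```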

## System Requirements

bitsandbytes has the following minimum requirements for all platforms (a quick check is sketched after this list):

* Python 3.9+
* [PyTorch](https://pytorch.org/get-started/locally/) 2.2+
  * _Note: While we aim to provide wide backwards compatibility, we recommend using the latest version of PyTorch for the best experience._
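
A minimal sketch (not from the original README) to verify these requirements in a local environment; the CUDA check is only meaningful on NVIDIA setups:

```python
import sys
import torch

assert sys.version_info >= (3, 9), "bitsandbytes requires Python 3.9+"
print("PyTorch version:", torch.__version__)       # should be 2.2 or newer
print("CUDA available:", torch.cuda.is_available())
```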

#### Accelerator support:

<table>
  <thead>
    <tr>
      <th>Platform</th>
      <th>Accelerator</th>
      <th>Hardware Requirements</th>
      <th>Support Status</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td colspan="4">🐧 <strong>Linux</strong></td>
    </tr>
    <tr>
      <td align="right">x86-64</td>
      <td>◻️ CPU</td>
      <td></td>
      <td>〰️ Partial Support</td>
    </tr>
    <tr>
      <td></td>
      <td>🟩 NVIDIA GPU</td>
      <td>SM50+ minimum<br>SM75+ recommended</td>
      <td>✅ Full Support *</td>
    </tr>
    <tr>
      <td></td>
      <td>🟥 AMD GPU</td>
      <td>gfx90a, gfx942, gfx1100</td>
      <td>🚧 In Development</td>
    </tr>
    <tr>
      <td></td>
      <td>🟦 Intel XPU</td>
      <td>
        Data Center GPU Max Series (Ponte Vecchio)<br>
        Arc A-Series (Alchemist)<br>
        Arc B-Series (Battlemage)
      </td>
      <td>🚧 In Development</td>
    </tr>
    <!--
    <tr>
      <td></td>
      <td>🟦 Intel HPU</td>
      <td>Gaudi1, Gaudi2, Gaudi3</td>
      <td>🚧</td>
    </tr>
    -->
    <tr>
      <td align="right">aarch64</td>
      <td>◻️ CPU</td>
      <td></td>
      <td>〰️ Partial Support</td>
    </tr>
    <tr>
      <td></td>
      <td>🟩 NVIDIA GPU</td>
      <td>SM75, SM80, SM90, SM100</td>
      <td>✅ Full Support *</td>
    </tr>
    <tr>
      <td colspan="4">🪟 <strong>Windows</strong></td>
    </tr>
    <tr>
      <td align="right">x86-64</td>
      <td>◻️ CPU</td>
      <td>AVX2</td>
      <td>〰️ Partial Support</td>
    </tr>
    <tr>
      <td></td>
      <td>🟩 NVIDIA GPU</td>
      <td>SM50+ minimum<br>SM75+ recommended</td>
      <td>✅ Full Support *</td>
    </tr>
    <tr>
      <td></td>
      <td>🟦 Intel XPU</td>
      <td>
        Arc A-Series (Alchemist)<br>
        Arc B-Series (Battlemage)
      </td>
      <td>🚧 In Development</td>
    </tr>
    <tr>
      <td colspan="4">🍎 <strong>macOS</strong></td>
    </tr>
    <tr>
      <td align="right">arm64</td>
      <td>◻️ CPU / Metal</td>
      <td>Apple M1+</td>
      <td>❌ Under consideration</td>
    </tr>
  </tbody>
</table>

\* Accelerated INT8 requires SM75+.

## :book: Documentation

* [Official Documentation](https://huggingface.co/docs/bitsandbytes/main)
* 🤗 [Transformers](https://huggingface.co/docs/transformers/quantization/bitsandbytes)
* 🤗 [Diffusers](https://huggingface.co/docs/diffusers/quantization/bitsandbytes)
* 🤗 [PEFT](https://huggingface.co/docs/peft/developer_guides/quantization#quantize-a-model)
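
The Transformers integration linked above is the most common entry point. A hedged sketch of 4-bit loading through it (the model id is a placeholder; `transformers` and `accelerate` are assumed to be installed):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 quantization with bfloat16 compute, as used for QLoRA-style loading.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",  # placeholder model id
    quantization_config=bnb_config,
    device_map="auto",
)
```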

## :heart: Sponsors

The continued maintenance and development of `bitsandbytes` is made possible thanks to the generous support of our sponsors. Their contributions help ensure that we can keep improving the project and delivering valuable updates to the community.

<a href="https://hf.co" target="_blank"><img width="100" src="https://huggingface.co/datasets/huggingface/brand-assets/resolve/main/hf-logo.svg" alt="Hugging Face"></a>

## License

`bitsandbytes` is MIT licensed.

We thank Fabio Cannizzo for his work on [FastBinarySearch](https://github.com/fabiocannizzo/FastBinarySearch), which we use for CPU quantization.

## How to cite us

If you found this library useful, please consider citing our work:

### QLoRA

```bibtex
@article{dettmers2023qlora,
  title={QLoRA: Efficient Finetuning of Quantized LLMs},
  author={Dettmers, Tim and Pagnoni, Artidoro and Holtzman, Ari and Zettlemoyer, Luke},
  journal={arXiv preprint arXiv:2305.14314},
  year={2023}
}
```

### LLM.int8()

```bibtex
@article{dettmers2022llmint8,
  title={LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale},
  author={Dettmers, Tim and Lewis, Mike and Belkada, Younes and Zettlemoyer, Luke},
  journal={arXiv preprint arXiv:2208.07339},
  year={2022}
}
```

### 8-bit Optimizers

```bibtex
@article{dettmers2022optimizers,
  title={8-bit Optimizers via Block-wise Quantization},
  author={Dettmers, Tim and Lewis, Mike and Shleifer, Sam and Zettlemoyer, Luke},
  journal={9th International Conference on Learning Representations, ICLR},
  year={2022}
}
```
