
Releases: bitsandbytes-foundation/bitsandbytes

Latest `main` wheel

18 Nov 16:43
221b4b4

Pre-release

Latest main pre-release wheel

This pre-release contains the latest development wheels for all supported platforms, rebuilt automatically on every commit to the main branch.

How to install:
Pick the correct command for your platform and run it in your terminal:

Linux (ARM/aarch64)

```shell
pip install --force-reinstall https://github.com/bitsandbytes-foundation/bitsandbytes/releases/download/continuous-release_main/bitsandbytes-1.33.7.preview-py3-none-manylinux_2_24_aarch64.whl
```

Linux (x86_64)

```shell
pip install --force-reinstall https://github.com/bitsandbytes-foundation/bitsandbytes/releases/download/continuous-release_main/bitsandbytes-1.33.7.preview-py3-none-manylinux_2_24_x86_64.whl
```

Windows (x86_64)

```shell
pip install --force-reinstall https://github.com/bitsandbytes-foundation/bitsandbytes/releases/download/continuous-release_main/bitsandbytes-1.33.7.preview-py3-none-win_amd64.whl
```
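The platform-to-wheel mapping above can be sketched in a few lines of Python. This helper (`wheel_url`) is purely illustrative and not part of bitsandbytes; the filenames are the ones from this release:

```python
import sys
import platform

# Wheel filenames from this pre-release, keyed by (sys.platform, machine).
# Note: platform.machine() reports "AMD64" on Windows, so we lowercase it.
WHEELS = {
    ("linux", "aarch64"): "bitsandbytes-1.33.7.preview-py3-none-manylinux_2_24_aarch64.whl",
    ("linux", "x86_64"):  "bitsandbytes-1.33.7.preview-py3-none-manylinux_2_24_x86_64.whl",
    ("win32", "amd64"):   "bitsandbytes-1.33.7.preview-py3-none-win_amd64.whl",
}

BASE = "https://github.com/bitsandbytes-foundation/bitsandbytes/releases/download/continuous-release_main/"

def wheel_url(plat: str = sys.platform, machine: str = platform.machine()) -> str:
    """Return the download URL for the matching wheel; raises KeyError otherwise."""
    return BASE + WHEELS[(plat, machine.lower())]
```

The returned URL is what you pass to `pip install --force-reinstall` as shown above.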

Note:
These wheels are rebuilt automatically with every commit to main and become available as soon as the python-package.yml workflow finishes.

The version number in the filename is pinned to 1.33.7-preview so that the download link stays stable. This does not affect the version that actually gets installed:

```shell
> pip install https://.../bitsandbytes-1.33.7-preview-py3-none-manylinux_2_24_x86_64.whl
Collecting bitsandbytes==1.33.7rc0
...
Successfully installed bitsandbytes-0.46.0.dev0
```

0.48.2

29 Oct 21:48


What's Changed

Full Changelog: 0.48.1...0.48.2

0.48.1

02 Oct 17:47


This release fixes a regression introduced in 0.48.0 related to LLM.int8(). This issue caused poor inference results with pre-quantized checkpoints in HF transformers.

What's Changed

Full Changelog: 0.48.0...0.48.1

0.48.0: Intel GPU & Gaudi support, CUDA 13, performance improvements, and more!

30 Sep 21:48


Highlights

🎉 Intel GPU Support

We now officially support Intel GPUs on Linux and Windows! Support covers all major features (LLM.int8(), QLoRA, 8-bit optimizers), with the exception of paged optimizers.

This support includes the following hardware:

  • Intel® Arc™ B-Series Graphics
  • Intel® Arc™ A-Series Graphics
  • Intel® Data Center GPU Max Series

A compatible PyTorch version with Intel XPU support is required. The current minimum is PyTorch 2.6.0. It is recommended to use the latest stable release. See Getting Started on Intel GPU for guidance.

🎉 Intel Gaudi Support

We now officially support Intel Gaudi2 and Gaudi3 accelerators. This support includes LLM.int8() and QLoRA with the NF4 data type. Optimizers are not yet implemented.

A compatible PyTorch version with Intel Gaudi support is required. The current minimum is Gaudi v1.21 with PyTorch 2.6.0. It is recommended to use the latest stable release. See the Gaudi software installation guide for guidance.

NVIDIA CUDA

  • The 4bit dequantization kernel was improved by @Mhmd-Hisham in #1746. This change brings noticeable speed improvements for prefill, batch token generation, and training. The improvement is particularly prominent on A100, H100, and B200.
  • We've added CUDA 13.0 compatibility across Linux x86-64, Linux aarch64, and Windows x86-64 platforms.
    • Hardware support for CUDA 13.0 is limited to Turing generation and newer.
    • Support for Thor (SM110) is available in the Linux aarch64 build.
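The CUDA 13.0 support matrix above can be expressed as a small predicate. This is an illustrative sketch, not bitsandbytes code; it assumes Turing corresponds to compute capability 7.5 and Thor (SM110) to 11.0, per the bullets above:

```python
def cuda13_wheel_supports(sm_major: int, sm_minor: int, platform_tag: str) -> bool:
    """Whether a GPU's compute capability is covered by the CUDA 13.0 wheels.

    Turing (SM 7.5) and newer are covered; Thor (SM 11.0) only in the
    Linux aarch64 build. Purely illustrative helper.
    """
    if (sm_major, sm_minor) < (7, 5):
        return False  # pre-Turing GPUs are not covered by CUDA 13.0 builds
    if (sm_major, sm_minor) == (11, 0):
        # Thor is only available in the Linux aarch64 wheel
        return platform_tag == "manylinux_2_24_aarch64"
    return True
```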

🚨 Breaking Changes

  • Dropped support for PyTorch 2.2. The new minimum requirement is 2.3.0.
  • Removed Maxwell GPU support for all CUDA builds.
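The new PyTorch floor can be checked with a simple version comparison. The helper below is illustrative (it handles plain release versions like `2.6.0` or `2.6.0+cu121`, not rc/dev suffixes); real projects should prefer `packaging.version`:

```python
def meets_minimum(version: str, minimum: str = "2.3.0") -> bool:
    """Check a torch release-version string against the new 2.3.0 minimum."""
    def parse(v: str) -> tuple:
        # Strip local-version suffix like "+cu121", take up to three numeric parts,
        # and pad with zeros so "2.3" compares equal to "2.3.0".
        parts = v.split("+")[0].split(".")[:3]
        nums = tuple(int(p) for p in parts)
        return nums + (0,) * (3 - len(nums))
    return parse(version) >= parse(minimum)
```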

What's Changed

New Contributors

Full Changelog: 0.47.0...0.48.0