
Conversation

@renovate renovate bot commented May 28, 2025

This PR contains the following updates:

Package                    Change
bitsandbytes (changelog)   ^0.45.3 → ^0.49.0

Release Notes

bitsandbytes-foundation/bitsandbytes (bitsandbytes)

v0.49.1

Compare Source

What's Changed

Full Changelog: bitsandbytes-foundation/bitsandbytes@0.49.0...0.49.1

v0.49.0

Compare Source

Highlights

x86-64 CPU Improvements

CPU performance for 4bit is significantly improved on x86-64, with optimized kernel paths for CPUs that have AVX512 or AVX512BF16 support.
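
As a rough illustration (not taken from the release notes), the improved path is reached through the ordinary 4bit functional API; the shapes, blocksize, and dtype below are illustrative:

```python
import torch
import bitsandbytes.functional as F

# Round-trip a weight matrix through 4bit quantization on the CPU.
# Assumes a bitsandbytes build with the CPU backend available; the
# AVX512 / AVX512BF16 kernel selection happens internally.
w = torch.randn(4096, 4096, dtype=torch.float32)  # stays on the CPU
packed, qstate = F.quantize_4bit(w, blocksize=64, quant_type="nf4")
w_restored = F.dequantize_4bit(packed, quant_state=qstate)

print(packed.dtype)                   # uint8: two 4bit values per byte
print((w - w_restored).abs().mean())  # mean absolute quantization error
```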

AMD ROCm Experimental Wheels
  • Experimental support for AMD devices is now included in our PyPI wheels on Linux x86-64.
  • We've added additional GPU target devices as outlined in our docs.
  • Support for using the default blocksize of 64 for 4bit was added for RDNA GPUs in #1748.
macOS 14+ Wheels
  • We're now publishing wheels for macOS 14+!
  • The 4bit and 8bit quantization features are supported on MPS via slow fallback implementations. We plan to enable Metal kernels with improved performance in the future. (A usage sketch follows this list.)
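
A minimal usage sketch, assuming the standard bitsandbytes load-then-move flow also applies on Apple Silicon (layer sizes and dtypes are illustrative):

```python
import torch
import torch.nn as nn
import bitsandbytes as bnb

# Quantize a single linear layer to 4bit and run it on the MPS device.
fp_layer = nn.Linear(256, 256)
q_layer = bnb.nn.Linear4bit(256, 256, quant_type="nf4",
                            compute_dtype=torch.float16)
q_layer.load_state_dict(fp_layer.state_dict())
q_layer = q_layer.to("mps")  # weights are packed to 4bit on the move

x = torch.randn(1, 256, dtype=torch.float16, device="mps")
out = q_layer(x)  # runs the (currently slow) MPS implementation
```
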
🚨 Breaking Changes
  • Dropped support for Python 3.9.
  • Dropped compilation support for Maxwell GPUs in the CUDA backend.

What's Changed

New Contributors

Full Changelog: bitsandbytes-foundation/bitsandbytes@0.48.2...0.49.0

v0.48.2

Compare Source

What's Changed

Full Changelog: bitsandbytes-foundation/bitsandbytes@0.48.1...0.48.2

v0.48.1

Compare Source

This release fixes a regression introduced in 0.48.0 related to LLM.int8(). This issue caused poor inference results with pre-quantized checkpoints in HF transformers.

What's Changed

Full Changelog: bitsandbytes-foundation/bitsandbytes@0.48.0...0.48.1

v0.48.0: Intel GPU & Gaudi support, CUDA 13, performance improvements, and more!

Compare Source

Highlights

🎉 Intel GPU Support

We now officially support Intel GPUs on Linux and Windows! Support is included for all major features (LLM.int8(), QLoRA, 8bit optimizers) with the exception of the paged optimizer feature.

This support includes the following hardware:

  • Intel® Arc™ B-Series Graphics
  • Intel® Arc™ A-Series Graphics
  • Intel® Data Center GPU Max Series

A compatible PyTorch version with Intel XPU support is required. The current minimum is PyTorch 2.6.0. It is recommended to use the latest stable release. See Getting Started on Intel GPU for guidance.
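
As a minimal sketch (not taken from the release notes), LLM.int8() on an Intel GPU follows the same pattern as on CUDA, only with the xpu device; the layer size and outlier threshold below are illustrative:

```python
import torch
import torch.nn as nn
import bitsandbytes as bnb

# Assumes a PyTorch build with XPU support, i.e.
# torch.xpu.is_available() returns True.
fp16_layer = nn.Linear(64, 64).half()
int8_layer = bnb.nn.Linear8bitLt(64, 64, has_fp16_weights=False,
                                 threshold=6.0)
int8_layer.load_state_dict(fp16_layer.state_dict())
int8_layer = int8_layer.to("xpu")  # int8 quantization happens on the move

x = torch.randn(8, 64, dtype=torch.float16, device="xpu")
out = int8_layer(x)
```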

🎉 Intel Gaudi Support

We now officially support Intel Gaudi2 and Gaudi3 accelerators. This support includes LLM.int8() and QLoRA with the NF4 data type. At this time optimizers are not implemented.

A compatible PyTorch version with Intel Gaudi support is required. The current minimum is Gaudi v1.21 with PyTorch 2.6.0. It is recommended to use the latest stable release. See the Gaudi software installation guide for guidance.
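
A rough sketch under the constraints above (NF4 only, no optimizers), assuming the Habana PyTorch bridge is installed so that the hpu device is registered:

```python
import torch
import torch.nn as nn
import habana_frameworks.torch.core  # registers the "hpu" device
import bitsandbytes as bnb

# QLoRA-style 4bit layer on Gaudi; per these notes only the NF4 data
# type is supported, so quant_type must be "nf4". Sizes are illustrative.
fp_layer = nn.Linear(128, 128, dtype=torch.bfloat16)
nf4_layer = bnb.nn.Linear4bit(128, 128, quant_type="nf4",
                              compute_dtype=torch.bfloat16)
nf4_layer.load_state_dict(fp_layer.state_dict())
nf4_layer = nf4_layer.to("hpu")

x = torch.randn(1, 128, dtype=torch.bfloat16, device="hpu")
out = nf4_layer(x)
```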

NVIDIA CUDA
  • The 4bit dequantization kernel was improved by @Mhmd-Hisham in #1746. This change brings noticeable speed improvements for prefill, batch token generation, and training. The improvement is particularly prominent on A100, H100, and B200. (A timing sketch follows this list.)
  • We've added CUDA 13.0 compatibility across Linux x86-64, Linux aarch64, and Windows x86-64 platforms.
    • Hardware support for CUDA 13.0 is limited to Turing generation and newer.
    • Support for Thor (SM110) is available in the Linux aarch64 build.
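
No API change is needed to benefit: the improved kernel sits behind the existing dequantize_4bit call. A crude timing sketch (illustrative, not a rigorous benchmark):

```python
import torch
import bitsandbytes.functional as F

w = torch.randn(8192, 8192, dtype=torch.bfloat16, device="cuda")
packed, qstate = F.quantize_4bit(w, quant_type="nf4")

start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)
torch.cuda.synchronize()
start.record()
for _ in range(100):
    F.dequantize_4bit(packed, quant_state=qstate)  # the improved kernel path
end.record()
torch.cuda.synchronize()
print(f"dequantize_4bit: {start.elapsed_time(end) / 100:.3f} ms/iter")
```
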
🚨 Breaking Changes
  • Dropped support for PyTorch 2.2. The new minimum requirement is 2.3.0.
  • Removed Maxwell GPU support for all CUDA builds.

What's Changed

New Contributors

Full Changelog: bitsandbytes-foundation/bitsandbytes@0.47.0...0.48.0

v0.47.0

Compare Source

Highlights

  • FSDP2 compatibility for Params4bit (#1719); see the sketch after this list
  • Bugfix for 4bit quantization with large block sizes (#1721)
  • Further removal of previously deprecated code (#1669)
  • Improved CPU coverage (#1628)
  • Include NVIDIA Volta support in CUDA 12.8 and 12.9 builds (#1715)
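
Two of these highlights are visible from user code. The sketch below is illustrative (the FSDP2 wrapping itself is omitted) and uses a deliberately large blocksize of the kind covered by the #1721 fix:

```python
import torch
import bitsandbytes as bnb

# Params4bit is the 4bit parameter class that #1719 makes usable with
# FSDP2; packing to 4bit happens when the parameter moves to the GPU.
w = torch.randn(4096, 4096, dtype=torch.float16)
p = bnb.nn.Params4bit(w, quant_type="nf4", blocksize=4096)
p = p.to("cuda")

print(p.dtype, tuple(p.shape))                            # packed uint8 storage
print(p.quant_state.blocksize, p.quant_state.quant_type)  # 4096, "nf4"
```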

What's Changed

New Contributors

Full Changelog: bitsandbytes-foundation/bitsandbytes@0.46.0...0.47.0

v0.46.1

Compare Source

What's Changed

New Contributors

Full Changelog: bitsandbytes-foundation/bitsandbytes@0.46.0...0.46.1

v0.46.0: torch.compile() support; custom ops refactor; Linux aarch64 wheels

Compare Source

Highlights

  • Support for torch.compile without graph breaks for LLM.int8().
    • Compatible with PyTorch 2.4+, but PyTorch 2.6+ is recommended.
    • Experimental CPU support is included.
  • Support for torch.compile without graph breaks for 4bit (a minimal sketch follows this list).
    • Compatible with PyTorch 2.4+ for fullgraph=False.
    • Requires PyTorch 2.8 nightly for fullgraph=True.
  • We are now publishing wheels for CUDA Linux aarch64 (sbsa)!
    • Targets are Turing generation and newer: sm75, sm80, sm90, and sm100.
  • PyTorch Custom Operators refactoring and integration:
    • We have refactored most of the library code to integrate better with PyTorch via the torch.library and custom ops APIs. This helps enable our torch.compile and additional hardware compatibility efforts.
    • End-users do not need to change the way they are using bitsandbytes.
  • Unit tests have been cleaned up for increased determinism and most are now device-agnostic.
    • A new nightly CI runs unit tests for CPU (Windows x86-64, Linux x86-64/aarch64) and CUDA (Linux/Windows x86-64).
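
A minimal sketch of the 4bit case (layer sizes are illustrative; the same torch.compile call applies to LLM.int8() modules):

```python
import torch
import torch.nn as nn
import bitsandbytes as bnb

# Per the notes above: fullgraph=False works on PyTorch 2.4+, while
# fullgraph=True for 4bit needs a PyTorch 2.8 nightly.
fp_layer = nn.Linear(64, 64).half()
q_layer = bnb.nn.Linear4bit(64, 64, quant_type="nf4",
                            compute_dtype=torch.float16)
q_layer.load_state_dict(fp_layer.state_dict())
q_layer = q_layer.to("cuda")  # 4bit quantization happens on the move

compiled = torch.compile(q_layer, fullgraph=False)
x = torch.randn(8, 64, dtype=torch.float16, device="cuda")
out = compiled(x)
```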

Compatibility Changes

  • Support for Python 3.8 is dropped.
  • Support for PyTorch < 2.2.0 is dropped.
  • CUDA 12.6 and 12.8 builds are now compatible with manylinux_2_24 (previously manylinux_2_34).
  • Many APIs that were previously marked as deprecated have now been removed.
  • New deprecations:
    • bnb.autograd.get_inverse_transform_indices()
    • bnb.autograd.undo_layout()
    • bnb.functional.create_quantile_map()
    • bnb.functional.estimate_quantiles()
    • bnb.functional.get_colrow_absmax()
    • bnb.functional.get_row_absmax()
    • bnb.functional.histogram_scatter_add_2d()

What's Changed

New Contributors

Full Changelog: bitsandbytes-foundation/bitsandbytes@0.45.4...0.46.0


Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR was generated by Mend Renovate. View the repository job log.

@renovate renovate bot force-pushed the renovate/bitsandbytes-0.x branch from 2b20709 to fd56b77 on August 10, 2025 14:01
@renovate renovate bot changed the title from "fix(deps): update dependency bitsandbytes to ^0.46.0" to "fix(deps): update dependency bitsandbytes to ^0.47.0" on Aug 11, 2025
@renovate renovate bot force-pushed the renovate/bitsandbytes-0.x branch from fd56b77 to 7710697 on August 11, 2025 22:00
@renovate renovate bot force-pushed the renovate/bitsandbytes-0.x branch from 7710697 to 93001a8 on August 19, 2025 15:42
@renovate renovate bot force-pushed the renovate/bitsandbytes-0.x branch from 93001a8 to 827f088 on October 1, 2025 02:55
@renovate renovate bot changed the title from "fix(deps): update dependency bitsandbytes to ^0.47.0" to "fix(deps): update dependency bitsandbytes to ^0.48.0" on Oct 1, 2025
@renovate renovate bot force-pushed the renovate/bitsandbytes-0.x branch from 827f088 to 7b8df92 on October 2, 2025 21:40
@renovate renovate bot force-pushed the renovate/bitsandbytes-0.x branch from 7b8df92 to d8c8cd3 on October 30, 2025 01:06
@renovate renovate bot force-pushed the renovate/bitsandbytes-0.x branch from d8c8cd3 to 92411f6 on December 12, 2025 04:03
@renovate renovate bot changed the title from "fix(deps): update dependency bitsandbytes to ^0.48.0" to "fix(deps): update dependency bitsandbytes to ^0.49.0" on Dec 12, 2025
@renovate renovate bot force-pushed the renovate/bitsandbytes-0.x branch from 92411f6 to 5c24012 on January 8, 2026 18:46