fix(deps): update dependency bitsandbytes to ^0.49.0 #343
This PR contains the following updates:
`^0.45.3` → `^0.49.0`

Release Notes
bitsandbytes-foundation/bitsandbytes (bitsandbytes)
v0.49.1
What's Changed
Full Changelog: bitsandbytes-foundation/bitsandbytes@0.49.0...0.49.1
v0.49.0
Highlights
x86-64 CPU Improvements
CPU performance for 4bit is significantly improved on x86-64, with optimized kernel paths for CPUs that have AVX512 or AVX512BF16 support.
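The blockwise absmax scheme that these optimized kernel paths accelerate can be illustrated with a minimal pure-Python sketch. This is illustrative only — the function names and block size are ours, not the bitsandbytes API:

```python
# Hypothetical sketch (not the bitsandbytes kernel): blockwise absmax
# quantization to signed 4-bit codes, one float scale per block.

def quantize_4bit_blockwise(values, block_size=64):
    """Quantize floats to int4 codes in [-7, 7] with one scale per block."""
    codes, scales = [], []
    for start in range(0, len(values), block_size):
        block = values[start:start + block_size]
        absmax = max(abs(v) for v in block) or 1.0  # avoid divide-by-zero
        scales.append(absmax)
        # map [-absmax, absmax] onto the signed 4-bit levels [-7, 7]
        codes.append([round(v / absmax * 7) for v in block])
    return codes, scales

def dequantize_4bit_blockwise(codes, scales):
    """Reverse the mapping: code / 7 * per-block scale."""
    out = []
    for block, absmax in zip(codes, scales):
        out.extend(c / 7 * absmax for c in block)
    return out

weights = [0.11, -0.42, 0.95, -0.03, 0.27, -0.88]
codes, scales = quantize_4bit_blockwise(weights, block_size=3)
approx = dequantize_4bit_blockwise(codes, scales)
```

Each block stores only 4-bit codes plus one scale, which is why the inner loop is memory-bound and benefits from wide AVX512 vector lanes.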
AMD ROCm Experimental Wheels
macOS 14+ Wheels
🚨 Breaking Changes
What's Changed
New Contributors
Full Changelog: bitsandbytes-foundation/bitsandbytes@0.48.2...0.49.0
v0.48.2
What's Changed
Full Changelog: bitsandbytes-foundation/bitsandbytes@0.48.1...0.48.2
v0.48.1
This release fixes a regression introduced in 0.48.0 related to LLM.int8(). This issue caused poor inference results with pre-quantized checkpoints in HF transformers.
What's Changed
Full Changelog: bitsandbytes-foundation/bitsandbytes@0.48.0...0.48.1
v0.48.0: Intel GPU & Gaudi support, CUDA 13, performance improvements, and more!
Highlights
🎉 Intel GPU Support
We now officially support Intel GPUs on Linux and Windows! Support is included for all major features (LLM.int8(), QLoRA, 8bit optimizers) with the exception of the paged optimizer feature.
This support includes the following hardware:
A compatible PyTorch version with Intel XPU support is required. The current minimum is PyTorch 2.6.0. It is recommended to use the latest stable release. See Getting Started on Intel GPU for guidance.
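For reference, LLM.int8() (one of the supported features above) is built on a mixed-precision decomposition: outlier feature columns are kept in floating point while the remaining columns go through an int8 matmul. A simplified pure-Python sketch of that idea, assuming a fixed outlier threshold — not the library's actual implementation:

```python
# Conceptual sketch of the LLM.int8() mixed-precision decomposition,
# not the bitsandbytes implementation. Weights are kept in float here
# for brevity; the real method quantizes them too.

THRESHOLD = 6.0  # illustrative outlier threshold

def int8_matmul_with_outliers(x, w):
    """x: m x k activations, w: k x n weights (lists of lists)."""
    k = len(x[0])
    # columns with any large-magnitude activation stay in floating point
    outlier_cols = {j for j in range(k)
                    if any(abs(row[j]) >= THRESHOLD for row in x)}
    regular_cols = [j for j in range(k) if j not in outlier_cols]

    def quant_row(row):
        # row-wise absmax quantization of the regular columns to int8
        absmax = max((abs(row[j]) for j in regular_cols), default=1.0) or 1.0
        return [round(row[j] / absmax * 127) for j in regular_cols], absmax

    n = len(w[0])
    out = [[0.0] * n for _ in x]
    for i, row in enumerate(x):
        q, scale = quant_row(row)
        for jn in range(n):
            # int8 part, dequantized after accumulation
            acc = sum(q[idx] * w[j][jn] for idx, j in enumerate(regular_cols))
            acc = acc / 127 * scale
            # floating-point part for the outlier columns
            acc += sum(row[j] * w[j][jn] for j in outlier_cols)
            out[i][jn] = acc
    return out
```

The decomposition keeps accuracy because the handful of outlier dimensions, which would otherwise dominate the quantization range, bypass the int8 path entirely.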
🎉 Intel Gaudi Support
We now officially support Intel Gaudi2 and Gaudi3 accelerators. This support includes LLM.int8() and QLoRA with the NF4 data type. At this time optimizers are not implemented.
A compatible PyTorch version with Intel Gaudi support is required. The current minimum is Gaudi v1.21 with PyTorch 2.6.0. It is recommended to use the latest stable release. See the Gaudi software installation guide for guidance.
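The NF4 data type mentioned above maps each absmax-normalized weight to the nearest of 16 fixed quantile levels. A rough sketch using the level values published in the QLoRA paper — the helper names are ours, not a bitsandbytes API:

```python
# Illustrative sketch of the NF4 (NormalFloat4) data type; level
# values as published in the QLoRA paper, rounded to 10 decimals.

NF4_LEVELS = [
    -1.0, -0.6961928010, -0.5250730515, -0.3949174881,
    -0.2844413817, -0.1847734302, -0.0910500363, 0.0,
    0.0795802996, 0.1609302014, 0.2461123019, 0.3379152417,
    0.4407098293, 0.5626170039, 0.7229568362, 1.0,
]

def nf4_quantize(block):
    """Normalize a block by its absmax, then pick the nearest NF4 code."""
    absmax = max(abs(v) for v in block) or 1.0
    codes = [min(range(16), key=lambda i: abs(NF4_LEVELS[i] - v / absmax))
             for v in block]
    return codes, absmax

def nf4_dequantize(codes, absmax):
    """Look up each code's level and rescale by the block absmax."""
    return [NF4_LEVELS[c] * absmax for c in codes]
```

The uneven spacing of the levels is the point: they are chosen as quantiles of a normal distribution, which matches how trained weights are typically distributed.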
NVIDIA CUDA
🚨 Breaking Changes
What's Changed
`nn.Parameter` by @matthewdouglas in #1720

New Contributors
Full Changelog: bitsandbytes-foundation/bitsandbytes@0.47.0...0.48.0
v0.47.0Compare Source
Highlights
What's Changed
New Contributors
Full Changelog: bitsandbytes-foundation/bitsandbytes@0.46.0...0.47.0
v0.46.1
What's Changed
New Contributors
Full Changelog: bitsandbytes-foundation/bitsandbytes@0.46.0...0.46.1
v0.46.0: torch.compile() support; custom ops refactor; Linux aarch64 wheels
Highlights
- `torch.compile` without graph breaks for LLM.int8().
- `torch.compile` without graph breaks for 4bit, supporting both `fullgraph=False` and `fullgraph=True`.
- Refactor to the `torch.library` and custom ops APIs. This helps enable our `torch.compile` and additional hardware compatibility efforts in bitsandbytes.

Compatibility Changes
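The custom-ops direction means each quantized operation is dispatched to a backend-specific kernel rather than hard-coded to CUDA. A hypothetical, framework-free sketch of that dispatch pattern — none of these names are real bitsandbytes or `torch.library` APIs:

```python
# Hypothetical sketch of per-backend kernel dispatch, analogous in
# spirit to a custom-ops registry; all names here are invented.

_KERNELS = {}  # (op_name, backend) -> implementation

def register_kernel(op_name, backend):
    """Decorator that files an implementation under (op, backend)."""
    def deco(fn):
        _KERNELS[(op_name, backend)] = fn
        return fn
    return deco

def dispatch(op_name, backend, *args):
    """Look up and call the kernel registered for this backend."""
    try:
        impl = _KERNELS[(op_name, backend)]
    except KeyError:
        raise NotImplementedError(f"{op_name} has no kernel for {backend}")
    return impl(*args)

@register_kernel("dequantize_4bit", "cpu")
def _dequant_cpu(codes, scale):
    # toy dequantization: signed 4-bit code -> float
    return [c / 7 * scale for c in codes]

result = dispatch("dequantize_4bit", "cpu", [7, -7, 0], 0.5)
```

Registering kernels per backend is what lets a compiler trace through the op as a single opaque call instead of breaking the graph on library internals.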
Wheels are now built against `manylinux_2_24` (previously `manylinux_2_34`).

What's Changed
New Contributors
Full Changelog: bitsandbytes-foundation/bitsandbytes@0.45.4...0.46.0
Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
This PR was generated by Mend Renovate. View the repository job log.