Releases: GreenBitAI/bitorch-engine
Release v0.2.6
Added
- Installation instructions for binary releases
- Warning if a non-customized PyTorch version is detected, which cannot calculate gradients for non-floating-point, non-complex tensor types
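The limitation behind this warning can be reproduced directly; a minimal sketch (plain PyTorch, not bitorch-engine code) showing what a stock build refuses to do:

```python
import torch

# Stock (non-customized) PyTorch builds only allow autograd on
# floating-point and complex tensors, so an integer weight tensor
# cannot carry gradients — hence the warning.
try:
    w = torch.zeros(4, dtype=torch.int8, requires_grad=True)
    print("this build supports gradients on integer tensors")
except RuntimeError as err:
    print(f"stock PyTorch refuses: {err}")
```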
Changed
- Updated development scripts for binary releases
- Adjusted rpaths in .so files (based on PyTorch's own solution)
- Docker base image changed to manywheel builder image
Release v0.2.5
[0.2.5] - 2024/05/24
Added
- Development scripts for preparing binary releases
Changed
- Updated build instructions to clarify torchvision installation
- Adapted setup.py logic for preparing binary releases
Fixed
- Broken build process by setting setuptools version
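The release notes do not state which setuptools version was pinned; a hypothetical pyproject.toml fragment showing how such a pin is typically expressed (the version bound here is an assumption for illustration, not the actual fix):

```toml
[build-system]
# Hypothetical pin — the changelog does not record the exact bound used.
requires = ["setuptools>=64,<70", "wheel"]
build-backend = "setuptools.build_meta"
```

Pinning the build backend this way keeps `pip install` builds reproducible even when a new setuptools release changes behavior.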
Release v0.2.4
Added
- Tuned the hyperparameters of the DiodeMix optimizer for SFT (supervised fine-tuning).
- Added SFT support for classical GPTQ-style models.
- Implemented the qzeros update in the fine-tuning process.
Updated
- Extended the pack_fp_weight function.
- Enhanced the performance of the MPQLinearCUDA layer.
Fixed
- Fixed various errors in the DiodeMix update function.
Release v0.2.3
Updated
- Enhanced the performance of the MBWQ linear layer for processing long sequences, addressing previous inefficiencies.
Release v0.2.2
Updated
- Build instructions (added a section for CUTLASS)
- Checksums for custom torch builds (within docker)
Fixed
- An error in pack_fp_weight
Release v0.2.1
Fixed
- Broken links in README.md and index.rst
Release v0.2.0
Added
- Quantized layers with different acceleration options
  - QConv (binary, quantized) - CPU, Cutlass
  - QLinear (binary, quantized, mixed bit-width) - CUDA, Cutlass, MPS
  - QEmbedding (binary)
- Optimizer(s) for quantized layers
  - Hybrid optimizer diode_beta based on Diode v1 (binary) and AdamW (quantized) for memory-efficient training
  - Initial support for GaLore projection
- Examples
  - MNIST training script with and without PyTorch Lightning
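As a rough sketch of what the plain-PyTorch variant of such a training script looks like (synthetic tensors stand in for the MNIST data loader, and nn.Linear stands in for the engine's quantized layers — this is not the actual example code):

```python
import torch
from torch import nn

# Minimal training-step skeleton: model, optimizer, loss, one update.
# In the real example, quantized layers and the hybrid optimizer from
# bitorch-engine would replace nn.Linear and SGD.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(32, 1, 28, 28)      # stand-in for an MNIST batch
labels = torch.randint(0, 10, (32,))     # stand-in for MNIST labels

opt.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
opt.step()
print(f"loss after one step: {loss.item():.4f}")
```

The Lightning variant wraps the same model and loss in a `LightningModule`, moving the optimizer and loop bookkeeping into the framework.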