CHANGELOG.md

Features:
- Added precompiled CUDA 11.8 binaries to support H100 GPUs without compilation #571
- CUDA SETUP no longer looks for libcuda and libcudart and instead relies on PyTorch's CUDA libraries. To manually override this behavior, see: how_to_use_nonpytorch_cuda.md. Thank you @rapsealk
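The manual override mentioned above is typically done through environment variables before importing the library. A minimal sketch, assuming the `BNB_CUDA_VERSION` variable described in how_to_use_nonpytorch_cuda.md and an illustrative CUDA install path:

```shell
# Point bitsandbytes at a manually installed CUDA 11.8 instead of the
# libraries shipped with PyTorch (the install path below is illustrative).
export BNB_CUDA_VERSION=118
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/cuda-11.8/lib64"

# The override takes effect on the next import of bitsandbytes.
python -c "import bitsandbytes"
```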

Bug fixes:
- Fixed a bug where the default type of absmax was undefined, which led to errors if the default type differed from torch.float32. #553
- Fixed a missing scipy dependency in requirements.txt. #544
- Fixed a bug where a view operation could cause an error in 8-bit layers.
- Fixed a bug where CPU bitsandbytes would fail during import. #593 Thank you @bilelomrani

Documentation:
- Improved documentation for GPUs that do not support 8-bit matmul. #529