CHANGELOG.md (+7 −1)
@@ -49,7 +49,7 @@ Features:
 Bug fixes:
 - Fixed a bug where weight decay was incorrectly applied to 32-bit Adam. #13
 - Fixed an unsafe use of eval. #8
-- Fixed a bug where the StableEmbedding layer 32-bit optimizer override would not work without registering the whole model first (`bnb.optim.GlobalOptimManager.get_instance().register_parameters(model.parameters())`). #13 #15
+- Fixed a bug where the StableEmbedding layer 32-bit optimizer override would not work without registering the whole model first (`bnb.optim.GlobalOptimManager.get_instance().register_parameters(model.parameters())`). #13 #15

 Docs:
 - Added instructions how to solve "\_\_fatbinwrap_" errors.
@@ -149,3 +149,9 @@ Bug fixes:

 Bug fixes:
 - Fixed a bug in the CUDA Setup which led to an incomprehensible error if no GPU was detected.
+
+### 0.35.4
+
+Bug fixes:
+- Fixed a bug where the CUDA Setup failed when the CUDA runtime was found but the CUDA library was not.
+- Fixed a bug where not finding the CUDA runtime led to an incomprehensible error.
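The StableEmbedding bullet in the first hunk quotes the `bnb.optim.GlobalOptimManager` registration call. A minimal sketch of the register-then-override flow it refers to (the toy model and layer sizes here are hypothetical; the calls follow the pattern documented in the project README) might look like:

```python
import torch
import bitsandbytes as bnb

# Hypothetical toy model containing a StableEmbedding layer.
model = torch.nn.Sequential(
    bnb.nn.StableEmbedding(1024, 128),
    torch.nn.Linear(128, 2),
)

# Register parameters with the global optimizer manager while the model is
# still on the CPU (the call quoted in the changelog bullet), then override
# the embedding weight to use 32-bit optimizer state.
mng = bnb.optim.GlobalOptimManager.get_instance()
mng.register_parameters(model.parameters())
mng.override_config(model[0].weight, "optim_bits", 32)

model = model.cuda()
adam = bnb.optim.Adam8bit(model.parameters())
```

The fix above means the per-layer override no longer silently depends on the whole-model registration step having run first.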
README.md (+3 −3)
@@ -1,6 +1,6 @@
 # bitsandbytes

-The bitsandbytes is a lightweight wrapper around CUDA custom functions, in particular 8-bit optimizers, matrix multiplication (LLM.int8()), and quantization functions.
+bitsandbytes is a lightweight wrapper around CUDA custom functions, in particular 8-bit optimizers, matrix multiplication (LLM.int8()), and quantization functions.


@@ -48,7 +48,7 @@ out = linear(x.to(torch.float16))

 Requirements: anaconda, cudatoolkit, pytorch

-Hardware requirements:
+Hardware requirements:
 - LLM.int8(): NVIDIA Turing (RTX 20xx; T4) or Ampere GPU (RTX 30xx; A4-A100); (a GPU from 2018 or newer).
 - 8-bit optimizers and quantization: NVIDIA Maxwell GPU or newer (>=GTX 9XX).
@@ -87,7 +87,7 @@ Note that by default all parameter tensors with less than 4096 elements are kept
 ```
 # parameter tensors with less than 16384 values are optimized in 32-bit
 # it is recommended to use multiples of 4096
-adam = bnb.optim.Adam8bit(model.parameters(), min_8bit_size=16384)
+adam = bnb.optim.Adam8bit(model.parameters(), min_8bit_size=16384)
 ```
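For context, the `out = linear(x.to(torch.float16))` line in the second README hunk's header comes from the LLM.int8() usage example. A minimal sketch of that pattern (layer sizes are hypothetical; `Linear8bitLt` is the library's int8 linear layer) might look like:

```python
import torch
import bitsandbytes as bnb

# Hypothetical sizes. has_fp16_weights=False quantizes the weights to int8
# when the layer is moved to the GPU; threshold=6.0 routes outlier features
# through fp16 as in the LLM.int8() scheme.
linear = bnb.nn.Linear8bitLt(64, 64, has_fp16_weights=False, threshold=6.0).cuda()

x = torch.randn(8, 64, device="cuda")
out = linear(x.to(torch.float16))  # fp16 activations, int8 matmul inside
```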