Commit fea4d11
[HF] fix quantization config (#3039)
* Try fixing issue 3026, which is caused by the quantization_config argument introduced in commit 758c5ed.
The argument is a dict, but for a GPTQ-quantized model this conflicts with the Hugging Face interface, which expects a QuantizationConfigMixin.
The current solution removes the quantization_config argument in HFLM._create_model() in lm_eval/models/huggingface.py.
Further modification is required to restore the functionality provided by the previous commit.
* wrap quantization_config in AutoQuantizationConfig
* handle quantization config not dict
* wrap quantization_config in AutoQuantizationConfig if dict
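The final approach in the bullets above (wrap quantization_config only when it arrives as a dict, pass config objects through untouched) is a simple isinstance-dispatch pattern. A minimal, self-contained sketch of that pattern, using a hypothetical QuantConfig dataclass as a stand-in for transformers' QuantizationConfigMixin (the actual fix lives inside HFLM._create_model() and uses the real transformers config classes):

```python
from dataclasses import dataclass

# Hypothetical stand-in for transformers' QuantizationConfigMixin,
# used here only to keep the sketch self-contained.
@dataclass
class QuantConfig:
    quant_method: str
    bits: int = 4

def coerce_quantization_config(quantization_config):
    """Wrap a plain-dict quantization config into a config object.

    Mirrors the shape of the fix: a dict (e.g. parsed from CLI model args)
    is converted into the object type the model loader expects, while an
    already-constructed config object is returned unchanged.
    """
    if isinstance(quantization_config, dict):
        return QuantConfig(**quantization_config)
    return quantization_config

# A dict coming from parsed model arguments gets wrapped...
cfg = coerce_quantization_config({"quant_method": "gptq", "bits": 4})
print(type(cfg).__name__)  # QuantConfig

# ...while an existing config object passes through unchanged.
existing = QuantConfig(quant_method="awq")
print(coerce_quantization_config(existing) is existing)  # True
```

This "coerce at the boundary" design lets callers supply either form without the loader having to special-case both types everywhere downstream.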
---------
Co-authored-by: shanhx2000 <[email protected]>

1 parent 6b3f3f7 · commit fea4d11
1 file changed: lm_eval/models/huggingface.py (+16 lines, −5 lines)

[Diff table omitted: only the line-number columns survived extraction; the changed code itself is not recoverable.]