Commit c5e8330

[Bugfix] Fix null modules_to_not_convert in FBGEMM Fp8 quantization (#6665)
1 parent e0c1575 commit c5e8330

1 file changed: +1 addition, −1 deletion


vllm/model_executor/layers/quantization/fbgemm_fp8.py

Lines changed: 1 addition & 1 deletion

@@ -31,7 +31,7 @@ class FBGEMMFp8Config(QuantizationConfig):
     """Config class for FBGEMM Fp8."""

     def __init__(self, ignore_list: List[str], input_scale_ub: float):
-        self.ignore_list = ignore_list
+        self.ignore_list = ignore_list if ignore_list else []
         self.input_scale_ub = input_scale_ub


 # For GPUs that lack FP8 hardware support, we can leverage the Marlin
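The intent of the one-line change can be sketched as follows. This is a minimal, hypothetical reduction of the config class, not the full vLLM implementation: it assumes a checkpoint whose `modules_to_not_convert` field is `null`, which arrives in Python as `None`, and shows why falling back to an empty list avoids a later crash.

```python
from typing import List, Optional


class FBGEMMFp8Config:
    """Minimal sketch of the fixed constructor (not the full vLLM class)."""

    def __init__(self, ignore_list: Optional[List[str]], input_scale_ub: float):
        # The fix: a checkpoint's "modules_to_not_convert" may be null (None),
        # so store an empty list instead of None.
        self.ignore_list = ignore_list if ignore_list else []
        self.input_scale_ub = input_scale_ub

    def is_ignored(self, layer_name: str) -> bool:
        # Without the fallback above, this membership test would raise
        # "TypeError: argument of type 'NoneType' is not iterable"
        # whenever ignore_list was None.
        return layer_name in self.ignore_list


cfg = FBGEMMFp8Config(ignore_list=None, input_scale_ub=1200.0)
print(cfg.is_ignored("lm_head"))  # False, rather than a TypeError
```

The `is_ignored` helper and the `input_scale_ub` value here are illustrative only; the point is that `ignore_list if ignore_list else []` normalizes both `None` and an empty list to `[]`, so downstream membership checks are always safe.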
