Commit 7326e38

Clean up code, more

Signed-off-by: Jingyu Xin <[email protected]>
1 parent: 913e535

File tree: 1 file changed (+1, -1)

modelopt/torch/quantization/conversion.py (1 addition, 1 deletion)
@@ -176,7 +176,7 @@ def replace_quant_module(model: nn.Module, version=None, registry=QuantModuleReg
     """Recursively replace the module with quantized module."""
     from .plugins.custom import (
         register_custom_model_plugins_on_the_fly,
-        register_custom_post_conversion_plugins,  ## not needed for lora megatron
+        register_custom_post_conversion_plugins,
     )

     assert not is_quantized(model), "Model must not be quantized!"
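For context, the `replace_quant_module` function touched here recursively swaps modules for quantized variants and first asserts the model is not already quantized. A minimal, self-contained sketch of that pattern is below; the `Module`, `Linear`, `QuantLinear`, and `REGISTRY` names are illustrative stand-ins, not the actual modelopt or PyTorch API.

```python
# Hypothetical sketch of recursive module replacement, assuming a simple
# module tree and a registry mapping original types to quantized types.
# Not the real modelopt implementation.

class Module:
    """Tiny stand-in for a module-tree node (analogous to nn.Module)."""

    def __init__(self):
        self._children = {}

    def add(self, name, child):
        self._children[name] = child

    def named_children(self):
        return self._children.items()

    def modules(self):
        # Yield self and all descendants, depth-first.
        yield self
        for child in self._children.values():
            yield from child.modules()


class Linear(Module):
    """Stand-in for an original (unquantized) layer type."""


class QuantLinear(Linear):
    """Stand-in for its quantized replacement."""

    is_quantized = True


# Illustrative registry: original type -> quantized type.
REGISTRY = {Linear: QuantLinear}


def is_quantized(model):
    """True if any module in the tree is already a quantized variant."""
    return any(getattr(m, "is_quantized", False) for m in model.modules())


def replace_quant_module(model, registry=REGISTRY):
    """Recursively replace registered modules with quantized modules."""
    assert not is_quantized(model), "Model must not be quantized!"
    _replace(model, registry)


def _replace(module, registry):
    # Copy the items so we can mutate the child dict while iterating.
    for name, child in list(module.named_children()):
        quant_cls = registry.get(type(child))
        if quant_cls is not None:
            module._children[name] = quant_cls()
        else:
            _replace(child, registry)
```

The real function additionally imports plugin registrars (the `register_custom_*` callables seen in the diff) so that framework-specific layer types are added to the registry before the recursive walk.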

0 commit comments