
Commit deeffa6

fix: Remove llmcompressor oneshot import deprecation warning

Signed-off-by: Brandon Groth <[email protected]>
Parent: a9eafcd

2 files changed: +4 −2 lines

examples/FP8_QUANT/README.md: 2 additions, 1 deletion

````diff
@@ -92,7 +92,8 @@ This end-to-end example utilizes the common set of interfaces provided by `fms_m
 
 ```python
 from llmcompressor.modifiers.quantization import QuantizationModifier
-from llmcompressor.transformers import SparseAutoModelForCausalLM, oneshot
+from llmcompressor.transformers import SparseAutoModelForCausalLM
+from llmcompressor import oneshot
 
 model = SparseAutoModelForCausalLM.from_pretrained(model_args.model_name_or_path, torch_dtype=model_args.torch_dtype)
 tokenizer = AutoTokenizer.from_pretrained(model_args.model_name_or_path)
````

fms_mo/run_quant.py: 2 additions, 1 deletion

```diff
@@ -198,8 +198,9 @@ def run_fp8(model_args, data_args, opt_args, fp8_args):
     """
 
     # Third Party
+    from llmcompressor import oneshot
     from llmcompressor.modifiers.quantization import QuantizationModifier
-    from llmcompressor.transformers import SparseAutoModelForCausalLM, oneshot
+    from llmcompressor.transformers import SparseAutoModelForCausalLM
 
     logger = set_log_level(opt_args.log_level, "fms_mo.run_fp8")
 
```

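For context, newer llmcompressor releases export `oneshot` from the package top level and emit a deprecation warning when it is imported from `llmcompressor.transformers`, which is what this commit silences. Below is a minimal sketch of the migrated imports in an end-to-end FP8 quantization call; the model name, quantization scheme, and ignore list are illustrative assumptions, not taken from this commit.

```python
# Minimal sketch of the post-commit import layout, assuming an llmcompressor
# version where `oneshot` is exported from the package top level.
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor.transformers import SparseAutoModelForCausalLM
from transformers import AutoTokenizer

# Hypothetical small model for illustration; the repo code uses
# model_args.model_name_or_path instead.
MODEL_ID = "facebook/opt-125m"

model = SparseAutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# FP8 dynamic quantization computes activation scales at runtime, so no
# calibration dataset is needed; lm_head is commonly left unquantized.
recipe = QuantizationModifier(targets="Linear", scheme="FP8_DYNAMIC", ignore=["lm_head"])

# `oneshot` applies the recipe to the model in place.
oneshot(model=model, recipe=recipe)

model.save_pretrained("opt-125m-FP8-dynamic")
tokenizer.save_pretrained("opt-125m-FP8-dynamic")
```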