Commit 4bde0fe

Kyle Sayers authored
[Examples] [Bugfix] Fix debug message (#1529)
## Purpose ##

* Fix faulty debug message

## Changes ##

* Remove unnecessary commas in print string

Signed-off-by: Kyle Sayers <[email protected]>
1 parent dc2e9b0 commit 4bde0fe

File tree

1 file changed (+3, -3 lines)


examples/quantization_2of4_sparse_w4a16/llama7b_sparse_w4a16.py

Lines changed: 3 additions & 3 deletions
```diff
@@ -90,8 +90,8 @@
 tokenizer.save_pretrained(f"{output_dir}/quantization_stage")
 
 logger.info(
-    "llmcompressor does not currently support running ",
+    "llmcompressor does not currently support running "
     "compressed models in the marlin24 format. "
-    "The model produced from this example can be ",
-    "run on vLLM with dtype=torch.float16.",
+    "The model produced from this example can be "
+    "run on vLLM with dtype=torch.float16."
 )
```
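For context, the bug comes from how Python treats string literals in a call: adjacent literals with no comma between them are concatenated into a single string, while a trailing comma turns each fragment into a separate positional argument. Most logging APIs interpret extra positional arguments as formatting parameters rather than message text, so the original call could emit only the first fragment. The sketch below is illustrative only; `fake_info` is a hypothetical stand-in, not the logger used by llmcompressor.

```python
def fake_info(*args):
    """Print the arguments exactly as the call site passes them."""
    print(len(args), "argument(s):", args)


# Before the fix: trailing commas split the text into three arguments.
# A real logger would typically treat only the first as the message.
fake_info(
    "llmcompressor does not currently support running ",
    "compressed models in the marlin24 format. "
    "The model produced from this example can be ",
    "run on vLLM with dtype=torch.float16.",
)

# After the fix: adjacent literals are concatenated into one full message,
# passed as a single argument.
fake_info(
    "llmcompressor does not currently support running "
    "compressed models in the marlin24 format. "
    "The model produced from this example can be "
    "run on vLLM with dtype=torch.float16."
)
```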
