1 parent a777241 commit 48dbcb5
src/llmcompressor/modifiers/awq/base.py
@@ -194,7 +194,7 @@ def validate_model_after(model: "AWQModifier") -> "AWQModifier":
        warnings.warn(
            "A strategy including activation quantization was detected. "
            "AWQ was originally intended for weight-only quantization. "
-           "Lower-precision activations are an experimental feautre, and "
+           "Lower-precision activations are an experimental feature, and "
            "overall performance may be poor. If it is, consider using "
            "`W4A16` or `W4A16_ASYM` quantization schemes instead."
        )
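
For context, this warning fires only when the configured scheme quantizes activations; the weight-only schemes named in the message avoid it. Below is a minimal sketch of a weight-only AWQ configuration, assuming the AWQModifier constructor accepts scheme, targets, and ignore keyword arguments and that oneshot is importable from the top-level llmcompressor package (paths and argument names may differ by version).

from llmcompressor import oneshot  # assumed entry point; location may vary by release
from llmcompressor.modifiers.awq import AWQModifier

# Weight-only 4-bit scheme: activations stay in higher precision, so the
# experimental-feature warning in validate_model_after is not emitted.
recipe = AWQModifier(
    targets=["Linear"],
    scheme="W4A16",      # or "W4A16_ASYM", as the warning suggests
    ignore=["lm_head"],
)

# A scheme that also quantizes activations (e.g. a W4A8-style scheme) would
# trigger the warning above. Model and dataset arguments are placeholders.
oneshot(model="path/to/model", recipe=recipe)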