Commit 6d600d4

Fix CI/CD failures (#2359)
SUMMARY:

autoround:
- The test was previously using int weights with float activations, which fails silently with torch 2.9 but raises an error with torch 2.10.
- Fix the args to use a valid scheme where the weights are also float.

quant_reload:
- Remove an old, unused argument.
- Set tie_word_embeddings to False to match what the test is targeting. This is likely surfacing now because of recent compressed-tensors changes.

cc @kylesayrs
1 parent 302c2c7

File tree

2 files changed (+2 −2 lines)

tests/llmcompressor/transformers/autoround/test_autoround_oneshot.py

Lines changed: 1 addition & 1 deletion

@@ -56,7 +56,7 @@
     config_groups={
         "group_0": QuantizationScheme(
             targets=["Linear"],
-            weights=QuantizationArgs(num_bits=8, strategy="channel"),
+            weights=QuantizationArgs(num_bits=8, type="float", strategy="channel"),
             input_activations=QuantizationArgs(
                 num_bits=8, type="float", strategy="token", dynamic=True
             ),
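For context, the corrected scheme pairs float weights with the test's existing float, dynamic per-token activations, so both sides of the matmul use float quantization types. A minimal standalone sketch of the fixed config, assuming `QuantizationScheme` and `QuantizationArgs` are importable from compressed-tensors as in the test file:

```python
from compressed_tensors.quantization import QuantizationArgs, QuantizationScheme

# Both weights and input activations use type="float" (FP8-style); mixing
# int weights with float activations is the invalid combination the commit
# describes as failing silently on torch 2.9 and erroring on torch 2.10.
scheme = QuantizationScheme(
    targets=["Linear"],
    weights=QuantizationArgs(num_bits=8, type="float", strategy="channel"),
    input_activations=QuantizationArgs(
        num_bits=8, type="float", strategy="token", dynamic=True
    ),
)
```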

tests/llmcompressor/transformers/compression/test_compress_tensor_utils.py

Lines changed: 1 addition & 1 deletion

@@ -175,7 +175,7 @@ def test_quant_model_reload(format, dtype, tmp_path):
         concatenate_data=concatenate_data,
         splits=splits,
         precision=dtype,
-        clear_sparse_session=False,
+        tie_word_embeddings=False,
     )

     # Fetch the oneshot model
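The `tie_word_embeddings` flag here is the standard transformers config option: when False, the LM head gets its own weight tensor instead of sharing the input embedding matrix, which matters for a save/reload round-trip test since both tensors must be serialized independently. An illustrative config-only sketch (GPT2Config chosen arbitrarily, not taken from this test):

```python
from transformers import GPT2Config

# With tie_word_embeddings=False, a model built from this config keeps
# separate input-embedding and output-head weights, so a reload test
# exercises both tensors rather than one shared parameter.
config = GPT2Config(tie_word_embeddings=False)
```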
