
Commit 3e18dd9

aladerran and dsikka authored
Add an awq_sym lm-eval test (#1702)
SUMMARY: In response to #1688, add an awq_sym lm-eval test config.

TEST PLAN:
```
export CADENCE=weekly
export TEST_DATA_FILE=tests/lmeval/configs/w4a16_awq_sym.yaml
python -m pytest tests/lmeval/test_lmeval.py::TestLMEval::test_lm_eval -v
```

MISC: As Llama is a gated model, I used a local model path instead of meta-llama/Meta-Llama-3-8B-Instruct during local testing. I'm not sure whether this affects the test flow or accuracy, so it would be great if you could help verify it; that's also why I've marked this as WIP.

---------

Signed-off-by: aladerran <[email protected]>
Co-authored-by: Dipika Sikka <[email protected]>
1 parent ad20cf1 commit 3e18dd9

File tree

2 files changed: +23 -0 lines changed

tests/e2e/vLLM/recipes/AWQ/recipe_w4a16_awq_sym.yaml
Lines changed: 13 additions & 0 deletions
@@ -0,0 +1,13 @@
```yaml
quant_stage:
  quant_modifiers:
    AWQModifier:
      ignore: ["lm_head"]
      config_groups:
        group_0:
          weights:
            num_bits: 4
            type: "int"
            symmetric: true
            strategy: "group"
            group_size: 128
          targets: ["Linear"]
```
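This commit only adds the configs; for context, a recipe like this is consumed by llm-compressor's `oneshot` entrypoint. Below is a minimal sketch, assuming the `llmcompressor.oneshot` API; the calibration split size, sequence length, and output path are illustrative values, not taken from this commit or the test harness.

```python
# Sketch of applying the AWQ recipe above with llm-compressor's oneshot API.
# Assumptions: 256 calibration samples, 2048 max sequence length, and the
# output directory name are illustrative, not values used by the test suite.
from llmcompressor import oneshot

oneshot(
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    dataset="HuggingFaceH4/ultrachat_200k",
    splits={"calibration": "train_sft[:256]"},
    recipe="tests/e2e/vLLM/recipes/AWQ/recipe_w4a16_awq_sym.yaml",
    max_seq_length=2048,
    num_calibration_samples=256,
    output_dir="Meta-Llama-3-8B-Instruct-W4A16-AWQ-sym",
)
```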
tests/lmeval/configs/w4a16_awq_sym.yaml
Lines changed: 10 additions & 0 deletions
@@ -0,0 +1,10 @@
```yaml
cadence: "weekly"
model: meta-llama/Meta-Llama-3-8B-Instruct
scheme: W4A16
recipe: tests/e2e/vLLM/recipes/AWQ/recipe_w4a16_awq_sym.yaml
dataset_id: HuggingFaceH4/ultrachat_200k
dataset_split: train_sft
lmeval:
  metrics:
    exact_match,flexible-extract: 0.70
    exact_match,strict-match: 0.70
```
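The two `exact_match` filters (flexible-extract, strict-match) are the ones lm-evaluation-harness reports for gsm8k, so that is presumably the task scored here; the task name and the local model path below are assumptions, not taken from this commit. A rough stand-alone sketch of the threshold check the config encodes:

```python
# Hypothetical stand-alone equivalent of the lmeval metrics check above.
# Assumptions: the task is gsm8k (inferred from the metric names) and the
# compressed model was saved to the local path below.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=./Meta-Llama-3-8B-Instruct-W4A16-AWQ-sym",
    tasks=["gsm8k"],
    batch_size=8,
)

metrics = results["results"]["gsm8k"]
assert metrics["exact_match,flexible-extract"] >= 0.70
assert metrics["exact_match,strict-match"] >= 0.70
```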
