Commit eec0ab6
Addback: ConstantPruningModifier for finetuning cases (#1272)
We mistakenly also removed `ConstantPruningModifier` from the fine-tuning examples as part of #1267. This PR adds it back for the fine-tuning examples. Signed-off-by: Rahul Tuli <rahul@neuralmagic.com>
1 parent b7d34e0 commit eec0ab6

File tree

2 files changed: +28 -0 lines changed


examples/quantization_2of4_sparse_w4a16/2of4_w4a16_group-128_recipe.yaml

Lines changed: 14 additions & 0 deletions

@@ -6,6 +6,20 @@ sparsity_stage:
       mask_structure: "2:4"
       targets: ["Linear"]
       ignore: ["re:.*lm_head"]
+finetuning_stage:
+  run_type: train
+  finetuning_modifiers:
+    ConstantPruningModifier:
+      targets: [
+        're:.*q_proj.weight',
+        're:.*k_proj.weight',
+        're:.*v_proj.weight',
+        're:.*o_proj.weight',
+        're:.*gate_proj.weight',
+        're:.*up_proj.weight',
+        're:.*down_proj.weight',
+      ]
+      start: 0
 quantization_stage:
   run_type: oneshot
   quantization_modifiers:

examples/quantization_2of4_sparse_w4a16/2of4_w4a16_recipe.yaml

Lines changed: 14 additions & 0 deletions

@@ -6,6 +6,20 @@ sparsity_stage:
       mask_structure: "2:4"
       targets: ["Linear"]
       ignore: ["re:.*lm_head"]
+finetuning_stage:
+  run_type: train
+  finetuning_modifiers:
+    ConstantPruningModifier:
+      targets: [
+        're:.*q_proj.weight',
+        're:.*k_proj.weight',
+        're:.*v_proj.weight',
+        're:.*o_proj.weight',
+        're:.*gate_proj.weight',
+        're:.*up_proj.weight',
+        're:.*down_proj.weight',
+      ]
+      start: 0
 quantization_stage:
 run_type: oneshot
 quantization_modifiers:
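The restored stage gives the recipe a three-step pipeline: a one-shot 2:4 pruning stage, a fine-tuning stage in which `ConstantPruningModifier` holds the pruned (zeroed) weights at zero so training cannot destroy the 2:4 sparsity pattern, and a one-shot quantization stage. A minimal sketch of that ordering, mirroring the recipe YAML as a plain Python dict (stage and modifier names are taken from the diff above; the sparsity and quantization stage bodies are elided, and no llm-compressor APIs are called):

```python
# Sketch only: the recipe structure this PR restores, as a Python dict.
# Stage/modifier names come from the diff; other stage contents are elided.
recipe = {
    "sparsity_stage": {"run_type": "oneshot"},  # prunes Linear layers to 2:4
    "finetuning_stage": {
        "run_type": "train",
        "finetuning_modifiers": {
            "ConstantPruningModifier": {
                # Regex targets: every attention/MLP projection weight
                "targets": [
                    "re:.*q_proj.weight",
                    "re:.*k_proj.weight",
                    "re:.*v_proj.weight",
                    "re:.*o_proj.weight",
                    "re:.*gate_proj.weight",
                    "re:.*up_proj.weight",
                    "re:.*down_proj.weight",
                ],
                "start": 0,  # keep the mask constant from the first step
            }
        },
    },
    "quantization_stage": {"run_type": "oneshot"},  # W4A16 quantization
}

# Stages execute in declaration order: prune -> finetune -> quantize.
stage_order = list(recipe)
print(stage_order)  # → ['sparsity_stage', 'finetuning_stage', 'quantization_stage']
```

Without the fine-tuning stage, training would update the pruned weights freely and the 2:4 mask produced by the sparsity stage would be lost before quantization.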

0 commit comments