
Commit c924d9a

Delete hidden_size and num_attention_heads modification in a config (#198)
* fix
* keep head_dim

1 parent: e6deadd

File tree

1 file changed (+0, -11 lines)


onnx_diagnostic/tasks/text_generation.py

Lines changed: 0 additions & 11 deletions

@@ -43,18 +43,7 @@ def reduce_model_config(config: Any) -> Dict[str, Any]:
             if hasattr(config, "num_key_value_heads")
             else config.num_attention_heads
         ),
-        hidden_size=(
-            min(config.hidden_size, 4096 // 4)
-            if config.hidden_size % 64 == 0
-            else config.hidden_size
-        ),
     )
-    if config is None or hasattr(config, "intermediate_size"):
-        kwargs["intermediate_size"] = (
-            min(config.intermediate_size, 24576 // 4)
-            if config.intermediate_size % 4 == 0
-            else config.intermediate_size
-        )
     update_config(config, kwargs)
     return kwargs
 
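For context, a minimal sketch of the logic this commit removes: the deleted lines capped hidden_size at 4096 // 4 = 1024 when it was a multiple of 64, and intermediate_size at 24576 // 4 = 6144 when it was a multiple of 4. Capping hidden_size on its own changes head_dim (conventionally hidden_size // num_attention_heads in transformers configs), which is presumably what the "keep head_dim" note in the commit message refers to. Everything outside the removed lines below (the helper name, the dummy config) is hypothetical for illustration; the rest of the real function is not visible in this hunk.

```python
from typing import Any, Dict


def removed_reduction(config: Any) -> Dict[str, Any]:
    """Hypothetical helper reproducing only the logic deleted in c924d9a."""
    kwargs: Dict[str, Any] = {}
    # Deleted: cap hidden_size at 1024 for sizes divisible by 64.
    kwargs["hidden_size"] = (
        min(config.hidden_size, 4096 // 4)
        if config.hidden_size % 64 == 0
        else config.hidden_size
    )
    # Deleted: cap intermediate_size at 6144 for sizes divisible by 4.
    # (The `config is None` guard mirrors the removed diff verbatim.)
    if config is None or hasattr(config, "intermediate_size"):
        kwargs["intermediate_size"] = (
            min(config.intermediate_size, 24576 // 4)
            if config.intermediate_size % 4 == 0
            else config.intermediate_size
        )
    return kwargs


class _DummyConfig:
    # Hypothetical Llama-like sizes, chosen so both caps fire.
    hidden_size = 4096         # multiple of 64 -> capped to 1024
    intermediate_size = 11008  # multiple of 4  -> capped to 6144
    num_attention_heads = 32


print(removed_reduction(_DummyConfig()))
# {'hidden_size': 1024, 'intermediate_size': 6144}
# head_dim would have shrunk from 4096 // 32 = 128 to 1024 // 32 = 32,
# since num_attention_heads was left untouched.
```

After this commit, both fields pass through unreduced; the surviving context lines show the function still deriving num_key_value_heads from num_attention_heads when the former is absent.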
