
Commit ebdd12d

export static llama with masked softmax
Differential Revision: D81248691
Pull Request resolved: #13832
1 parent: 9de4c16

1 file changed: 2 additions, 2 deletions

examples/models/llama/static_attention.py

Lines changed: 2 additions & 2 deletions
@@ -1044,5 +1044,5 @@ def transfer_weight(linear, conv2d):
 
 @register_attention("static_mha")
 class StaticAttentionMHA(StaticAttention):
-    def __init__(self, config: ModelArgs, layer_id: int, rope: Rope):
-        super().__init__(config, layer_id, rope, split_mha=False)
+    def __init__(self, config: ModelArgs, layer_id: int, rope: Rope, **kwargs: Any):
+        super().__init__(config, layer_id, rope, split_mha=False, **kwargs)
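
For context, here is a minimal sketch of why forwarding **kwargs matters: any keyword accepted by the StaticAttention base constructor, for example a masked-softmax toggle as the commit title suggests, now also reaches the "static_mha" variant. The simplified class bodies and the use_masked_softmax keyword below are assumptions for illustration, not the actual ExecuTorch signatures.

from typing import Any


# Simplified stand-ins for the real classes; the constructor bodies and the
# `use_masked_softmax` keyword are assumptions made only for this sketch.
class StaticAttention:
    def __init__(self, config, layer_id, rope, split_mha=True, use_masked_softmax=False):
        self.split_mha = split_mha
        self.use_masked_softmax = use_masked_softmax


class StaticAttentionMHA(StaticAttention):
    # Forwarding **kwargs lets any new base-class option (such as a
    # masked-softmax flag) reach this variant without editing it again.
    def __init__(self, config, layer_id, rope, **kwargs: Any):
        super().__init__(config, layer_id, rope, split_mha=False, **kwargs)


# Before this change, the extra keyword would have raised a TypeError here.
attn = StaticAttentionMHA(config=None, layer_id=0, rope=None, use_masked_softmax=True)
assert attn.split_mha is False and attn.use_masked_softmax is True

Keeping the subclass signature open-ended this way avoids touching StaticAttentionMHA each time a new constructor option is added to the base class.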
