Commit a9ed330

fix outplace_group_moe_fake

1 parent a1edc02

1 file changed (+3, -3 lines)

lightllm/common/fused_moe/grouped_fused_moe.py

Lines changed: 3 additions & 3 deletions
@@ -1000,13 +1000,13 @@ def outplace_fused_experts_impl_fake(
     hidden_states: torch.Tensor,
     w1: torch.Tensor,
     w2: torch.Tensor,
-    # optional bias for w1 and w2
-    w1_bias: Optional[torch.Tensor],
-    w2_bias: Optional[torch.Tensor],
     topk_weights: torch.Tensor,
     topk_ids: torch.Tensor,
     use_fp8_w8a8: bool = False,
     use_int8_w8a16: bool = False,
+    # optional bias for w1 and w2
+    w1_bias: Optional[torch.Tensor] = None,
+    w2_bias: Optional[torch.Tensor] = None,
     w1_scale: Optional[torch.Tensor] = None,
     w2_scale: Optional[torch.Tensor] = None,
     a1_scale: Optional[torch.Tensor] = None,
