Implements complete support for Megrez-MoE (Mixture of Experts) models:
- Add LLM_ARCH_MEGREZ_MOE architecture enum and mappings
- Implement build_mergez_moe_ffn() with sigmoid+bias gating (see the gating sketch after this list)
- Add llm_build_megrez_moe class for full model graph construction
- Support 31-layer architecture (layer 0: dense FFN, layers 1-30: MoE)
- Implement expert sharing pattern with 64 experts, 6 used per token, 4 shared
- Load all model hyperparameters and 372 tensors correctly
- Configure NEOX RoPE type for proper positional encoding
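The sketch below is a minimal, standalone illustration of the routing described in the bullets above: layer 0 takes a dense FFN while layers 1-30 route each token through 6 of 64 experts using sigmoid gating plus a per-expert bias. The helper names (`is_moe_layer`, `select_experts`) are hypothetical and this is not the actual ggml graph-building code; it also assumes the bias is used only for expert selection and that mixing weights come from the renormalized sigmoid scores, as in similar sigmoid-gated routers.

```cpp
// Standalone sketch of sigmoid+bias expert gating (hypothetical helper names;
// not the actual llama.cpp graph-building code).
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <numeric>
#include <vector>

// Layer 0 uses a dense FFN; layers 1-30 take the MoE path.
static bool is_moe_layer(int il) { return il > 0; }

struct gating_result {
    std::vector<int>   expert_ids;     // selected expert indices
    std::vector<float> expert_weights; // normalized mixing weights
};

// logits: raw router outputs, one per expert (64 for this model)
// bias:   learned per-expert selection bias
// n_used: experts activated per token (6 for this model)
static gating_result select_experts(const std::vector<float> & logits,
                                    const std::vector<float> & bias,
                                    int n_used) {
    const int n_expert = (int) logits.size();

    // sigmoid gate, then add the per-expert bias used for selection
    std::vector<float> probs(n_expert), scores(n_expert);
    for (int e = 0; e < n_expert; ++e) {
        probs[e]  = 1.0f / (1.0f + std::exp(-logits[e]));
        scores[e] = probs[e] + bias[e];
    }

    // take the n_used experts with the highest biased scores
    std::vector<int> order(n_expert);
    std::iota(order.begin(), order.end(), 0);
    std::partial_sort(order.begin(), order.begin() + n_used, order.end(),
                      [&](int a, int b) { return scores[a] > scores[b]; });

    // mixing weights: renormalized sigmoid scores of the selected experts
    // (assumption: the bias influences selection only)
    gating_result res;
    float sum = 0.0f;
    for (int i = 0; i < n_used; ++i) {
        res.expert_ids.push_back(order[i]);
        sum += probs[order[i]];
    }
    for (int id : res.expert_ids) {
        res.expert_weights.push_back(probs[id] / sum);
    }
    return res;
}

int main() {
    // toy example: 8 experts, 2 used per token
    std::vector<float> logits = {0.1f, -0.3f, 0.8f, 0.0f, -1.2f, 0.5f, 0.2f, -0.7f};
    std::vector<float> bias(8, 0.0f);
    const gating_result g = select_experts(logits, bias, 2);
    for (size_t i = 0; i < g.expert_ids.size(); ++i) {
        std::printf("expert %d weight %.3f\n", g.expert_ids[i], g.expert_weights[i]);
    }
    return 0;
}
```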
Tested with the Megrez2-3x7B-A3B_Q4_K_M.gguf model.
All 39 llama.cpp tests pass.
Output verified to match the infinigence/llama.cpp reference implementation.
Note: use the --no-warmup flag to avoid a warmup memory-allocation issue (example invocation below).
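For reference, an invocation along these lines was used for testing; the prompt, token count, and binary path are illustrative:

```sh
./llama-cli -m Megrez2-3x7B-A3B_Q4_K_M.gguf --no-warmup -p "Hello" -n 64
```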