Prerequisites
Feature Description
What about adding support for [LFM2-24B-A2B](https://huggingface.co/LiquidAI/LFM2-24B-A2B) and [LFM2-8B-A1B](https://huggingface.co/LiquidAI/LFM2-8B-A1B)?
Motivation
Support for fast, lightweight mixture-of-experts (MoE) models.
Possible Implementation
The `lfm2moe` implementation in ggml-org/llama.cpp#16464 could serve as a starting point.