Feature Request: LFM2-24B-A2B and LFM2-8B-A1B (lfm2moe architecture) support #1317

@hardWorker254

Description

Prerequisites

  • I am running the latest code. Mention the version if possible as well.
  • I carefully followed the README.md.
  • I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
  • I reviewed the Discussions, and have a new and useful enhancement to share.

Feature Description

What about adding support for LFM2-24B-A2B (https://huggingface.co/LiquidAI/LFM2-24B-A2B) and LFM2-8B-A1B (https://huggingface.co/LiquidAI/LFM2-8B-A1B)?

Motivation

Support for fast, lightweight MoE models.

Possible Implementation

The lfm2moe architecture is covered upstream in ggml-org/llama.cpp#16464.

Metadata

    Labels

    enhancement (New feature or request)
