🤗 Optimum ExecuTorch

Optimize and deploy Hugging Face models with ExecuTorch

Documentation | ExecuTorch | Hugging Face

📋 Overview

Optimum ExecuTorch enables efficient deployment of transformer models using Meta's ExecuTorch framework. It provides:

  • πŸ”„ Easy conversion of Hugging Face models to ExecuTorch format
  • ⚑ Optimized inference with hardware-specific optimizations
  • 🀝 Seamless integration with Hugging Face Transformers
  • πŸ“± Efficient deployment on various devices

⚡ Quick Installation

1. Create a virtual environment

Install conda on your machine, then create a virtual environment to manage the dependencies.

conda create -n optimum-executorch python=3.11
conda activate optimum-executorch

2. Install optimum-executorch from source

git clone https://github.com/huggingface/optimum-executorch.git
cd optimum-executorch
pip install '.[dev]'
  • πŸ”œ Install from pypi coming soon...

3. Install dependencies in dev mode

To access every available optimization and experiment with the newest features, run:

python install_dev.py

This script installs executorch, torch, torchao, transformers, and related packages from nightly builds or from source, giving you access to the latest models and optimizations.

To leave an existing ExecuTorch installation untouched, run install_dev.py with --skip_override_torch so it is not overwritten.
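For example, to update the other dependencies while leaving an existing installation in place:

python install_dev.py --skip_override_torch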

🎯 Quick Start

There are two ways to use Optimum ExecuTorch:

Option 1: Export and Load in One Python API

from optimum.executorch import ExecuTorchModelForCausalLM
from transformers import AutoTokenizer

# Load and export the model on-the-fly
model_id = "HuggingFaceTB/SmolLM2-135M-Instruct"
model = ExecuTorchModelForCausalLM.from_pretrained(
    model_id,
    recipe="xnnpack",
    attn_implementation="custom_sdpa",  # Use custom SDPA implementation for better performance
    use_custom_kv_cache=True,  # Use custom KV cache for better performance
    **{"qlinear": "8da4w", "qembedding": "8w"},  # Quantize linear and embedding layers
)

# Generate text right away
tokenizer = AutoTokenizer.from_pretrained(model_id)
generated_text = model.text_generation(
    tokenizer=tokenizer,
    prompt="Once upon a time",
    max_seq_len=128,
)
print(generated_text)

Note: If an ExecuTorch model is already cached on the Hugging Face Hub, the API will automatically skip the export step and load the cached .pte file. To test this, replace the model_id in the example above with "executorch-community/SmolLM2-135M", where the .pte file is pre-cached. Additionally, the .pte file can be directly associated with the eager model, as demonstrated in this example.
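For instance, the following loads that pre-exported model directly from the Hub with no export step (the repository name is taken from the note above):

from optimum.executorch import ExecuTorchModelForCausalLM

# The Hub repo already contains a .pte file, so from_pretrained loads it as-is
# instead of re-exporting the model.
model = ExecuTorchModelForCausalLM.from_pretrained("executorch-community/SmolLM2-135M")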

Option 2: Export and Load Separately

Step 1: Export your model

Use the CLI tool to convert your model to ExecuTorch format:

optimum-cli export executorch \
    --model "HuggingFaceTB/SmolLM2-135M-Instruct" \
    --task "text-generation" \
    --recipe "xnnpack" \
    --use_custom_sdpa \
    --use_custom_kv_cache \
    --qlinear 8da4w \
    --qembedding 8w \
    --output_dir="hf_smollm2"

Explore the various export options by running the command: optimum-cli export executorch --help

Step 2: Validate the Exported Model on Host Using the Python API

Use the exported model for text generation:

from optimum.executorch import ExecuTorchModelForCausalLM
from transformers import AutoTokenizer

# Load the exported model
model = ExecuTorchModelForCausalLM.from_pretrained("./hf_smollm2")

# Initialize tokenizer and generate text
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM2-135M-Instruct")
generated_text = model.text_generation(
    tokenizer=tokenizer,
    prompt="Once upon a time",
    max_seq_len=128
)
print(generated_text)

Step 3: Run inference on-device

To perform on-device inference, you can use ExecuTorch’s sample runner or the example iOS/Android applications. For detailed instructions, refer to the ExecuTorch Sample Runner guide.
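As a rough sketch, one way to do this is with the llama example runner built from the ExecuTorch repository; the binary name, flag names, and artifact paths below are assumptions, so consult the Sample Runner guide for the exact invocation:

# Hypothetical invocation of the ExecuTorch llama runner (binary name, flags, and
# file paths are assumptions; see the Sample Runner guide).
./llama_main \
    --model_path hf_smollm2/model.pte \
    --tokenizer_path hf_smollm2/tokenizer.json \
    --prompt "Once upon a time"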

βš™οΈ Optimizations

Custom Operators

Transformer models exported with Optimum ExecuTorch utilize the following custom operators, both of which map directly to the export flags shown after this list:

  • A custom SDPA for CPU based on Flash Attention, boosting performance by around 3x compared to the default SDPA.
  • A custom KV cache that uses a custom op for efficient in-place cache updates on CPU, boosting performance by 2.5x compared to the default static KV cache.

Backends Delegation

Currently, Optimum-ExecuTorch supports the XNNPACK Backend for CPU and CoreML Backend for GPU on Apple devices.

For a comprehensive overview of all backends supported by ExecuTorch, please refer to the ExecuTorch Backend Overview.
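The backend is selected through the recipe option shown earlier. Only "xnnpack" appears in the examples above, so the sketch below assumes the CoreML backend is exposed under a recipe named "coreml"; treat that name as an assumption and check the CLI help for the exact value:

# Assumed recipe name for the CoreML backend; verify with optimum-cli export executorch --help.
optimum-cli export executorch \
    --model "HuggingFaceTB/SmolLM2-135M-Instruct" \
    --task "text-generation" \
    --recipe "coreml" \
    --output_dir="smollm2_coreml"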

Quantization

We currently support Post-Training Quantization (PTQ) for linear layers and embeddings using the TorchAO quantization library.
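The quantization schemes are selected with the same qlinear and qembedding options used in the Quick Start; a minimal sketch using the Python API:

from optimum.executorch import ExecuTorchModelForCausalLM

# Post-training quantization applied at export time:
# "8da4w" = 8-bit dynamic activations / 4-bit weights for linear layers,
# "8w"    = 8-bit weights for embeddings.
model = ExecuTorchModelForCausalLM.from_pretrained(
    "HuggingFaceTB/SmolLM2-135M-Instruct",
    recipe="xnnpack",
    **{"qlinear": "8da4w", "qembedding": "8w"},
)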

🤗 Supported Models

The following models have been successfully tested with ExecuTorch. For details on the specific optimizations supported and how to use them for each model, please consult their respective test files in the tests/models/ directory.

Text Models

We currently support a wide range of popular transformer models, including encoder-only, decoder-only, and encoder-decoder architectures, as well as models specialized for tasks such as text generation, translation, summarization, and mask prediction. These models reflect current trends and popularity across the Hugging Face community:

LLMs (Large Language Models)

Decoder-only
  • Codegen: Salesforce's codegen-350M-mono and its variants
  • Gemma: Gemma-2b and its variants
  • Gemma2: Gemma-2-2b and its variants
  • Gemma3: Gemma-3-1b and its variants (πŸ’‘[NEW] 270M, 1B)
  • Glm: glm-edge-1.5b and its variants
  • Gpt2: gpt-sw3-126m and its variants
  • GptJ: gpt-j-405M and its variants
  • GptNeoX: EleutherAI's pythia-14m and its variants
  • GptNeoXJapanese: gpt-neox-japanese-2.7b and its variants
  • Granite: granite-3.3-2b-instruct and its variants
  • Llama: Llama-3.2-1B and its variants
  • Mistral: Ministral-3b-instruct and its variants
  • Qwen2: Qwen2.5-0.5B and its variants
  • Qwen3: Qwen3-0.6B, Qwen3-Embedding-0.6B and other variants
  • Olmo: OLMo-1B-hf and its variants
  • Phi: JSL-MedPhi2-2.7B and its variants
  • Phi4: Phi-4-mini-instruct and its variants
  • Smollm: πŸ€— SmolLM2-135M and its variants
  • Smollm3: πŸ€— SmolLM3-3B and its variants
  • Starcoder2: starcoder2-3b and its variants
Encoder-decoder (Seq2Seq)
  • T5: Google's T5 and its variants

NLU (Natural Language Understanding)

  • Albert: albert-base-v2 and its variants
  • Bert: Google's bert-base-uncased and its variants
  • Distilbert: distilbert-base-uncased and its variants
  • Eurobert: EuroBERT-210m and its variants
  • Roberta: FacebookAI's xlm-roberta-base and its variants

Vision Models

  • Cvt: Convolutional Vision Transformer
  • Deit: Distilled Data-efficient Image Transformer (base-sized)
  • Dit: Document Image Transformer (base-sized)
  • EfficientNet: EfficientNet (b0-b7 sized)
  • Focalnet: FocalNet (tiny-sized)
  • Mobilevit: Apple's MobileViT xx-small
  • Mobilevit2: Apple's MobileViTv2
  • Pvt: Pyramid Vision Transformer (tiny-sized)
  • Swin: Swin Transformer (tiny-sized)

Audio Models

ASR (Automatic Speech Recognition)

  • Whisper: OpenAI's Whisper and its variants

Speech text-to-text (Automatic Speech Recognition)

  • πŸ’‘[NEW] Voxtral: Mistral's newest speech/text-to-text model

📌 Note: This list is continuously expanding; more models will be added as support grows.

🚀 Benchmarks on Mobile Devices

The following benchmarks show example decode performance (tokens/sec) across Android and iOS devices for popular edge LLMs.

| Model | Samsung Galaxy S22 5G (Android 13) | Samsung Galaxy S22 Ultra 5G (Android 14) | iPhone 15 (iOS 18.0) | iPhone 15 Plus (iOS 17.4.1) | iPhone 15 Pro (iOS 18.4.1) |
| --- | --- | --- | --- | --- | --- |
| SmolLM2-135M | 202.28 | 202.61 | 7.47 | 6.43 | 29.64 |
| Qwen3-0.6B | 59.16 | 56.49 | 7.05 | 5.48 | 17.99 |
| google/gemma-3-1b-it | 25.07 | 23.89 | 21.51 | 21.33 | 17.8 |
| Llama-3.2-1B | 44.91 | 37.39 | 11.04 | 8.93 | 25.78 |
| OLMo-1B | 44.98 | 38.22 | 14.49 | 8.72 | 20.24 |

📊 View Live Benchmarks: Explore comprehensive performance data, compare models across devices, and track performance trends over time on the ExecuTorch Benchmark Dashboard.

Performance measured with custom SDPA, KV-cache optimization, and 8da4w quantization. Results may vary based on device conditions and prompt characteristics.

πŸ› οΈ Advanced Usage

Check out our ExecuTorch GitHub repo directly for:

  • More backends and performance optimization options
  • Deployment guides for Android, iOS, and embedded devices
  • Additional examples and benchmarks

🤝 Contributing

We love your input! We want to make contributing to Optimum ExecuTorch as easy and transparent as possible.

πŸ“ License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.

📫 Get in Touch
