🍋 Lemonade: Local LLMs with GPU and NPU acceleration


Lemonade Banner

Lemonade helps users discover and run local AI apps by serving optimized LLMs right from their own GPUs and NPUs.

Apps like n8n, VS Code Copilot, Morphik, and many more use Lemonade to seamlessly run LLMs on any PC.

Getting Started

  1. Install: Windows · Ubuntu · Source
  2. Get Models: Browse and download with the Model Manager
  3. Chat: Try models with the built-in chat interface
  4. Connect: Use Lemonade with your favorite apps:

Open WebUI  n8n  Gaia  Infinity Arcade  Continue  GitHub Copilot  OpenHands  Dify  Deep Tutor  Iterate.ai

Want your app featured here? Discord · GitHub Issue · Email

Using the CLI

To run and chat with Gemma 3:

lemonade-server run Gemma-3-4b-it-GGUF

To install models ahead of time, use the pull command:

lemonade-server pull Gemma-3-4b-it-GGUF

To list all available models, use the list command:

lemonade-server list

Tip: When running GGUF models, you can pass `--llamacpp vulkan` or `--llamacpp rocm` to select a backend.
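If you want to pre-download several models as part of a setup script, the pull command can be driven from Python with subprocess. A minimal sketch, assuming `lemonade-server` is on your PATH (the model name below is just the example from this README):

```python
import subprocess

def pull_command(model: str) -> list[str]:
    """Build the argv for `lemonade-server pull <model>`."""
    return ["lemonade-server", "pull", model]

def pull_models(models: list[str]) -> None:
    """Download each model ahead of time, stopping on the first failure."""
    for model in models:
        subprocess.run(pull_command(model), check=True)

if __name__ == "__main__":
    try:
        pull_models(["Gemma-3-4b-it-GGUF"])
    except (FileNotFoundError, subprocess.CalledProcessError):
        print("lemonade-server is not installed or the pull failed")
```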

Model Library

Model Manager

Lemonade supports GGUF, FLM, and ONNX models across CPU, GPU, and NPU (see supported configurations).

Use lemonade-server pull or the built-in Model Manager to download models. You can also import custom GGUF/ONNX models from Hugging Face.

Browse all built-in models →


Supported Configurations

Lemonade supports the following configurations and makes it easy to switch between them at runtime. More information is available here.

| Hardware | Engine: OGA | Engine: llamacpp | Engine: FLM |
|----------|-------------|------------------|-------------|
| 🧠 CPU | All platforms | All platforms | - |
| 🎮 GPU | - | Vulkan: All platforms<br>ROCm: Selected AMD platforms*<br>Metal: Apple Silicon | - |
| 🤖 NPU | AMD Ryzen™ AI 300 series | - | Ryzen™ AI 300 series |
\* Supported AMD ROCm platforms:

| Architecture | Platform Support | GPU Models |
|--------------|------------------|------------|
| gfx1151 (STX Halo) | Windows, Ubuntu | Ryzen AI MAX+ Pro 395 |
| gfx120X (RDNA4) | Windows, Ubuntu | Radeon AI PRO R9700, RX 9070 XT/GRE/9070, RX 9060 XT |
| gfx110X (RDNA3) | Windows, Ubuntu | Radeon PRO W7900/W7800/W7700/V710, RX 7900 XTX/XT/GRE, RX 7800 XT, RX 7700 XT |

Project Roadmap

| Under Development | Under Consideration | Recently Completed |
|-------------------|---------------------|--------------------|
| Image Generation | vLLM support | General speech-to-text support (whisper.cpp) |
| Check back in 2026 :) | Handheld devices: Ryzen AI Z2 Extreme APUs | ROCm support for Ryzen AI 360-375 (Strix) APUs |
| | Text to speech | Lemonade desktop app |

Integrate Lemonade Server with Your Application

You can use any OpenAI-compatible client library by configuring it to use http://localhost:8000/api/v1 as the base URL. The table below lists official and popular OpenAI clients in different languages.

Feel free to pick and choose your preferred language.

| Python | C++ | Java | C# | Node.js | Go | Ruby | Rust | PHP |
|--------|-----|------|----|---------|----|------|------|-----|
| openai-python | openai-cpp | openai-java | openai-dotnet | openai-node | go-openai | ruby-openai | async-openai | openai-php |
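If you'd rather not depend on a client library at all, the Python standard library is enough. The sketch below assumes Lemonade exposes the usual OpenAI-style `GET /api/v1/models` endpoint and accepts a placeholder API key, as OpenAI-compatible servers generally do:

```python
import json
import urllib.request

BASE_URL = "http://localhost:8000/api/v1"

def models_request(base_url: str = BASE_URL) -> urllib.request.Request:
    """Build a GET request for the OpenAI-style model listing endpoint."""
    return urllib.request.Request(
        f"{base_url}/models",
        headers={"Authorization": "Bearer lemonade"},  # required but unused
    )

def list_models(base_url: str = BASE_URL) -> list[str]:
    """Return the ids of all models the server reports."""
    with urllib.request.urlopen(models_request(base_url)) as resp:
        payload = json.load(resp)
    return [model["id"] for model in payload["data"]]

if __name__ == "__main__":
    try:
        print("\n".join(list_models()))
    except OSError:
        print("Lemonade Server is not running on localhost:8000")
```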

Python Client Example

```python
from openai import OpenAI

# Initialize the client to use Lemonade Server
client = OpenAI(
    base_url="http://localhost:8000/api/v1",
    api_key="lemonade"  # required but unused
)

# Create a chat completion
completion = client.chat.completions.create(
    model="Llama-3.2-1B-Instruct-Hybrid",  # or any other available model
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)

# Print the response
print(completion.choices[0].message.content)
```

For more detailed integration instructions, see the Integration Guide.
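Streaming works the way it does with any OpenAI-compatible server: request `"stream": true` and read server-sent-event chunks off the response. The sketch below uses only the standard library and assumes the standard OpenAI chunk format (`data: {...}` lines ending with `data: [DONE]`):

```python
import json
import urllib.request

BASE_URL = "http://localhost:8000/api/v1"

def delta_from_sse(line: str) -> str:
    """Extract the text delta from one `data: {...}` server-sent-event line.

    Returns "" for blank keep-alive lines, the final `data: [DONE]`
    marker, and chunks whose delta carries no content.
    """
    line = line.strip()
    if not line.startswith("data:"):
        return ""
    payload = line[len("data:"):].strip()
    if payload == "[DONE]":
        return ""
    delta = json.loads(payload)["choices"][0].get("delta", {})
    return delta.get("content") or ""

def stream_chat(model: str, prompt: str, base_url: str = BASE_URL):
    """Yield text deltas from a streaming chat completion."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": True,
    }).encode()
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer lemonade",  # required but unused
        },
    )
    with urllib.request.urlopen(req) as resp:
        for raw in resp:
            yield delta_from_sse(raw.decode("utf-8"))

if __name__ == "__main__":
    try:
        for piece in stream_chat("Llama-3.2-1B-Instruct-Hybrid", "Hi!"):
            print(piece, end="", flush=True)
    except OSError:
        print("Lemonade Server is not running on localhost:8000")
```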

Beyond an LLM Server

Lemonade also ships a Python SDK, which includes the following components:

  • 🐍 Lemonade Python API: High-level Python API to directly integrate Lemonade LLMs into Python applications.
  • 🖥️ Lemonade CLI: The lemonade CLI lets you mix-and-match LLMs (ONNX, GGUF, SafeTensors) with prompting templates, accuracy testing, performance benchmarking, and memory profiling to characterize your models on your hardware.
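The headline number in any performance benchmark is throughput, which reduces to generated tokens over wall-clock time. The helper below is purely illustrative (it is not part of the `lemonade` CLI) and shows the arithmetic behind a tok/s figure:

```python
def tokens_per_second(n_tokens: int, elapsed_s: float) -> float:
    """Throughput as generated tokens per wall-clock second."""
    if elapsed_s <= 0:
        raise ValueError("elapsed time must be positive")
    return n_tokens / elapsed_s

# Example: 256 tokens generated in 4 seconds is 64.0 tok/s
```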

FAQ

To read our frequently asked questions, see our FAQ Guide.

Contributing

We are actively seeking collaborators from across the industry. If you would like to contribute to this project, please check out our contribution guide.

New contributors can find beginner-friendly issues tagged with "Good First Issue" to get started.


Maintainers

This project is sponsored by AMD. It is maintained by @danielholanda @jeremyfowers @ramkrishna @vgodsoe in equal measure. You can reach us by filing an issue, emailing lemonade@amd.com, or joining our Discord (https://discord.gg/5xXzkMu8Zk).

License and Attribution

This project is licensed under the Apache License 2.0. See the LICENSE file for details.
