# llama-server Development Documentation

This document provides an in-depth technical overview of `llama-server`, intended for maintainers and contributors.

If you are an end user consuming `llama-server` as a product, please refer to the main [README](./README.md) instead.

## Backend

### Overview

The server supports two primary operating modes:

- **Inference mode**: The default mode for performing inference with a single loaded GGUF model.
- **Router mode**: Enables management of multiple inference server instances behind a single API endpoint. Requests are automatically routed to the appropriate backend instance based on the requested model.

The core architecture consists of the following components:

- `server_context`: Holds the primary inference state, including the main `llama_context` and all active slots.
- `server_slot`: An abstraction over a single “sequence” in llama.cpp; each slot manages one inference request, allowing multiple requests to be processed in parallel.
- `server_routes`: Middleware layer between `server_context` and the HTTP interface; handles JSON parsing/formatting and request routing logic.
- `server_http_context`: Implements the HTTP server using `cpp-httplib`.
- `server_queue`: Thread-safe queue used by HTTP workers to submit new tasks to `server_context`.
- `server_response`: Thread-safe queue used by `server_context` to return results to HTTP workers.
- `server_response_reader`: Higher-level wrapper around the two queues above for cleaner code.
- `server_task`: Unit of work pushed into `server_queue`.
- `server_task_result`: Unit of result pushed into `server_response`.
- `server_tokens`: Unified representation of token sequences (supports both text and multimodal tokens); used by `server_task` and `server_slot`.
- `server_prompt_checkpoint`: For recurrent (e.g., RWKV) and SWA models, stores snapshots of KV cache state. Enables reuse when subsequent requests share the same prompt prefix, saving redundant computation.
- `server_models`: Standalone component for managing multiple backend instances (used in router mode). It is completely independent of `server_context`.
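
The following is a minimal, self-contained sketch of the hand-off pattern that `server_queue` and `server_response` implement. Every type and function name in it is a simplified stand-in, not the actual llama.cpp API; the point is only the producer/consumer relationship between the HTTP workers and the single `server_context` thread.

```cpp
// Minimal illustration of the task/result hand-off pattern.
// Types and names here are simplified stand-ins, not the actual llama.cpp API.
#include <condition_variable>
#include <cstdio>
#include <deque>
#include <mutex>
#include <string>
#include <thread>

struct task   { int id; std::string prompt; };   // stand-in for server_task
struct result { int id; std::string text;   };   // stand-in for server_task_result

template <typename T>
struct blocking_queue {                          // stand-in for server_queue / server_response
    std::deque<T> items;
    std::mutex mtx;
    std::condition_variable cv;

    void push(T v) {
        { std::lock_guard<std::mutex> lk(mtx); items.push_back(std::move(v)); }
        cv.notify_one();
    }
    T pop() {
        std::unique_lock<std::mutex> lk(mtx);
        cv.wait(lk, [&] { return !items.empty(); });
        T v = std::move(items.front());
        items.pop_front();
        return v;
    }
};

int main() {
    blocking_queue<task>   queue_tasks;     // HTTP workers -> server_context
    blocking_queue<result> queue_results;   // server_context -> HTTP workers

    // single "server_context" thread: consumes tasks, produces results
    std::thread ctx([&] {
        task t = queue_tasks.pop();
        queue_results.push({t.id, "completion for: " + t.prompt});
    });

    // an "HTTP worker" submits a task and blocks until its result arrives
    queue_tasks.push({42, "Hello"});
    result r = queue_results.pop();
    std::printf("task %d -> %s\n", r.id, r.text.c_str());

    ctx.join();
    return 0;
}
```

In the real server there are many HTTP worker threads rather than one, and each result is matched back to the request that produced it (by task id), which is part of what `server_response_reader` abstracts over.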

```mermaid
graph TD
    API_User <--> server_http_context
    server_http_context <-- router mode --> server_models
    server_http_context <-- inference mode --> server_routes
    server_routes -- server_task --> server_queue
    subgraph server_context
        server_queue --> server_slot
        server_slot -- server_task_result --> server_response
        server_slot[multiple server_slot]
    end
    server_response --> server_routes
```

TODO: explain how batching is handled by `server_slot`

### Thread Management

`server_context` runs on a dedicated single thread. Because it is single-threaded, heavy post-processing (especially after token generation) should be avoided, as it directly impacts multi-sequence throughput.
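
The sketch below illustrates the shape of that thread's main loop and how pending tokens from all active slots are gathered into a single batch per decode step. Everything in it is a simplified stand-in (no llama.cpp types or calls are used); treat it as a conceptual outline, not the actual implementation.

```cpp
// Illustrative sketch of the single-threaded server_context loop and cross-slot
// batching. All types below are simplified stand-ins, not the llama.cpp API.
#include <cstdio>
#include <vector>

struct slot_state {             // stand-in for server_slot
    int  id;
    bool active   = false;
    int  n_remain = 0;          // tokens still to generate for this request
    int  next_tok = 0;          // next token to feed for this sequence
};

struct batch_item { int seq_id; int token; };

// stand-in for the decode call: evaluates all sequences in one forward pass
static void decode(const std::vector<batch_item> & batch) {
    std::printf("decode: %zu tokens in one batch\n", batch.size());
}

int main() {
    std::vector<slot_state> slots = {
        {0, true, 3, 100},
        {1, true, 2, 200},
    };

    // main loop of the server_context thread (simplified): each iteration
    // gathers one pending token per active slot into a single batch, runs one
    // decode over all sequences, then updates each slot with its result
    bool any_active = true;
    while (any_active) {
        std::vector<batch_item> batch;
        for (auto & s : slots) {
            if (s.active) {
                batch.push_back({s.id, s.next_tok++});
            }
        }
        if (batch.empty()) break;

        decode(batch);          // one forward pass serves every active slot

        any_active = false;
        for (auto & s : slots) {
            if (s.active && --s.n_remain == 0) s.active = false;
            any_active = any_active || s.active;
        }
    }
    return 0;
}
```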

Each incoming HTTP request is handled by its own thread, managed by the HTTP library. The following operations are performed in HTTP worker threads:

- JSON request parsing
- Chat template application
- Tokenization
- Conversion of `server_task_result` into the final JSON response
- Error formatting into JSON
- Tracking of partial/incremental responses (e.g., streaming tool calls or reasoning steps)
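
The sketch below shows how these steps might line up inside a single worker thread, with stand-in functions for each stage (none of these names are real llama.cpp identifiers). The important property is that all of the expensive per-request work stays in the worker, and only a ready-to-run task crosses over to `server_context`, via the queue pattern sketched earlier.

```cpp
// Simplified sketch of the per-request pipeline in one HTTP worker thread.
// All names are illustrative stand-ins for the real llama.cpp types.
#include <cstdio>
#include <string>
#include <vector>

struct server_task_stub        { std::vector<int> tokens; bool stream; };
struct server_task_result_stub { std::string piece; bool is_final; };

// stand-ins for the expensive per-request steps that stay in the worker thread
static std::string      apply_chat_template(const std::string & user_msg) { return "<|user|>" + user_msg; }
static std::vector<int> tokenize(const std::string & text)                { return std::vector<int>(text.size(), 1); }
static std::string      to_json_chunk(const server_task_result_stub & r)  { return "{\"content\":\"" + r.piece + "\"}"; }

// stand-ins for the hand-off to/from server_context (see the queue sketch above)
static void post_task(const server_task_stub &)  {}
static server_task_result_stub next_result()     { return {"hi", true}; }

void handle_completion_request(const std::string & user_msg) {
    // 1. parse the request JSON (omitted here) and apply the chat template
    std::string prompt = apply_chat_template(user_msg);

    // 2. tokenize (still in the worker thread)
    server_task_stub task { tokenize(prompt), /*stream=*/true };

    // 3. hand the task to server_context and stream results back
    post_task(task);
    for (;;) {
        server_task_result_stub r = next_result();
        // 4. convert each partial result to JSON here, not in server_context
        std::printf("%s\n", to_json_chunk(r).c_str());
        if (r.is_final) break;
    }
}

int main() { handle_completion_request("Hello"); return 0; }
```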

**Best practices to follow:**

- All JSON formatting and chat template logic must stay in the HTTP layer.
- Avoid passing raw JSON between the HTTP layer and `server_slot`. Instead, parse everything into native C++ types as early as possible.
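
As an illustration of the second point, a request can be converted into a plain struct right at the HTTP boundary. The sketch below uses `nlohmann::json` for parsing; the struct and field names are hypothetical, but the idea matches the practice above: once `parse_completion_request` returns, nothing downstream needs to touch JSON.

```cpp
// Sketch: convert the request JSON into plain C++ types at the HTTP boundary.
// The struct and field names are illustrative, not the actual server types.
#include <cstdio>
#include <nlohmann/json.hpp>
#include <string>

using json = nlohmann::json;

struct completion_params {      // native representation handed onwards
    std::string prompt;
    float       temperature = 0.8f;
    int         n_predict   = -1;
    bool        stream      = false;
};

static completion_params parse_completion_request(const std::string & body) {
    const json j = json::parse(body);

    completion_params params;
    params.prompt      = j.value("prompt", std::string());
    params.temperature = j.value("temperature", params.temperature);
    params.n_predict   = j.value("n_predict", params.n_predict);
    params.stream      = j.value("stream", params.stream);

    // from this point on, only completion_params travels towards the slot;
    // the json object never leaves the HTTP layer
    return params;
}

int main() {
    completion_params p = parse_completion_request(R"({"prompt":"Hello","stream":true})");
    std::printf("prompt=%s stream=%d\n", p.prompt.c_str(), p.stream);
    return 0;
}
```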

### Testing

`llama-server` includes an automated test suite based on `pytest`.

The framework automatically starts a `llama-server` instance, sends requests, and validates responses.

For detailed instructions, see the [test documentation](./tests/README.md).

### Notable Related PRs

- Initial server implementation: https://github.com/ggml-org/llama.cpp/pull/1443
- Parallel decoding support: https://github.com/ggml-org/llama.cpp/pull/3228
- Refactor introducing `server_queue` and `server_response`: https://github.com/ggml-org/llama.cpp/pull/5065
- Reranking endpoint: https://github.com/ggml-org/llama.cpp/pull/9510
- Multimodal model support (`libmtmd`): https://github.com/ggml-org/llama.cpp/pull/12898
- Unified KV cache handling: https://github.com/ggml-org/llama.cpp/pull/16736
- Separation of HTTP logic into dedicated files: https://github.com/ggml-org/llama.cpp/pull/17216
- Large-scale code base split into smaller files: https://github.com/ggml-org/llama.cpp/pull/17362
- Introduction of router mode: https://github.com/ggml-org/llama.cpp/pull/17470

## Web UI

The project includes a web-based user interface for interacting with `llama-server`. It supports both single-model (`MODEL` mode) and multi-model (`ROUTER` mode) operation.

The SvelteKit-based Web UI was introduced in this PR: https://github.com/ggml-org/llama.cpp/pull/14839

### Features

- **Chat interface** with streaming responses
- **Multi-model support** (ROUTER mode) - switch between models, auto-load on selection
- **Modality validation** - ensures the selected model supports the conversation's attachments (images, audio)
- **Conversation management** - branching, regeneration, editing with history preservation
- **Attachment support** - images, audio, PDFs (with vision/text fallback)
- **Configurable parameters** - temperature, top_p, etc., synced with server defaults
- **Dark/light theme**

### Tech Stack

- **SvelteKit** - frontend framework with Svelte 5 runes for reactive state
- **TailwindCSS** + **shadcn-svelte** - styling and UI components
- **Vite** - build tooling
- **IndexedDB** (Dexie) - local storage for conversations
- **LocalStorage** - user settings persistence

### Architecture

The Web UI follows a layered architecture:

```
Routes → Components → Hooks → Stores → Services → Storage/API
```

- **Stores** - reactive state management (`chatStore`, `conversationsStore`, `modelsStore`, `serverStore`, `settingsStore`)
- **Services** - stateless API/database communication (`ChatService`, `ModelsService`, `PropsService`, `DatabaseService`)
- **Hooks** - reusable logic (`useModelChangeValidation`, `useProcessingState`)

For detailed architecture diagrams, see [`tools/server/webui/docs/`](webui/docs/):

- `high-level-architecture.mmd` - full architecture with all modules
- `high-level-architecture-simplified.mmd` - simplified overview
- `data-flow-simplified-model-mode.mmd` - data flow for single-model mode
- `data-flow-simplified-router-mode.mmd` - data flow for multi-model mode
- `flows/*.mmd` - detailed per-domain flows (chat, conversations, models, etc.)

### Development

```sh
# make sure you have Node.js installed
cd tools/server/webui
npm i

# run dev server (with hot reload)
npm run dev

# run tests
npm run test

# build production bundle
npm run build
```

After `public/index.html.gz` has been generated, rebuild `llama-server` as described in the [build](#build) section to include the updated UI.

**Note:** The Vite dev server automatically proxies API requests to `http://localhost:8080`. Make sure `llama-server` is running on that port during development.