# 🦙 llama.ui - Minimal Interface for Your Local AI Companion ✨

**Tired of complex AI setups?** 😩 `llama.ui` is an open-source desktop application that provides a beautiful ✨, user-friendly interface for interacting with large language models (LLMs) powered by `llama.cpp`. Designed for simplicity and privacy 🔒, this project lets you chat with powerful quantized models on your local machine - no cloud required! 🚫☁️
## ⚡ TL;DR

This repository is a fork of the [llama.cpp](https://github.com/ggml-org/llama.cpp) WebUI with:

- Fresh new styles 🎨
- Extra functionality ⚙️
- Smoother experience ✨
## 🚀 Getting Started in 60 Seconds!

### 💻 Standalone Mode (Zero Installation)

1. ✨ Open our [hosted UI instance](https://olegshulyakov.github.io/llama.ui/)
2. ⚙️ Click the gear icon → General settings
3. 🌐 Set "Base URL" to your local llama.cpp server address (e.g. `http://localhost:8080`) - quick sanity check below
4. 🎉 Start chatting with your AI!
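
Before you point the UI at your server, it's worth confirming it actually answers. A minimal sanity check, assuming a stock `llama.cpp` build (which exposes a `/health` endpoint):

```bash
curl http://localhost:8080/health
# a JSON reply like {"status":"ok"} means the server is ready
```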

<details><summary><b>🔧 Need HTTPS magic for your local instance? Try this mitmproxy hack!</b></summary>
<p>

**Uh-oh!** Browsers block HTTP requests from HTTPS sites 😤. Since `llama.cpp` serves plain HTTP, we need a bridge 🌉 to relay requests between HTTPS and HTTP. Enter [mitmproxy](https://www.mitmproxy.org/) - our traffic wizard! 🧙‍♂️

**Local setup:**

```bash
mitmdump -p 8443 --mode reverse:http://localhost:8080/
```

**Docker quickstart:**

```bash
# inside the container, "localhost" is the container itself, so target the host
# (on Linux, also pass: --add-host=host.docker.internal:host-gateway)
docker run -it -p 8443:8443 mitmproxy/mitmproxy mitmdump -p 8443 --mode reverse:http://host.docker.internal:8080/
```

**Pro-tip with Docker Compose:**

```yml
services:
  mitmproxy:
    container_name: mitmproxy
    image: mitmproxy/mitmproxy:latest
    ports:
      - '8443:8443' # 🔁 Port magic happening here!
    extra_hosts:
      - 'host.docker.internal:host-gateway' # lets the container reach the host's llama.cpp
    command: mitmdump -p 8443 --mode reverse:http://host.docker.internal:8080/
    # ... (other config)
```

> ⚠️ **Certificate Tango Time!**
>
> 1. Visit https://localhost:8443
> 2. Accept mitmproxy's self-signed certificate ("Trust this certificate") 🤝
> 3. Reload the 🦙 llama.ui page 🔄
> 4. Profit! 💸

**Voilà!** You've hacked the HTTPS barrier! 🎩✨
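
To verify the whole chain end to end, curl through the proxy; `-k` lets curl accept mitmproxy's self-signed certificate, and `/health` assumes a stock `llama.cpp` server:

```bash
curl -k https://localhost:8443/health
```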

</p>
</details>

### 🖥️ Full Local Installation (Power User Edition)

1. 📦 Grab the latest release from our [releases page](https://github.com/olegshulyakov/llama.ui/releases)
2. 🗜️ Unpack the archive (feel that excitement! 🤩)
3. ⚡ Fire up your llama.cpp server:

**Linux/MacOS:**

```bash
llama-server --host 0.0.0.0 \
  --port 8080 \
  --path "/path/to/llama.ui" \
  -m models/llama-2-7b.Q4_0.gguf \
  --ctx-size 4096
```

**Windows:**

```bat
llama-server ^
  --host 0.0.0.0 ^
  --port 8080 ^
  --path "C:\path\to\llama.ui" ^
  -m models\mistral-7b.Q4_K_M.gguf ^
  --ctx-size 4096
```
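
No GGUF file on disk yet? Recent `llama.cpp` builds can also pull a model straight from Hugging Face via `--hf-repo`/`--hf-file`; a sketch, assuming your build includes download support (the repo and file below are just examples):

```bash
llama-server --host 0.0.0.0 --port 8080 \
  --path "/path/to/llama.ui" \
  --hf-repo TheBloke/Llama-2-7B-GGUF \
  --hf-file llama-2-7b.Q4_0.gguf
```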

4. 🌐 Visit http://localhost:8080 and meet your new AI buddy! 🤖❤️

## 🌟 Join Our Awesome Community!

**We're building something special together!** 🚀

- 🎯 **PRs are welcome!** (Seriously, we high-five every contribution! ✋)
- 🐛 **Bug squashing?** Yes please! 🧯
- 📚 **Documentation heroes** needed! 🦸
- ✨ **Make magic** with your commits! (Follow [Conventional Commits](https://www.conventionalcommits.org) - example below)
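
For instance, a Conventional Commits message takes the `type(scope): summary` shape (the change shown here is hypothetical):

```bash
git commit -m "feat(chat): add markdown table rendering"
```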

### 🛠️ Developer Wonderland

**Prerequisites:**

- 💻 macOS/Windows/Linux
- ⬢ Node.js >= 22
- 🦙 A local [llama.cpp server](https://github.com/ggml-org/llama.cpp/tree/master/tools/server) humming along

**Build the future:**

```bash
npm ci         # 📦 Grab dependencies
npm run build  # 🔨 Craft the magic
npm start      # 🎬 Launch the dev server at http://localhost:5173 for live-coding bliss! 🔥
```

## 📜 License - Freedom First!

llama.ui is proudly **MIT licensed** - go build amazing things! 🚀 See [LICENSE](LICENSE) for details.

---

<p align="center">
Made with ❤️ and ☕ by humans who believe in private AI
</p>