Commit 77d496a

Update README.md

* Re-written with DeepSeek R1 help.

1 parent 4298a0f commit 77d496a


README.md

Lines changed: 81 additions & 63 deletions

# 🦙 llama.ui - Minimal Interface for Local AI Companion ✨

**Tired of complex AI setups?** 😩 `llama.ui` is an open-source desktop application that provides a beautiful, user-friendly interface for interacting with large language models (LLMs) powered by `llama.cpp`. Designed for simplicity and privacy 🔒, this project lets you chat with powerful quantized models on your local machine - no cloud required! 🚫☁️

## TL;DR

This repository is a fork of [llama.cpp](https://github.com/ggml-org/llama.cpp) WebUI with:

- Fresh new styles 🎨
- Extra functionality ⚙️
- Smoother experience ✨

## 🚀 Getting Started in 60 Seconds!

### 💻 Standalone Mode (Zero Installation)

1. ✨ Open our [hosted UI instance](https://olegshulyakov.github.io/llama.ui/)
2. ⚙️ Click the gear icon → General settings
3. 🌐 Set "Base URL" to your local llama.cpp server (e.g. `http://localhost:8080`)
4. 🎉 Start chatting with your AI!
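
Before pointing the UI at your server, it's worth confirming the server is actually reachable. A minimal sketch, assuming a stock `llama-server` listening on port 8080 (it exposes a `/health` endpoint and an OpenAI-compatible chat API):

```bash
# Should return {"status":"ok"} once the model has finished loading
curl http://localhost:8080/health

# Smoke-test a chat completion through the OpenAI-compatible endpoint
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello!"}]}'
```

If both respond, the "Base URL" above is good to go. 🎯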

<details><summary><b>🔧 Need HTTPS magic for your local instance? Try this mitmproxy hack!</b></summary>
<p>

**Uh-oh!** Browsers block HTTP requests from HTTPS sites 😤. Since `llama.cpp` uses HTTP, we need a bridge 🌉. Enter [mitmproxy](https://www.mitmproxy.org/) - our traffic wizard! 🧙‍♂️

**Local setup:**

```bash
mitmdump -p 8443 --mode reverse:http://localhost:8080/
```

**Docker quickstart:**

```bash
docker run -it -p 8443:8443 mitmproxy/mitmproxy mitmdump -p 8443 --mode reverse:http://localhost:8080/
```
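
One caveat with the container route: inside the container, `localhost` points at the container itself, not your machine, so the proxy can't see a llama.cpp server running on the host. A sketch of two common workarounds (host networking is Linux-only; `host.docker.internal` is the Docker Desktop equivalent):

```bash
# Linux: share the host's network namespace so localhost:8080 resolves to the host
docker run -it --network host mitmproxy/mitmproxy mitmdump -p 8443 --mode reverse:http://localhost:8080/

# Docker Desktop (macOS/Windows): use the special host alias instead
docker run -it -p 8443:8443 mitmproxy/mitmproxy mitmdump -p 8443 --mode reverse:http://host.docker.internal:8080/
```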

**Pro-tip with Docker Compose:**

```yml
services:
  mitmproxy:
    container_name: mitmproxy
    image: mitmproxy/mitmproxy:latest
    ports:
      - '8443:8443' # 🔁 Port magic happening here!
    command: mitmdump -p 8443 --mode reverse:http://localhost:8080/
    # ... (other config)
```

> ⚠️ **Certificate Tango Time!**
>
> 1. Visit http://localhost:8443
> 2. Click "Trust this certificate" 🤝
> 3. Reload the 🦙 llama.ui page 🔄
> 4. Profit! 💸

**Voilà!** You've hacked the HTTPS barrier! 🎩✨
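
To confirm the bridge works end-to-end before reloading the UI, a quick check from the terminal helps (a sketch; `-k` skips verification of mitmproxy's self-signed certificate):

```bash
curl -k https://localhost:8443/health
```

Then set "Base URL" in the UI to `https://localhost:8443` instead of the plain HTTP address.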

</p>
</details>

### 🖥️ Full Local Installation (Power User Edition)

1. 📦 Grab the latest release from our [releases page](https://github.com/olegshulyakov/llama.ui/releases)
2. 🗜️ Unpack the archive (feel that excitement! 🤩)
3. ⚡ Fire up your llama.cpp server:

**Linux/MacOS:**

```bash
# --host 0.0.0.0 listens on every interface; use 127.0.0.1 to keep it local-only
llama-server \
  --host 0.0.0.0 \
  --port 8080 \
  --path "/path/to/llama.ui" \
  -m models/llama-2-7b.Q4_0.gguf \
  --ctx-size 4096
```

**Windows:**

```bat
llama-server ^
  --host 0.0.0.0 ^
  --port 8080 ^
  --path "C:\path\to\llama.ui" ^
  -m models\mistral-7b.Q4_K_M.gguf ^
  --ctx-size 4096
```
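
No GGUF on disk yet? Recent llama.cpp builds can fetch one straight from Hugging Face with the `-hf` flag (a convenience sketch; the model name is just an example - check `llama-server --help` on your build):

```bash
# Downloads and caches the model on first run, then serves it as usual
llama-server --host 0.0.0.0 --port 8080 \
  --path "/path/to/llama.ui" \
  -hf ggml-org/gemma-3-1b-it-GGUF
```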

4. 🌐 Visit http://localhost:8080 and meet your new AI buddy! 🤖❤️

## 🌟 Join Our Awesome Community!

**We're building something special together!** 🚀

- 🎯 **PRs are welcome!** (Seriously, we high-five every contribution! ✋)
- 🐛 **Bug squashing?** Yes please! 🧯
- 📚 **Documentation heroes** needed! 🦸
- ✨ **Make magic** with your commits! (Follow [Conventional Commits](https://www.conventionalcommits.org))

### 🛠️ Developer Wonderland

**Prerequisites:**

- 💻 macOS/Windows/Linux
- ⬢ Node.js >= 22
- 🦙 Local [llama.cpp server](https://github.com/ggml-org/llama.cpp/tree/master/tools/server) humming along

**Build the future:**

```bash
npm ci        # 📦 Grab dependencies
npm run build # 🔨 Craft the magic
npm start     # 🎬 Launch dev server (http://localhost:5173) for live-coding bliss! 🔥
```

## 📜 License - Freedom First!

llama.ui is proudly **MIT licensed** - go build amazing things! 🚀 See [LICENSE](LICENSE) for details.

---

<p align="center">
Made with ❤️ and ☕ by humans who believe in private AI
</p>
