Commit fa83f99

README: Add setup instructions for Open WebUI (#954)
1 parent e9d9242 commit fa83f99

File tree

1 file changed (+58, −0)


README.org

Lines changed: 58 additions & 0 deletions
@@ -14,6 +14,7 @@ gptel is a simple Large Language Model chat client for Emacs, with support for m
| Anthropic (Claude) | ✓ | [[https://www.anthropic.com/api][API key]] |
| Gemini | ✓ | [[https://makersuite.google.com/app/apikey][API key]] |
| Ollama | ✓ | [[https://ollama.ai/][Ollama running locally]] |
| Open WebUI | ✓ | [[https://openwebui.com/][Open WebUI running locally]] |
| Llama.cpp | ✓ | [[https://github.com/ggml-org/llama.cpp/tree/master/tools/server#quick-start][Llama.cpp running locally]] |
| Llamafile | ✓ | [[https://github.com/Mozilla-Ocho/llamafile#quickstart][Local Llamafile server]] |
| GPT4All | ✓ | [[https://gpt4all.io/index.html][GPT4All running locally]] |
@@ -102,6 +103,7 @@ gptel uses Curl if available, but falls back to the built-in url-retrieve to wor
- [[#azure][Azure]]
- [[#gpt4all][GPT4All]]
- [[#ollama][Ollama]]
- [[#open-webui][Open WebUI]]
- [[#gemini][Gemini]]
- [[#llamacpp-or-llamafile][Llama.cpp or Llamafile]]
- [[#kagi-fastgpt--summarizer][Kagi (FastGPT & Summarizer)]]
@@ -335,6 +337,62 @@ The above code makes the backend available to select. If you want it to be the

#+html: </details>

#+html: <details><summary>
**** Open WebUI
#+html: </summary>

[[https://openwebui.com/][Open WebUI]] is an open-source, self-hosted system that provides a multi-user web chat interface and an API endpoint for accessing LLMs, particularly LLMs running locally on inference servers like Ollama.

Because it presents an OpenAI-compatible endpoint, you use ~gptel-make-openai~ to register it as a backend.

For instance, you can use this form to register a backend for a local instance of Open WebUI served via http on port 3000:

#+begin_src emacs-lisp
(gptel-make-openai "OpenWebUI"
  :host "localhost:3000"
  :protocol "http"
  :key "KEY_FOR_ACCESSING_OPENWEBUI"
  :endpoint "/api/chat/completions"
  :stream t
  :models '("gemma3n:latest"))
#+end_src

Or, if you are running Open WebUI on another host on your local network (~box.local~), serving via https with self-signed certificates, this will work:

#+begin_src emacs-lisp
(gptel-make-openai "OpenWebUI"
  :host "box.local"
  :curl-args '("--insecure")          ; needed for self-signed certs
  :key "KEY_FOR_ACCESSING_OPENWEBUI"
  :endpoint "/api/chat/completions"
  :stream t
  :models '("gemma3n:latest"))
#+end_src
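
Either configuration resolves to an ordinary OpenAI-style chat-completions request against the ~/api/chat/completions~ endpoint. As a rough illustration only (a sketch of the wire format, not gptel's internals), assembling the URL, headers, and JSON body from the ~:host~, ~:protocol~, and ~:endpoint~ fields looks like this:

#+begin_src python
import json

def openwebui_request(host, protocol, endpoint, key, model, messages):
    """Assemble the URL, headers and JSON body of an OpenAI-compatible
    chat request to Open WebUI (illustrative; not gptel's actual code)."""
    url = f"{protocol}://{host}{endpoint}"
    headers = {
        "Authorization": f"Bearer {key}",   # the Open WebUI API key
        "Content-Type": "application/json",
    }
    body = json.dumps({"model": model, "messages": messages, "stream": True})
    return url, headers, body

url, headers, body = openwebui_request(
    "localhost:3000", "http", "/api/chat/completions",
    "KEY_FOR_ACCESSING_OPENWEBUI", "gemma3n:latest",
    [{"role": "user", "content": "Hello"}])
print(url)  # http://localhost:3000/api/chat/completions
#+end_src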

To find your API key in Open WebUI, click your user name in the bottom left, then choose Settings, then Account, and click Show next to the API Keys section.

Refer to the documentation of =gptel-make-openai= for more configuration options.

You can pick this backend from the menu when using gptel (see [[#usage][Usage]]).

***** (Optional) Set as the default gptel backend

The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=. Use this instead of the above.

#+begin_src emacs-lisp
;; OPTIONAL configuration
(setq gptel-model "gemma3n:latest"
      gptel-backend (gptel-make-openai "OpenWebUI"
                      :host "localhost:3000"
                      :protocol "http"
                      :key "KEY_FOR_ACCESSING_OPENWEBUI"
                      :endpoint "/api/chat/completions"
                      :stream t
                      :models '("gemma3n:latest")))
#+end_src

#+html: </details>

#+html: <details><summary>
**** Gemini
#+html: </summary>
