
Commit 73d8078: Update llms-py UI (1 parent 8e7d77f)

5 files changed: +69 −16 lines


MyApp/_posts/2025-10-01_llms-py-ui.md

Lines changed: 69 additions & 16 deletions
@@ -1,22 +1,84 @@
 ---
-title: llms.py gets a UI 🚀
+title: llms.py UI
 summary: Simple ChatGPT-like UI to access ALL Your LLMs, Locally or Remotely!
 tags: [llms,ai,python]
 author: Demis Bellot
 image: ./img/posts/llms-py-ui/bg.webp
 ---
 
-# ChatGPT, but Local 🎯
+[llms.py](https://github.com/ServiceStack/llms) is a lightweight OSS CLI, API and ChatGPT-like alternative to Open WebUI
+for accessing multiple LLMs that still only requires 1 (aiohttp) dependency, entirely offline, with all data kept private in browser storage.
 
-Simple ChatGPT-like UI to access ALL Your LLMs, Locally or Remotely!
+## v2.0.17
 
-In keeping with the simplicity and goals of [llms.py](https://github.com/ServiceStack/llms) the new UI still only
-requires its single **aiohttp** python dependency for all client and server features.
+### Metrics and Analytics
 
-The [/ui](https://github.com/ServiceStack/llms/tree/main/ui) is small, fast and lightweight and follows the
-[Simple Modern JavaScript](https://servicestack.net/posts/javascript) approach of leveraging native JS Modules support
+We're happy to announce that the next major release of **llms.py v2.0.17** now includes API pricing for all premium LLMs
+and observability with detailed usage and metric insights, so you're better able to analyze and track your
+spend within the UI.
+
+#### Metrics everywhere
+
+- **Cost per Model** - Model selector now displays the Input and Output cost per 1M tokens for every premium model.
+- **Thread-Level Cost & Token Metrics** - Total cost and token count for every conversation thread, displayed in your history sidebar.
+- **Per-Message Token Breakdown** - Each individual message, both from you and the AI, now clearly shows its token count.
+- **Thread Summaries** - At the bottom of every new conversation, you'll find a consolidated summary detailing the total cost, tokens consumed (input and output), number of requests, and overall response time for that entire thread.
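The arithmetic these metrics imply is simple: providers price tokens per 1M, so a request's cost is its token counts scaled by the per-1M prices. A minimal Python sketch, using made-up prices and illustrative field names rather than llms.py's actual pricing data or schema:

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_price_per_1m: float, output_price_per_1m: float) -> float:
    """Dollar cost of a single request, given per-1M-token prices."""
    return (input_tokens * input_price_per_1m
            + output_tokens * output_price_per_1m) / 1_000_000

def thread_totals(requests: list[dict]) -> dict:
    """Aggregate per-request metrics into a thread-level summary."""
    return {
        "cost": sum(r["cost"] for r in requests),
        "input_tokens": sum(r["input_tokens"] for r in requests),
        "output_tokens": sum(r["output_tokens"] for r in requests),
        "requests": len(requests),
    }

# e.g. 1K input + 2K output tokens at hypothetical $3/$15 per 1M tokens
cost = request_cost(1_000, 2_000, 3.0, 15.0)  # ≈ $0.033
```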
+
+:::{.wideshot}
+[![analytics-messages.webp](/img/posts/llms-py-ui/analytics-messages.webp)](/img/posts/llms-py-ui/analytics-messages.webp)
+:::
+
+The screenshot also shows support for the new **Edit** and **Redo** features that appear after hovering over any **User**
+message, to modify or re-run an existing prompt.
+
+### Monthly Cost Analytics
+
+The **Cost Analytics** page provides a comprehensive overview of the usage costs for every day within a given month.
+Clicking on a day expands it to show a detailed breakdown of the costs incurred per model and provider.
+
+:::{.wideshot}
+[![analytics-costs.webp](/img/posts/llms-py-ui/analytics-costs.webp)](/img/posts/llms-py-ui/analytics-costs.webp)
+:::
+
+### Monthly Tokens Analytics
+
+Similarly, the **Tokens Analytics** page provides a comprehensive overview of the token usage for every day within a
+given month. Clicking on a day expands it to show a detailed breakdown of the tokens consumed per model and provider.
+
+:::{.wideshot}
+[![analytics-tokens.webp](/img/posts/llms-py-ui/analytics-tokens.webp)](/img/posts/llms-py-ui/analytics-tokens.webp)
+:::
+
+### Monthly Activity Log
+
+The **Activity Log** page lets you view the individual requests that make up the daily cost and token usage.
+The page provides a comprehensive and granular overview of all your AI interactions and requests, including:
+Model, Provider, Partial Prompt, Input & Output Tokens, Cost, Response Time, Speed, Date & Time.
+
+:::{.wideshot}
+[![analytics-activity.webp](/img/posts/llms-py-ui/analytics-activity.webp)](/img/posts/llms-py-ui/analytics-activity.webp)
+:::
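Each Activity Log row can be thought of as one small record per request. A hypothetical Python sketch of such an entry, with illustrative field names (not llms.py's actual schema), showing how a Speed column can be derived from output tokens and response time:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ActivityEntry:
    """One logged AI request; field names are illustrative only."""
    model: str
    provider: str
    prompt_preview: str   # partial prompt shown in the log
    input_tokens: int
    output_tokens: int
    cost: float           # dollars
    response_ms: int      # response time in milliseconds
    created: datetime

    @property
    def speed(self) -> float:
        """Generation speed in output tokens per second."""
        return self.output_tokens / (self.response_ms / 1000)
```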
+
+Activity Logs are maintained independently of the Chat History, so you can clear or clean up your Chat History
+without losing the detailed Activity Logs of your AI requests. Likewise, you can delete Activity Logs
+without losing your Chat History.
+
+## ChatGPT, but Local 🎯
+
+In keeping with the simplicity goals of [llms.py](https://github.com/ServiceStack/llms), its [/ui](https://github.com/ServiceStack/llms/tree/main/llms/ui)
+is small and fast by following the
+[Simple Modern JavaScript](https://servicestack.net/posts/javascript) approach of leveraging JS Modules support
 in Browsers to avoid needing any npm dependencies or build tools.
 
+### Offline, Fast and Private
+
+OSS & Free: no sign-ups, no ads, no tracking. All data is stored locally in the browser's IndexedDB and can
+be exported and imported to transfer chat histories between different browsers.
+
+A goal for **llms.py** is to only require a single Python **aiohttp** dependency for minimal risk of conflicts within
+multiple Python Environments, e.g. it's an easy drop-in inside a ComfyUI Custom Node as it doesn't require any
+additional dependencies.
+
 ## Install
 
 :::sh
@@ -44,15 +106,6 @@ You can configure which OpenAI compatible providers and models you want to use b
 Whilst the [ui.json](https://github.com/ServiceStack/llms/blob/main/llms/ui.json) configuration for the UI is maintained
 in `~/.llms/ui.json` where you can configure your preferred system prompts and other defaults.
 
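A minimal sketch of how such a config could be read, assuming `ui.json` is plain JSON; the default path mirrors the `~/.llms/ui.json` location mentioned above, and the keys used in the usage example are illustrative, not documented llms.py settings:

```python
import json
from pathlib import Path

def load_ui_config(path: Path = Path.home() / ".llms" / "ui.json") -> dict:
    """Read the UI config as a dict, or return {} if it doesn't exist yet."""
    if path.exists():
        return json.loads(path.read_text())
    return {}
```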
-### Fast, Local and Private
-
-OSS & Free, no sign ups, no ads, no tracking, etc. All data is stored locally in the browser's IndexedDB that can
-be exported and imported to transfer chat histories between different browsers.
-
-A goal for **llms.py** is to limit itself to its single python **aiohttp** dependency for minimal risk of
-conflicts and friction within multiple Python Environments, e.g. `llms.py` is an easy drop-in inside a
-ComfyUI Custom Node since it requires no any additional dependencies.
-
 ### Import / Export
 
 All data is stored locally in the browser's IndexedDB which, as it is tied to the browser's origin, you'll be
