
Commit 81d3086 (parent a7ff01d)

Add checks

2 files changed: +28 −2 lines changed

MyApp/_posts/2025-10-01_llms-py-ui.md

Lines changed: 28 additions & 2 deletions
@@ -9,11 +9,11 @@ image: ./img/posts/llms-py-ui/bg.webp
 [llms.py](https://github.com/ServiceStack/llms) is a lightweight OSS CLI, API and ChatGPT-like alternative to Open WebUI
 for accessing multiple LLMs that still only requires 1 (aiohttp) dependency, entirely offline, with all data kept private in browser storage.
 
-## v2.0.19
+## v2.0.24
 
 ### Metrics and Analytics
 
-We're happy to announce the next major release of **llms.py v2.0.19** now includes API pricing for all premium LLMs,
+We're happy to announce that the next major release, **llms.py v2.0.24**, now includes API pricing for all premium LLMs,
 observability with detailed usage and metric insights, so you're better able to analyze and track your
 spend within the UI.

@@ -63,6 +63,32 @@ Activity Logs are maintained independently of the Chat History so you can clear
 without losing the detailed Activity Logs of your AI requests. Likewise you can delete Activity Logs
 without losing your Chat History.
 
+### Check Providers
+
+Another feature added in this release is the ability to check the status of all configured providers, testing whether they're
+reachable and configured correctly, and measuring their response times for the simplest `1+1=` request:
+
+Check all models for a provider:
+
+:::sh
+llms --check groq
+:::
+
+Check specific models for a provider:
+
+:::sh
+llms --check groq kimi-k2 llama4:400b gpt-oss:120b
+:::
+
+:::{.wideshot}
+[![llms-check.webp](/img/posts/llms-py-ui/llms-check.webp)](/img/posts/llms-py-ui/llms-check.webp)
+:::
+
+As they're a good indicator of the reliability and speed you can expect from different providers, we've created a
+[test-providers.yml](https://github.com/ServiceStack/llms/actions/workflows/test-providers.yml) GitHub Action to
+test the response times of all configured providers and models, the results of which will be frequently published to
+[/checks/latest.txt](https://github.com/ServiceStack/llms/blob/main/docs/checks/latest.txt).
+
 ## ChatGPT, but Local 🎯
 
 In keeping with the simplicity goals of [llms.py](https://github.com/ServiceStack/llms), its [/ui](https://github.com/ServiceStack/llms/tree/main/llms/ui)
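The published check results added by this commit can also be consumed directly from the command line. As a sketch: the raw-content URL below is inferred from the `docs/checks/latest.txt` blob link in the diff and is an assumption, not something stated in the commit itself.

```shell
# Fetch the latest published provider check results as plain text.
# URL is inferred from the repo's blob link (assumption): docs/checks/latest.txt on main.
curl -fsSL https://raw.githubusercontent.com/ServiceStack/llms/main/docs/checks/latest.txt
```

`-f` makes curl fail on HTTP errors and `-sSL` keeps output quiet while still following redirects and reporting failures.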