@@ -9,11 +9,11 @@ image: ./img/posts/llms-py-ui/bg.webp
 [llms.py](https://github.com/ServiceStack/llms) is a lightweight OSS CLI, API and ChatGPT-like alternative to Open WebUI
 for accessing multiple LLMs that still only requires 1 (aiohttp) dependency, entirely offline, with all data kept private in browser storage.
 
-## v2.0.19
+## v2.0.24
 
 ### Metrics and Analytics
 
-We're happy to announce the next major release of **llms.py v2.0.19** now includes API pricing for all premium LLMs,
+We're happy to announce that the next major release of **llms.py v2.0.24** now includes API pricing for all premium LLMs,
 observability with detailed usage and metric insights, so you're better able to analyze and track your
 spend within the UI.
 
@@ -63,6 +63,32 @@ Activity Logs are maintained independently of the Chat History so you can clear
 without losing the detailed Activity Logs of your AI requests. Likewise you can delete Activity Logs
 without losing your Chat History.
 
+### Check Providers
+
+This release also adds the ability to check the status of all configured providers, verifying that they're
+reachable and configured correctly and measuring their response times for the simplest `1+1=` request:
+
+Check all models for a provider:
+
+::: sh
+llms --check groq
+:::
+
+Check specific models for a provider:
+
+::: sh
+llms --check groq kimi-k2 llama4:400b gpt-oss:120b
+:::
+
+:::{.wideshot}
+[![llms-check.webp](/img/posts/llms-py-ui/llms-check.webp)](/img/posts/llms-py-ui/llms-check.webp)
+:::
+
+As these checks are a good indicator of the reliability and speed you can expect from different providers, we've
+created a [test-providers.yml](https://github.com/ServiceStack/llms/actions/workflows/test-providers.yml) GitHub
+Action that tests the response times of all configured providers and models, with results frequently published to
+[/checks/latest.txt](https://github.com/ServiceStack/llms/blob/main/docs/checks/latest.txt).
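+
+The published results can also be fetched directly from the command line; a minimal sketch, assuming the standard
+raw.githubusercontent.com path for the blob linked above:
+
+::: sh
+curl -s https://raw.githubusercontent.com/ServiceStack/llms/main/docs/checks/latest.txt
+:::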
+
 ## ChatGPT, but Local 🎯
 
 In keeping with the simplicity goals of [llms.py](https://github.com/ServiceStack/llms), its [/ui](https://github.com/ServiceStack/llms/tree/main/llms/ui)