Releases: lmstudio-ai/lmstudio-python
1.6.0b1 - 2025-10-28
What's Changed
- Handle pre-formatted server API errors by @ncoghlan in #165
- API token support by @ryan-the-crayon and @ncoghlan in #163
New Contributors
- @ryan-the-crayon made their first contribution in #163
Full Changelog: 1.5.0...1.6.0b1
1.5.0 - 2025-08-22
What's Changed
- Accept any buffer instance when processing file data (by @ArEnSc; #46)
- Tool definitions now support default parameter values (by @baonudesifeizhai; #90)
- Preliminary addition of Python 3.14 to the CI test matrix (#154). Most features should work without issue, but there is (for now) a known problem with field definitions on `lmstudio.BaseModel` subclasses only being processed correctly if `from __future__ import annotations` is in effect at the time the subclass is defined.
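The Python 3.14 caveat above hinges on PEP 563: with `from __future__ import annotations` active, field annotations are stored as strings rather than evaluated eagerly at class creation time. The mechanism can be seen with a plain class, no SDK required (`Example` and `NotDefinedYet` are illustrative names):

```python
from __future__ import annotations


class Example:
    # With the future import active, this annotation is stored as the
    # string "NotDefinedYet" instead of being evaluated when the class
    # body runs, so the name does not need to exist yet.
    field: NotDefinedYet


# The raw annotation is a string, left to be resolved lazily later.
assert isinstance(Example.__annotations__["field"], str)
```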
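Default parameter values on tool definitions (#90 above) generally come from the Python function signature. A hedged sketch of that extraction using only the standard library — `tool_schema` and `search` are illustrative names, not SDK API:

```python
import inspect


def tool_schema(func):
    """Build a minimal parameter description, capturing any defaults."""
    params = {}
    for name, param in inspect.signature(func).parameters.items():
        entry = {}
        if param.annotation is not inspect.Parameter.empty:
            entry["type"] = param.annotation.__name__
        if param.default is not inspect.Parameter.empty:
            entry["default"] = param.default
        params[name] = entry
    return params


def search(query: str, max_results: int = 5) -> list:
    """Example tool: max_results carries a default the schema should expose."""
    return []


print(tool_schema(search))
# → {'query': {'type': 'str'}, 'max_results': {'type': 'int', 'default': 5}}
```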
New Contributors
- @ArEnSc made their first contribution in #148
- @baonudesifeizhai made their first contribution in #150
Full Changelog: 1.5.0b1...1.5.0
1.5.0b1
What's Changed
- When no API host is specified, the SDK now attempts to connect to the always-on LM Studio native API server ports, rather than relying on the optional HTTP REST API server being enabled on its default port (#142)
- The new methods `Client.is_valid_api_host` and `AsyncClient.is_valid_api_host` allow API host validity to be checked without creating a client instance (part of #142)
- The new methods `Client.find_default_local_api_host` and `AsyncClient.find_default_local_api_host` allow discovery of a running local API server without creating a client instance (part of #142)
- Exceptions are now more consistently raised on websocket failures (#121). Previously, clients could be left hanging indefinitely if the websocket connection failed while waiting for a response.
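Local API server discovery can be pictured as probing a short list of candidate local ports. The sketch below is a simplification with invented names (`find_local_api_host`); the real `Client.find_default_local_api_host` would also need to verify that whatever answers actually speaks the LM Studio protocol:

```python
import socket


def find_local_api_host(candidate_ports, host="127.0.0.1", timeout=0.25):
    """Return "host:port" for the first port accepting TCP connections, else None."""
    for port in candidate_ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return f"{host}:{port}"
        except OSError:
            continue  # port closed or unreachable; try the next candidate
    return None
```

A real implementation would follow the successful connection with an API handshake before trusting the port; raw TCP reachability alone proves little.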
- Updated to 2025-07-30 (release 54) lmstudio.js protocol schema (#138)
- The low level session APIs are now more explicitly private (in both the synchronous and asynchronous APIs)
Changes specific to the asynchronous API
- The asynchronous API is now considered stable and no longer emits `FutureWarning` when imported (#127 and supporting PRs)
- Asynchronous model handles now provide a `model.act()` API (#132)
- The asynchronous `model.act()` API supports asynchronous tool implementations (#134)
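Supporting asynchronous tool implementations essentially means the agent loop must await coroutine tools while still accepting plain functions. A standalone sketch of that dispatch — names like `run_tool` are illustrative, not SDK API:

```python
import asyncio
import inspect


async def run_tool(tool, **kwargs):
    """Call a tool, awaiting the result if the tool is asynchronous."""
    result = tool(**kwargs)
    if inspect.isawaitable(result):
        result = await result
    return result


def add(a: int, b: int) -> int:            # synchronous tool
    return a + b


async def fetch_length(text: str) -> int:  # asynchronous tool
    await asyncio.sleep(0)                 # stand-in for real I/O
    return len(text)


print(asyncio.run(run_tool(add, a=2, b=3)))             # → 5
print(asyncio.run(run_tool(fetch_length, text="abc")))  # → 3
```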
Changes specific to the synchronous API
- The synchronous API now implements a configurable timeout (#124). By default, a request now times out after 60 seconds without any message from the server on the relevant response channel.
- The new functions `get_sync_api_timeout()` and `set_sync_api_timeout(timeout: float | None)` allow this timeout to be queried and updated (part of #124). Setting the timeout to `None` restores the previous behaviour of blocking indefinitely while waiting for server responses.
- The synchronous API now supports invocation from `atexit` hooks (#123, #125)
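The semantics described for the timeout pair can be sketched as a module-level setting. This is an illustration of the documented behaviour, not the SDK's implementation (the validation rule is an assumption):

```python
_SYNC_API_TIMEOUT = 60.0  # default per the release notes


def get_sync_api_timeout():
    """Return the current timeout in seconds, or None for no timeout."""
    return _SYNC_API_TIMEOUT


def set_sync_api_timeout(timeout):
    """Set the timeout; None restores indefinite blocking on responses."""
    global _SYNC_API_TIMEOUT
    if timeout is not None and timeout <= 0:
        raise ValueError("timeout must be a positive number or None")  # assumed check
    _SYNC_API_TIMEOUT = timeout


set_sync_api_timeout(120.0)
assert get_sync_api_timeout() == 120.0
set_sync_api_timeout(None)  # block indefinitely, the pre-1.5.0b1 behaviour
assert get_sync_api_timeout() is None
```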
Full Changelog: 1.4.1...1.5.0b1
1.4.1
What's Changed
- Fix handling of multi-part tool results in `Chat.append` and `Chat.add_entry` (#112)
- Fix server mapping for llama quantization config settings (#111)
- More lenient handling of received config fields with omitted default values (#110)
Full Changelog: 1.4.0...1.4.1
1.4.0
1.3.2
1.3.1
1.3.0
What's Changed
- Added a dedicated `configure_default_client` API (#82)
- Runtime names of public API types now match their import names (#74)
- Tool call failures are now passed back to the LLM by default (#72)
- Synchronous clients now use fewer background threads (#77)
- Tool results now report non-ASCII characters as Unicode code points rather than as ASCII escape sequences (contributed by @jusallcaz in #80)
- The not-yet-implemented general file handling APIs have been removed from the public API (#81)
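The Unicode change for tool results (#80 above) matches the difference between JSON serialisation with and without ASCII escaping; in Python's standard library that is the `ensure_ascii` flag (whether the SDK routes tool results through `json.dumps` is an assumption):

```python
import json

payload = {"city": "München"}

# Default behaviour: non-ASCII characters become escape sequences.
assert json.dumps(payload) == '{"city": "M\\u00fcnchen"}'

# With ensure_ascii=False, the original code points are preserved.
assert json.dumps(payload, ensure_ascii=False) == '{"city": "München"}'
```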
New Contributors
- @jusallcaz made their first contribution in #80
Full Changelog: 1.2.0...1.3.0
1.2.0 - 2025-03-22
What's Changed
- Align model loading config format with lmstudio-js 47 (previously aligned with lmstudio-js 46)
  - Note: users of the experimental `gpuOffload` config setting will need to switch to setting the (still experimental) `gpu` field
- Explicitly note in relevant docstrings that details of config fields are not yet formally stabilised
- Pass previously omitted config settings to the server (#51)
- Add speculative decoding example (#55)
- Publish config retrieval APIs (#53, #54)
- Add server side token counting APIs (#57)
- Simplify prediction API type hinting (#59)
- Add preset config support in prediction APIs (#58) (Requires LM Studio 0.14+)
- Add GBNF grammar support when requesting structured responses (#60) (Requires LM Studio 0.14+)
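GBNF grammars (the format used by llama.cpp) constrain generation to strings the grammar accepts. A small illustrative grammar that limits a response to a bare yes/no answer — how it is passed through the prediction config is described in #60 and not reproduced here:

```python
# Illustrative GBNF grammar: the model may only emit "yes" or "no".
YES_NO_GRAMMAR = r"""
root ::= "yes" | "no"
"""
```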
Full Changelog: 1.1.0...1.2.0
1.1.0 - 2025-03-15
What's Changed
- Added SDK versioning policy to `README.md` (#42)
- Support Python 3.10 (#41)
- Support image input for VLMs (#34)
- Publish file preparation APIs (#37, #39)
- Add image preparation APIs (#38, #39)
- Avoid corrupting snake_case keys in structured output schemas supplied via prediction config dicts (#43)
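The snake_case fix (#43) guards against a common failure mode: config keys are converted to the server's camelCase wire convention, and a naive recursive conversion would also rewrite user-defined property names inside an embedded JSON schema. A sketch of the conversion that must be applied to config keys but not to schema keys (`to_camel` is an illustrative helper, not SDK API):

```python
import re


def to_camel(name):
    """Convert a snake_case key to camelCase for the server wire format."""
    return re.sub(r"_([a-z0-9])", lambda m: m.group(1).upper(), name)


print(to_camel("max_predicted_tokens"))  # → maxPredictedTokens

# User-supplied schema property names must be left untouched, or the
# model's structured output will no longer match the caller's schema:
schema = {"properties": {"snake_case_field": {"type": "string"}}}
```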
Full Changelog: 1.0.1...1.1.0