70 changes: 70 additions & 0 deletions .github/workflows/python-publish.yml
@@ -0,0 +1,70 @@
# This workflow will upload a Python Package to PyPI when a release is created

Collaborator: We have a publish.yml workflow, no need for this one.

# For more information see: https://docs.github.com/en/actions/automating-builds-and-tests/building-and-testing-python#publishing-to-package-registries

# This workflow uses actions that are not certified by GitHub.
# They are provided by a third-party and are governed by
# separate terms of service, privacy policy, and support
# documentation.

name: Upload Python Package

on:
  release:
    types: [published]

permissions:
  contents: read

jobs:
  release-build:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-python@v5
        with:
          python-version: "3.x"

      - name: Build release distributions
        run: |
          # NOTE: put your own distribution build steps here.
          python -m pip install build
          python -m build

      - name: Upload distributions
        uses: actions/upload-artifact@v4
        with:
          name: release-dists
          path: dist/

  pypi-publish:
    runs-on: ubuntu-latest
    needs:
      - release-build
    permissions:
      # IMPORTANT: this permission is mandatory for trusted publishing
      id-token: write

    # Dedicated environments with protections for publishing are strongly recommended.
    # For more information, see: https://docs.github.com/en/actions/deployment/targeting-different-environments/using-environments-for-deployment#deployment-protection-rules
    environment:
      name: pypi
      # OPTIONAL: uncomment and update to include your PyPI project URL in the deployment status:
      # url: https://pypi.org/p/YOURPROJECT
      #
      # ALTERNATIVE: if your GitHub Release name is the PyPI project version string
      # ALTERNATIVE: exactly, uncomment the following line instead:
      # url: https://pypi.org/project/YOURPROJECT/${{ github.event.release.name }}

    steps:
      - name: Retrieve release distributions
        uses: actions/download-artifact@v4
        with:
          name: release-dists
          path: dist/

      - name: Publish release distributions to PyPI
        uses: pypa/gh-action-pypi-publish@release/v1
        with:
          packages-dir: dist/
10 changes: 10 additions & 0 deletions .vscode/tasks.json
@@ -0,0 +1,10 @@
{

Collaborator: Editor specific config shouldn't be added to the repo.

(I have .vscode in my user level ~/.gitignore, but perhaps we should add it to the repo level one as well).

  "version": "2.0.0",
  "tasks": [
    {
      "type": "shell",
      "label": "Run chatbot example",
      "command": "python3 examples/chatbot.py"
    }
  ]
}
4 changes: 2 additions & 2 deletions pyproject.toml
@@ -92,7 +92,8 @@ addopts = "--strict-markers"
markers = [
    "slow: marks tests as slow (deselect with '-m \"not slow\"')",
    "lmstudio: marks tests as needing LM Studio (deselect with '-m \"not lmstudio\"')",
-    "wip: marks tests as a work-in-progress (select with '-m \"wip\"')"
+    "wip: marks tests as a work-in-progress (select with '-m \"wip\"')",
+    "asyncio: marks tests as asyncio-based",

Collaborator, suggested change: remove the added line
    "asyncio: marks tests as asyncio-based",

pytest.mark.asyncio is considered a standard marker (as long as pytest-asyncio is installed in the test execution environment), so it's allowed without needing to be explicitly listed even when strict markers are enabled.

]
# Warnings should only be emitted when being specifically tested
filterwarnings = [
@@ -102,7 +103,6 @@
log_format = "%(asctime)s %(levelname)s %(message)s"
log_date_format = "%Y-%m-%d %H:%M:%S"
# Each async test case gets a fresh event loop by default
asyncio_default_fixture_loop_scope = "function"

[tool.coverage.run]
relative_files = true
8 changes: 8 additions & 0 deletions src/lmstudio/__init__.py
@@ -17,6 +17,14 @@
from .schemas import *
from .history import *
from .json_api import *
from .json_api import (
    LMStudioPredictionError,
    LMStudioModelLoadError,
    LMStudioInputValidationError,
    LMStudioPredictionTimeoutError,
    LMStudioPredictionCancelledError,
    LMStudioPredictionRuntimeError,
)

Collaborator (on lines +20 to +27), suggested change: remove the added explicit import block.

Exporting symbols at the top level is handled by listing them in the relevant __all__ list (the one in json_api in this case).

from .async_api import *
from .sync_api import *

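The mechanism the reviewer describes can be sketched as follows. This is a standalone toy, not the real lmstudio sources: the fake module and its class names are stand-ins chosen only to mirror the discussion.

```python
# Toy demonstration of why listing a name in __all__ is enough: a star import
# copies exactly the names in __all__. Module/class names here are stand-ins,
# not the real lmstudio modules.
import types

# Build a fake "json_api" module with one public and one private name
json_api = types.ModuleType("json_api")
exec(
    "__all__ = ['LMStudioPredictionError']\n"
    "class LMStudioPredictionError(Exception): pass\n"
    "class _PrivateHelper: pass\n",
    json_api.__dict__,
)

# What `from .json_api import *` does in an __init__.py, in essence:
pkg_namespace = {name: getattr(json_api, name) for name in json_api.__all__}

print(sorted(pkg_namespace))  # ['LMStudioPredictionError']
```

Because the star import honours `__all__`, adding the new exception names to the `__all__` list in json_api makes the explicit import block in `__init__.py` redundant.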
22 changes: 22 additions & 0 deletions src/lmstudio/json_api.py
@@ -418,10 +418,32 @@ def __init__(self, message: str) -> None:
        super().__init__(message, None)


@sdk_public_type

Collaborator (on lines +421 to +422), suggested change: remove the stray @sdk_public_type line.

Duplicated decorator

@sdk_public_type
class LMStudioPredictionError(LMStudioServerError):
    """Problems reported by the LM Studio instance during a model prediction."""

@sdk_public_type

Collaborator: The more granular subclasses and the session scoped fixture loading would be easier to follow if placed in different PRs.


class LMStudioModelLoadError(LMStudioPredictionError):
    """Raised when a model fails to load for a prediction."""

@sdk_public_type
class LMStudioInputValidationError(LMStudioPredictionError):
    """Raised when input to a prediction is invalid (e.g., bad prompt, bad parameters)."""

@sdk_public_type
class LMStudioPredictionTimeoutError(LMStudioPredictionError):
    """Raised when a prediction times out before completion."""

@sdk_public_type
class LMStudioPredictionCancelledError(LMStudioPredictionError):
    """Raised when a prediction is cancelled before completion."""

@sdk_public_type
class LMStudioPredictionRuntimeError(LMStudioPredictionError):
    """Raised for unexpected runtime errors during prediction."""


@sdk_public_type
class LMStudioClientError(LMStudioError):
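For context on how the granular subclasses would be used by callers, here is a hedged sketch. These are plain stand-in Exception classes mirroring the names in the diff, not the real SDK types (which also derive from LMStudioServerError):

```python
# Stand-in mimic of the proposed hierarchy (names mirror the diff; the real
# classes live in lmstudio.json_api and derive from LMStudioServerError).
class LMStudioPredictionError(Exception):
    """Problems reported during a model prediction."""

class LMStudioModelLoadError(LMStudioPredictionError):
    """Raised when a model fails to load for a prediction."""

class LMStudioPredictionTimeoutError(LMStudioPredictionError):
    """Raised when a prediction times out before completion."""

# Callers can catch narrowly, or fall back to the shared base class:
def classify(exc: Exception) -> str:
    try:
        raise exc
    except LMStudioModelLoadError:
        return "load failure"
    except LMStudioPredictionError:
        return "other prediction failure"

print(classify(LMStudioModelLoadError()))          # load failure
print(classify(LMStudioPredictionTimeoutError()))  # other prediction failure
```

The design point is that existing `except LMStudioPredictionError` handlers keep working unchanged while new code can discriminate between failure modes.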
2 changes: 1 addition & 1 deletion tests/async/test_embedding_async.py
@@ -8,7 +8,7 @@

from lmstudio import AsyncClient, EmbeddingLoadModelConfig, LMStudioModelNotFoundError

-from ..support import (
+from tests.support import (

Collaborator: The use of relative imports is intentional and shouldn't be changed.

    EXPECTED_EMBEDDING,
    EXPECTED_EMBEDDING_CONTEXT_LENGTH,
    EXPECTED_EMBEDDING_ID,
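For reference, the two spellings differ only in how the module path is computed: a relative import resolves against the test module's `__package__`. A small helper mirroring importlib's resolution algorithm (illustrative only, not part of the test suite):

```python
# How a relative import resolves, sketched as a small helper (mirrors the
# algorithm importlib uses internally; not part of the SDK or its tests):
def resolve_relative(package: str, level: int, name: str) -> str:
    """Resolve 'from <level dots><name> import ...' inside `package`."""
    bits = package.split(".")
    if level > len(bits):
        raise ImportError("attempted relative import beyond top-level package")
    # One leading dot means "current package"; each extra dot climbs one level
    base = bits[: len(bits) - (level - 1)]
    return ".".join(base + [name])

# Inside tests/async/test_embedding_async.py, __package__ == "tests.async",
# so the original `from ..support import ...` already targets the same module:
print(resolve_relative("tests.async", 2, "support"))  # tests.support
```

Both forms therefore load `tests.support`; the relative form just avoids hard-coding the top-level package name, which is why the reviewer wants it kept.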
2 changes: 1 addition & 1 deletion tests/async/test_images_async.py
@@ -9,7 +9,7 @@

from lmstudio import AsyncClient, Chat, FileHandle, LMStudioServerError

-from ..support import (
+from tests.support import (
    EXPECTED_VLM_ID,
    IMAGE_FILEPATH,
    SHORT_PREDICTION_CONFIG,
2 changes: 1 addition & 1 deletion tests/async/test_inference_async.py
@@ -28,7 +28,7 @@
    ToolCallRequest,
)

-from ..support import (
+from tests.support import (
    ADDITION_TOOL_SPEC,
    EXPECTED_LLM_ID,
    GBNF_GRAMMAR,
2 changes: 1 addition & 1 deletion tests/async/test_llm_async.py
@@ -13,7 +13,7 @@
    history,
)

-from ..support import EXPECTED_LLM, EXPECTED_LLM_ID, check_sdk_error
+from tests.support import EXPECTED_LLM, EXPECTED_LLM_ID, check_sdk_error


@pytest.mark.asyncio
2 changes: 1 addition & 1 deletion tests/async/test_model_catalog_async.py
@@ -12,7 +12,7 @@
from lmstudio import AsyncClient, LMStudioModelNotFoundError, LMStudioServerError
from lmstudio.json_api import DownloadedModelBase, ModelHandleBase

-from ..support import (
+from tests.support import (
    LLM_LOAD_CONFIG,
    EXPECTED_LLM,
    EXPECTED_LLM_ID,
2 changes: 1 addition & 1 deletion tests/async/test_model_handles_async.py
@@ -12,7 +12,7 @@

from lmstudio import AsyncClient, PredictionResult

-from ..support import (
+from tests.support import (
    EXPECTED_EMBEDDING,
    EXPECTED_EMBEDDING_ID,
    EXPECTED_EMBEDDING_LENGTH,
2 changes: 1 addition & 1 deletion tests/async/test_repository_async.py
@@ -7,7 +7,7 @@

from lmstudio import AsyncClient, LMStudioClientError

-from ..support import SMALL_LLM_SEARCH_TERM
+from tests.support import SMALL_LLM_SEARCH_TERM


# N.B. We can maybe provide a reference list for what should be available
22 changes: 22 additions & 0 deletions tests/conftest.py
@@ -2,5 +2,27 @@

import pytest


# Ensure support module assertions provide failure details
pytest.register_assert_rewrite("tests.support")

# Session-scoped fixture for required model loading
import asyncio
import sys


@pytest.fixture(scope="session", autouse=True)
def load_required_models():
    """Load required models at the start of the test session."""
    # Only run if LM Studio is accessible
    try:
        from tests.load_models import reload_models

        asyncio.run(reload_models())
    except Exception as e:
        print(f"[Fixture] Skipping model loading: {e}", file=sys.stderr)

Collaborator: Some environments consider printing anything to stderr to be a test failure in its own right, so ideally we wouldn't even try to load the models when "not lmstudio" is part of the test marker filtering (or all the other model-using test cases are otherwise filtered out).

This suggests that rather than autouse=True, we want something along the lines of the "marker to fixture usage" approach suggested in this pytest issue comment:

def pytest_itemcollected(item):
    if item.get_closest_marker("lmstudio") is not None:
        item.add_marker(pytest.mark.usefixtures("load_required_models"))


    yield
    # Optionally unload models at the end of the session
    try:
        from tests.unload_models import unload_models

        asyncio.run(unload_models())
    except Exception as e:
        print(f"[Fixture] Skipping model unloading: {e}", file=sys.stderr)
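A runnable sketch of the selection logic behind that suggestion, with the pytest types stubbed out so it executes standalone. FakeItem is a hypothetical stand-in; in a real conftest.py the item would be a pytest.Item and the attached marker would be pytest.mark.usefixtures.

```python
# Standalone sketch: only items carrying the "lmstudio" marker should pull in
# the model-loading fixture. Pytest types are stubbed (FakeItem is illustrative
# only) so the control flow can run without a live pytest session.
from dataclasses import dataclass, field


@dataclass
class FakeItem:
    """Minimal stand-in for pytest.Item: tracks marker names and fixtures."""
    markers: set
    fixtures: list = field(default_factory=list)

    def get_closest_marker(self, name):
        return name if name in self.markers else None

    def add_marker(self, marker):
        self.fixtures.append(marker)


def pytest_itemcollected(item):
    # Mirror of the suggested hook: attach the fixture only to lmstudio tests
    if item.get_closest_marker("lmstudio") is not None:
        item.add_marker("usefixtures:load_required_models")


model_test = FakeItem({"lmstudio"})
unit_test = FakeItem({"wip"})
pytest_itemcollected(model_test)
pytest_itemcollected(unit_test)
print(model_test.fixtures)  # ['usefixtures:load_required_models']
print(unit_test.fixtures)   # []
```

With this shape the fixture drops autouse=True, so a run filtered with `-m "not lmstudio"` never touches LM Studio (and never prints to stderr) at all.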
2 changes: 1 addition & 1 deletion tests/load_models.py
@@ -6,7 +6,7 @@

import lmstudio as lms

-from .support import (
+from tests.support import (
    EXPECTED_EMBEDDING_ID,
    EXPECTED_LLM_ID,
    EXPECTED_VLM_ID,
2 changes: 1 addition & 1 deletion tests/sync/test_embedding_sync.py
@@ -15,7 +15,7 @@

from lmstudio import Client, EmbeddingLoadModelConfig, LMStudioModelNotFoundError

-from ..support import (
+from tests.support import (
    EXPECTED_EMBEDDING,
    EXPECTED_EMBEDDING_CONTEXT_LENGTH,
    EXPECTED_EMBEDDING_ID,
2 changes: 1 addition & 1 deletion tests/sync/test_images_sync.py
@@ -16,7 +16,7 @@

from lmstudio import Client, Chat, FileHandle, LMStudioServerError

-from ..support import (
+from tests.support import (
    EXPECTED_VLM_ID,
    IMAGE_FILEPATH,
    SHORT_PREDICTION_CONFIG,
2 changes: 1 addition & 1 deletion tests/sync/test_inference_sync.py
@@ -35,7 +35,7 @@
    ToolCallRequest,
)

-from ..support import (
+from tests.support import (
    ADDITION_TOOL_SPEC,
    EXPECTED_LLM_ID,
    GBNF_GRAMMAR,
2 changes: 1 addition & 1 deletion tests/sync/test_llm_sync.py
@@ -20,7 +20,7 @@
    history,
)

-from ..support import EXPECTED_LLM, EXPECTED_LLM_ID, check_sdk_error
+from tests.support import EXPECTED_LLM, EXPECTED_LLM_ID, check_sdk_error


@pytest.mark.lmstudio
2 changes: 1 addition & 1 deletion tests/sync/test_model_catalog_sync.py
@@ -19,7 +19,7 @@
from lmstudio import Client, LMStudioModelNotFoundError, LMStudioServerError
from lmstudio.json_api import DownloadedModelBase, ModelHandleBase

-from ..support import (
+from tests.support import (
    LLM_LOAD_CONFIG,
    EXPECTED_LLM,
    EXPECTED_LLM_ID,
2 changes: 1 addition & 1 deletion tests/sync/test_model_handles_sync.py
@@ -19,7 +19,7 @@

from lmstudio import Client, PredictionResult

-from ..support import (
+from tests.support import (
    EXPECTED_EMBEDDING,
    EXPECTED_EMBEDDING_ID,
    EXPECTED_EMBEDDING_LENGTH,
2 changes: 1 addition & 1 deletion tests/sync/test_repository_sync.py
@@ -14,7 +14,7 @@

from lmstudio import Client, LMStudioClientError

-from ..support import SMALL_LLM_SEARCH_TERM
+from tests.support import SMALL_LLM_SEARCH_TERM


# N.B. We can maybe provide a reference list for what should be available
2 changes: 1 addition & 1 deletion tests/test_convenience_api.py
@@ -7,7 +7,7 @@

import pytest

-from .support import (
+from tests.support import (
    EXPECTED_EMBEDDING_ID,
    EXPECTED_LLM_ID,
    EXPECTED_VLM_ID,
2 changes: 1 addition & 1 deletion tests/test_history.py
@@ -35,7 +35,7 @@
    ToolCallResultDataDict,
)

-from .support import IMAGE_FILEPATH, check_sdk_error
+from tests.support import IMAGE_FILEPATH, check_sdk_error

INPUT_ENTRIES: list[DictObject] = [
    # Entries with multi-word keys mix snake_case and camelCase
2 changes: 1 addition & 1 deletion tests/test_inference.py
@@ -20,7 +20,7 @@
from lmstudio.json_api import ChatResponseEndpoint
from lmstudio._sdk_models import LlmToolParameters

-from .support import (
+from tests.support import (
    ADDITION_TOOL_SPEC,
    EXPECTED_LLM_ID,
    MAX_PREDICTED_TOKENS,
2 changes: 1 addition & 1 deletion tests/test_logging.py
@@ -8,7 +8,7 @@

from lmstudio import AsyncClient

-from .support import InvalidEndpoint
+from tests.support import InvalidEndpoint


@pytest.mark.asyncio
2 changes: 1 addition & 1 deletion tests/test_schemas.py
@@ -19,7 +19,7 @@
    ModelSpecifierQueryDict,
)

-from .support import EXPECTED_LLM_ID
+from tests.support import EXPECTED_LLM_ID


def test_lists_of_lists_rejected() -> None:
4 changes: 2 additions & 2 deletions tests/test_session_errors.py
@@ -22,7 +22,7 @@
    SyncSessionSystem,
)

-from .support import (
+from tests.support import (
    EXPECT_TB_TRUNCATION,
    InvalidEndpoint,
    nonresponsive_api_host,
@@ -31,7 +31,7 @@
    check_unfiltered_error,
)

-from .support.lmstudio import ErrFunc
+from tests.support.lmstudio import ErrFunc


async def check_call_errors_async(session: _AsyncSession) -> None:
2 changes: 1 addition & 1 deletion tests/test_sessions.py
@@ -24,7 +24,7 @@
from lmstudio._ws_impl import AsyncTaskManager
from lmstudio._ws_thread import AsyncWebsocketThread

-from .support import LOCAL_API_HOST
+from tests.support import LOCAL_API_HOST


async def check_connected_async_session(session: _AsyncSession) -> None:
2 changes: 1 addition & 1 deletion tests/test_timeouts.py
@@ -16,7 +16,7 @@
)
from lmstudio.sync_api import _DEFAULT_TIMEOUT

-from .support import EXPECTED_LLM_ID
+from tests.support import EXPECTED_LLM_ID

# Sync only, as async API uses standard async timeout constructs like anyio.move_on_after