Commit 7ca2dcc

chore(typing): replace pyright with ty (#1978)
Parent: 1f62106 · Commit: 7ca2dcc

File tree: 16 files changed (+155, −77 lines)

.github/workflows/ruff.yml
Lines changed: 4 additions & 9 deletions

```diff
@@ -7,8 +7,6 @@ on:
 
 env:
   WORKING_DIRECTORY: "."
-  RUFF_OUTPUT_FILENAME: "ruff.log"
-  CUSTOM_FLAGS: ""
   CUSTOM_PACKAGES: "instructor examples tests"
 
 jobs:
@@ -25,10 +23,7 @@ jobs:
         run: uv python install 3.9
       - name: Install the project
         run: uv sync --all-extras
-      - name: Run Continuous Integration Action
-        uses: astral-sh/ruff-action@v3
-      - name: Upload Artifacts
-        uses: actions/upload-artifact@v4
-        with:
-          name: ruff-log
-          path: ${{ env.WORKING_DIRECTORY }}/${{ env.RUFF_OUTPUT_FILENAME }}
+      - name: Ruff lint
+        run: uv run ruff check ${{ env.CUSTOM_PACKAGES }}
+      - name: Ruff format
+        run: uv run ruff format --check ${{ env.CUSTOM_PACKAGES }}
```
Lines changed: 1 addition & 1 deletion

```diff
@@ -1,4 +1,4 @@
-name: Type Check
+name: ty
 
 on:
   pull_request:
```

.pre-commit-config.yaml
Lines changed: 8 additions & 0 deletions

```diff
@@ -33,3 +33,11 @@ repos:
         language: system
         files: ^pyproject\.toml$
         pass_filenames: false
+
+      - id: ty-check
+        name: Run Type Check (ty)
+        entry: uv
+        args: [run, ty, check]
+        language: system
+        files: ^instructor/
+        pass_filenames: false
```
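For context on the hook definition above: pre-commit builds the command it runs by concatenating `entry` and `args`, and `pass_filenames: false` keeps staged file paths off that command line. A trivial sketch of that assembly (illustrative only, not pre-commit's actual internals):

```python
# pre-commit forms the hook command by joining `entry` with `args`.
entry = "uv"
args = ["run", "ty", "check"]
cmd = [entry, *args]

# pass_filenames: false means no staged file paths are appended, so ty
# checks the project as configured rather than a per-commit file subset.
print(" ".join(cmd))  # uv run ty check
```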

CONTRIBUTING.md
Lines changed: 2 additions & 2 deletions

```diff
@@ -251,7 +251,7 @@ We encourage contributions to our evaluation tests:
 We use automated tools to maintain consistent code style:
 
 - **Ruff**: For linting and formatting
-- **PyRight**: For type checking
+- **ty**: For type checking
 - **Black**: For code formatting (enforced by Ruff)
 
 General guidelines:
@@ -396,4 +396,4 @@ For more details, see our Cursor rules in `.cursor/rules/`.
 
 ## License
 
-By contributing to Instructor, you agree that your contributions will be licensed under the project's MIT License.
+By contributing to Instructor, you agree that your contributions will be licensed under the project's MIT License.
```

docs/blog/posts/migrating-to-uv.md
Lines changed: 1 addition & 1 deletion

```diff
@@ -30,7 +30,7 @@ In general I'd say that we saw a ~3x speedup with approximately 67% reduction in
 | Job              | Time (Poetry)                                                                                 | Time (UV)                                                                                              |
 | ---------------- | --------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------- |
 | Ruff Formatting  | [1m16s](https://github.com/instructor-ai/instructor/actions/runs/12386936314)                 | [28s](https://github.com/instructor-ai/instructor/actions/runs/12501982235) (-63%)                    |
-| Pyright          | [3m3s](https://github.com/instructor-ai/instructor/actions/runs/12488572568)                  | [39s](https://github.com/instructor-ai/instructor/actions/runs/12501974285) (-79%)                    |
+| Type checking    | [3m3s](https://github.com/instructor-ai/instructor/actions/runs/12488572568)                  | [39s](https://github.com/instructor-ai/instructor/actions/runs/12501974285) (-79%)                    |
 | Test Python 3.9  | [1m21s](https://github.com/instructor-ai/instructor/actions/runs/12251767751/job/34177033359) | [32s](https://github.com/instructor-ai/instructor/actions/runs/12501974279/job/34880278051) (-61%)    |
 | Test Python 3.10 | [1m32s](https://github.com/instructor-ai/instructor/actions/runs/12251767751/job/34177033359) | [33s](https://github.com/instructor-ai/instructor/actions/runs/12501974279/job/34880278299) (-64%)    |
 | Test Python 3.11 | [3m19](https://github.com/instructor-ai/instructor/actions/runs/12251767751/job/34177034094)  | [2m48s](https://github.com/instructor-ai/instructor/actions/runs/12501974279/job/34880278480) (-16%)  |
```

docs/contributing.md
Lines changed: 1 addition & 1 deletion

````diff
@@ -295,7 +295,7 @@ For detailed documentation on each script, see the `scripts/README.md` file in t
 We use the following tools to maintain code quality:
 
 - **Ruff**: For linting and formatting
-- **PyRight**: For type checking
+- **ty**: For type checking
 - **Pre-commit**: For automatic checks before committing
 
 ```bash
````

instructor/auto_client.py
Lines changed: 56 additions & 19 deletions

```diff
@@ -158,32 +158,69 @@ def from_provider(
     if provider == "openai":
         try:
             import openai
+            import httpx
             from instructor import from_openai  # type: ignore[attr-defined]
+            from openai import DEFAULT_MAX_RETRIES, NotGiven, Timeout, not_given
+            from collections.abc import Mapping
+            from typing import cast
 
             # Extract base_url and other OpenAI client parameters from kwargs
             base_url = kwargs.pop("base_url", None)
-            openai_client_kwargs = {}
-            for key in (
-                "organization",
-                "timeout",
-                "max_retries",
-                "default_headers",
-                "http_client",
-                "app_info",
-            ):
-                if key in kwargs:
-                    openai_client_kwargs[key] = kwargs.pop(key)
+            organization = cast(str | None, kwargs.pop("organization", None))
 
-            # Build client kwargs, including base_url if provided
-            client_kwargs = {"api_key": api_key, **openai_client_kwargs}
-            if base_url is not None:
-                client_kwargs["base_url"] = base_url
+            timeout_raw = kwargs.pop("timeout", not_given)
+            timeout: float | Timeout | None | NotGiven
+            timeout = (
+                not_given
+                if timeout_raw is not_given
+                else cast(float | Timeout | None, timeout_raw)
+            )
 
-            client = (
-                openai.AsyncOpenAI(**client_kwargs)
-                if async_client
-                else openai.OpenAI(**client_kwargs)
+            max_retries_raw = kwargs.pop("max_retries", None)
+            max_retries = (
+                DEFAULT_MAX_RETRIES
+                if max_retries_raw is None
+                else int(cast(int, max_retries_raw))
+            )
+
+            default_headers = cast(
+                Mapping[str, str] | None, kwargs.pop("default_headers", None)
+            )
+            default_query = cast(
+                Mapping[str, object] | None, kwargs.pop("default_query", None)
             )
+            http_client_raw = kwargs.pop("http_client", None)
+            strict_response_validation = bool(
+                kwargs.pop("_strict_response_validation", False)
+            )
+
+            if async_client:
+                http_client = cast(httpx.AsyncClient | None, http_client_raw)
+                client = openai.AsyncOpenAI(
+                    api_key=api_key,
+                    base_url=base_url,
+                    organization=organization,
+                    timeout=timeout,
+                    max_retries=max_retries,
+                    default_headers=default_headers,
+                    default_query=default_query,
+                    http_client=http_client,
+                    _strict_response_validation=strict_response_validation,
+                )
+            else:
+                http_client = cast(httpx.Client | None, http_client_raw)
+                client = openai.OpenAI(
+                    api_key=api_key,
+                    base_url=base_url,
+                    organization=organization,
+                    timeout=timeout,
+                    max_retries=max_retries,
+                    default_headers=default_headers,
+                    default_query=default_query,
+                    http_client=http_client,
+                    _strict_response_validation=strict_response_validation,
+                )
+
             result = from_openai(
                 client,
                 model=model_name,
```
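The rewrite above replaces a dynamically built options dict with parameters that are individually popped and narrowed via `cast`, which is the shape a stricter checker like ty can verify: each option gets a concrete type instead of flowing through as `Any`. A minimal sketch of that pop-and-narrow idiom (`pop_client_options` and `DEFAULT_MAX_RETRIES` here are illustrative stand-ins, not the actual instructor or openai API):

```python
from typing import Any, cast

DEFAULT_MAX_RETRIES = 2  # stand-in for openai.DEFAULT_MAX_RETRIES


def pop_client_options(kwargs: dict[str, Any]) -> dict[str, Any]:
    """Pop known options out of a loose kwargs dict, narrowing each one.

    Popping leaves only unrecognized keys behind, and the casts give the
    type checker a concrete type per option instead of Any everywhere.
    """
    organization = cast("str | None", kwargs.pop("organization", None))
    max_retries_raw = kwargs.pop("max_retries", None)
    max_retries = (
        DEFAULT_MAX_RETRIES
        if max_retries_raw is None
        else int(cast(int, max_retries_raw))
    )
    return {"organization": organization, "max_retries": max_retries}


kwargs = {"organization": "acme", "model": "gpt-4o"}
options = pop_client_options(kwargs)
print(options)  # {'organization': 'acme', 'max_retries': 2}
print(kwargs)   # {'model': 'gpt-4o'} -- only unconsumed keys remain
```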

instructor/processing/schema.py
Lines changed: 4 additions & 2 deletions

```diff
@@ -9,7 +9,7 @@
 
 import functools
 import warnings
-from typing import Any
+from typing import Any, cast
 
 from docstring_parser import parse
 from pydantic import BaseModel
@@ -112,7 +112,9 @@ def generate_gemini_schema(model: type[BaseModel]) -> Any:
     )
 
     try:
-        import google.generativeai.types as genai_types
+        import importlib
+
+        genai_types = cast(Any, importlib.import_module("google.generativeai.types"))
 
         # Use OpenAI schema
         openai_schema = generate_openai_schema(model)
```
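The switch to `importlib.import_module` matters because a static checker resolves a plain `import google.generativeai.types` at analysis time and can error when the optional dependency is absent from the checking environment; routing through importlib defers resolution to runtime, and the `cast(Any, ...)` tells the checker not to reason about the module's contents. A small sketch of the idiom (`load_optional` is a hypothetical helper, not part of instructor; `json` stands in for an optional package):

```python
import importlib
from typing import Any, cast


def load_optional(name: str) -> Any:
    """Import a module by name at runtime; return None if it is not installed.

    Because the import goes through importlib, a static type checker does
    not try to resolve the (possibly absent) package at analysis time.
    """
    try:
        return cast(Any, importlib.import_module(name))
    except ImportError:
        return None


json_mod = load_optional("json")
print(json_mod.loads("[1, 2]"))  # [1, 2]
print(load_optional("definitely_not_an_installed_package"))  # None
```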

instructor/providers/gemini/utils.py
Lines changed: 43 additions & 6 deletions

```diff
@@ -236,6 +236,42 @@ def transform_schema_node(node: Any) -> Any:
     return FunctionSchema(**schema).model_dump(exclude_none=True, exclude_unset=True)
 
 
+if TYPE_CHECKING:
+    from google.genai import types as genai_types
+
+
+def map_to_genai_schema(obj: dict[str, Any]) -> genai_types.Schema:
+    from google.genai import types
+
+    schema = map_to_gemini_function_schema(obj)
+
+    def normalize(node: Any) -> Any:
+        if isinstance(node, list):
+            return [normalize(item) for item in node]
+
+        if not isinstance(node, dict):
+            return node
+
+        key_map = {
+            "anyOf": "any_of",
+            "$ref": "ref",
+            "$defs": "defs",
+            "maxItems": "max_items",
+            "minItems": "min_items",
+            "maxLength": "max_length",
+            "minLength": "min_length",
+            "maxProperties": "max_properties",
+            "minProperties": "min_properties",
+        }
+
+        normalized: dict[str, Any] = {}
+        for key, value in node.items():
+            normalized[key_map.get(key, key)] = normalize(value)
+        return normalized
+
+    return types.Schema.model_validate(normalize(schema))
+
+
 def update_genai_kwargs(
     kwargs: dict[str, Any], base_config: dict[str, Any]
 ) -> dict[str, Any]:
@@ -583,7 +619,7 @@ def reask_vertexai_tools(
     Kwargs modifications:
     - Adds: "contents" (tool response messages indicating validation errors)
     """
-    from instructor.client_vertexai import vertexai_function_response_parser
+    from ..vertexai.client import vertexai_function_response_parser
 
     kwargs = kwargs.copy()
     reask_msgs = [
@@ -605,7 +641,7 @@ def reask_vertexai_json(
     Kwargs modifications:
     - Adds: "contents" (user message requesting JSON correction)
     """
-    from instructor.client_vertexai import vertexai_message_parser
+    from ..vertexai.client import vertexai_message_parser
 
     kwargs = kwargs.copy()
 
@@ -931,7 +967,7 @@ def handle_genai_tools(
     if "thinking_config" not in new_kwargs and user_thinking_config is not None:
         new_kwargs["thinking_config"] = user_thinking_config
 
-    schema = map_to_gemini_function_schema(_get_model_schema(response_model))
+    schema = map_to_genai_schema(_get_model_schema(response_model))
     function_definition = types.FunctionDeclaration(
         name=_get_model_name(response_model),
         description=getattr(response_model, "__doc__", None),
@@ -951,7 +987,8 @@ def handle_genai_tools(
         "tools": [types.Tool(function_declarations=[function_definition])],
         "tool_config": types.ToolConfig(
             function_calling_config=types.FunctionCallingConfig(
-                mode="ANY", allowed_function_names=[_get_model_name(response_model)]
+                mode=types.FunctionCallingConfigMode.ANY,
+                allowed_function_names=[_get_model_name(response_model)],
             ),
         ),
     }
@@ -989,7 +1026,7 @@ def handle_vertexai_parallel_tools(
     """
    from typing import get_args
 
-    from instructor.client_vertexai import vertexai_process_response
+    from ..vertexai.client import vertexai_process_response
     from instructor.dsl.parallel import VertexAIParallelModel
 
     if new_kwargs.get("stream", False):
@@ -1010,7 +1047,7 @@ def handle_vertexai_parallel_tools(
 def handle_vertexai_tools(
     response_model: type[Any] | None, new_kwargs: dict[str, Any]
 ) -> tuple[type[Any] | None, dict[str, Any]]:
-    from instructor.client_vertexai import vertexai_process_response
+    from ..vertexai.client import vertexai_process_response
 
     """
     Handle Vertex AI tools mode.
```
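The new `map_to_genai_schema` helper in the diff above exists because `google.genai`'s pydantic `Schema` model uses snake_case field names, while JSON Schema uses camelCase and `$`-prefixed keys (`anyOf`, `maxItems`, `$ref`, ...). The recursive renaming can be sketched standalone, with the same key map as the diff; the final `types.Schema.model_validate` step is omitted here so the example has no `google.genai` dependency:

```python
from typing import Any

# JSON-Schema key -> google.genai Schema field name (same map as the diff).
KEY_MAP = {
    "anyOf": "any_of",
    "$ref": "ref",
    "$defs": "defs",
    "maxItems": "max_items",
    "minItems": "min_items",
    "maxLength": "max_length",
    "minLength": "min_length",
    "maxProperties": "max_properties",
    "minProperties": "min_properties",
}


def normalize(node: Any) -> Any:
    """Recursively rename camelCase/$-prefixed keys to snake_case."""
    if isinstance(node, list):
        return [normalize(item) for item in node]
    if not isinstance(node, dict):
        return node
    return {KEY_MAP.get(key, key): normalize(value) for key, value in node.items()}


schema = {
    "type": "ARRAY",
    "minItems": 1,
    "items": {"anyOf": [{"type": "STRING", "maxLength": 10}]},
}
print(normalize(schema))
# {'type': 'ARRAY', 'min_items': 1, 'items': {'any_of': [{'type': 'STRING', 'max_length': 10}]}}
```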

instructor/providers/mistral/client.py
Lines changed: 2 additions & 4 deletions

```diff
@@ -56,7 +56,7 @@ def from_mistral(
     if use_async:
 
         async def async_wrapper(
-            *args: Any, **kwargs: dict[str, Any]
+            *args: Any, **kwargs: Any
        ):  # Handler for async streaming
             if kwargs.pop("stream", False):
                 return await client.chat.stream_async(*args, **kwargs)
@@ -70,9 +70,7 @@ async def async_wrapper(
                 **kwargs,
             )
 
-    def sync_wrapper(
-        *args: Any, **kwargs: dict[str, Any]
-    ):  # Handler for sync streaming
+    def sync_wrapper(*args: Any, **kwargs: Any):  # Handler for sync streaming
         if kwargs.pop("stream", False):
             return client.chat.stream(*args, **kwargs)
         return client.chat.complete(*args, **kwargs)
```
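The annotation fix here is subtle: in `def f(**kwargs: T)`, `T` types each keyword *value*, and `kwargs` itself is already inferred as `dict[str, T]`. So the old `**kwargs: dict[str, Any]` claimed every keyword value was itself a dict, which a checker like ty rejects for calls such as `stream=True`. A minimal illustration of the corrected signature (the function name and return values are made up; only the annotation behavior mirrors the diff):

```python
from typing import Any


def dispatch(*args: Any, **kwargs: Any) -> str:
    # With `**kwargs: Any`, kwargs is inferred as dict[str, Any].
    # Under `**kwargs: dict[str, Any]`, each VALUE would have to be a
    # dict, so passing stream=True (a bool) would be a type error.
    if kwargs.pop("stream", False):
        return "stream"
    return "complete"


print(dispatch(stream=True))            # stream
print(dispatch(model="mistral-small"))  # complete
```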
