Commit e984274

Merge branch 'main' of https://github.com/openai/openai-agents-python into feat/draw_graph

2 parents: 29e9983 + 951193b

20 files changed: +388 −48 lines
Lines changed: 26 additions & 0 deletions

@@ -0,0 +1,26 @@
---
name: Custom model providers
about: Questions or bugs about using non-OpenAI models
title: ''
labels: bug
assignees: ''
---

### Please read this first

- **Have you read the custom model provider docs, including the 'Common issues' section?** [Model provider docs](https://openai.github.io/openai-agents-python/models/#using-other-llm-providers)
- **Have you searched for related issues?** Others may have faced similar issues.

### Describe the question

A clear and concise description of what the question or bug is.

### Debug information

- Agents SDK version: (e.g. `v0.0.3`)
- Python version (e.g. Python 3.10)

### Repro steps

Ideally provide a minimal Python script that can be run to reproduce the issue.

### Expected behavior

A clear and concise description of what you expected to happen.

docs/models.md

Lines changed: 35 additions & 15 deletions
@@ -53,21 +53,41 @@ async def main():
 
 ## Using other LLM providers
 
-Many providers also support the OpenAI API format, which means you can pass a `base_url` to the existing OpenAI model implementations and use them easily. `ModelSettings` is used to configure tuning parameters (e.g., temperature, top_p) for the model you select.
+You can use other LLM providers in 3 ways (examples [here](https://github.com/openai/openai-agents-python/tree/main/examples/model_providers/)):
 
-```python
-external_client = AsyncOpenAI(
-    api_key="EXTERNAL_API_KEY",
-    base_url="https://api.external.com/v1/",
-)
+1. [`set_default_openai_client`][agents.set_default_openai_client] is useful in cases where you want to globally use an instance of `AsyncOpenAI` as the LLM client. This is for cases where the LLM provider has an OpenAI compatible API endpoint, and you can set the `base_url` and `api_key`. See a configurable example in [examples/model_providers/custom_example_global.py](https://github.com/openai/openai-agents-python/tree/main/examples/model_providers/custom_example_global.py).
+2. [`ModelProvider`][agents.models.interface.ModelProvider] is at the `Runner.run` level. This lets you say "use a custom model provider for all agents in this run". See a configurable example in [examples/model_providers/custom_example_provider.py](https://github.com/openai/openai-agents-python/tree/main/examples/model_providers/custom_example_provider.py).
+3. [`Agent.model`][agents.agent.Agent.model] lets you specify the model on a specific Agent instance. This enables you to mix and match different providers for different agents. See a configurable example in [examples/model_providers/custom_example_agent.py](https://github.com/openai/openai-agents-python/tree/main/examples/model_providers/custom_example_agent.py).
+
+In cases where you do not have an API key from `platform.openai.com`, we recommend disabling tracing via `set_tracing_disabled()`, or setting up a [different tracing processor](tracing.md).
+
+!!! note
+
+    In these examples, we use the Chat Completions API/model, because most LLM providers don't yet support the Responses API. If your LLM provider does support it, we recommend using Responses.
+
+## Common issues with using other LLM providers
+
+### Tracing client error 401
+
+If you get errors related to tracing, this is because traces are uploaded to OpenAI servers, and you don't have an OpenAI API key. You have three options to resolve this:
+
+1. Disable tracing entirely: [`set_tracing_disabled(True)`][agents.set_tracing_disabled].
+2. Set an OpenAI key for tracing: [`set_tracing_export_api_key(...)`][agents.set_tracing_export_api_key]. This API key will only be used for uploading traces, and must be from [platform.openai.com](https://platform.openai.com/).
+3. Use a non-OpenAI trace processor. See the [tracing docs](tracing.md#custom-tracing-processors).
+
+### Responses API support
+
+The SDK uses the Responses API by default, but most other LLM providers don't yet support it. You may see 404s or similar issues as a result. To resolve, you have two options:
+
+1. Call [`set_default_openai_api("chat_completions")`][agents.set_default_openai_api]. This works if you are setting `OPENAI_API_KEY` and `OPENAI_BASE_URL` via environment vars.
+2. Use [`OpenAIChatCompletionsModel`][agents.models.openai_chatcompletions.OpenAIChatCompletionsModel]. There are examples [here](https://github.com/openai/openai-agents-python/tree/main/examples/model_providers/).
+
+### Structured outputs support
+
+Some model providers don't have support for [structured outputs](https://platform.openai.com/docs/guides/structured-outputs). This sometimes results in an error that looks something like this:
 
-spanish_agent = Agent(
-    name="Spanish agent",
-    instructions="You only speak Spanish.",
-    model=OpenAIChatCompletionsModel(
-        model="EXTERNAL_MODEL_NAME",
-        openai_client=external_client,
-    ),
-    model_settings=ModelSettings(temperature=0.5),
-)
 ```
+BadRequestError: Error code: 400 - {'error': {'message': "'response_format.type' : value is not one of the allowed values ['text','json_object']", 'type': 'invalid_request_error'}}
+```
+
+This is a shortcoming of some model providers - they support JSON outputs, but don't allow you to specify the `json_schema` to use for the output. We are working on a fix for this, but we suggest relying on providers that do have support for JSON schema output, because otherwise your app will often break because of malformed JSON.
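Until providers add `json_schema` support, one stopgap (a hedged sketch, not part of the SDK; `REQUIRED_KEYS` and `parse_weather_reply` are hypothetical names) is to request plain JSON output and validate the parsed reply yourself, so a malformed response fails loudly rather than flowing downstream:

```python
import json

# Hypothetical fallback for providers that support JSON output but not a
# json_schema: parse the reply, then check the keys the app relies on.
REQUIRED_KEYS = {"city", "temperature_c"}

def parse_weather_reply(raw: str) -> dict:
    data = json.loads(raw)  # raises json.JSONDecodeError on malformed JSON
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"model omitted keys: {sorted(missing)}")
    return data

print(parse_weather_reply('{"city": "Tokyo", "temperature_c": 21}'))
```

This only checks key presence, not value types, so it is strictly weaker than real JSON-schema enforcement.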

examples/model_providers/README.md

Lines changed: 19 additions & 0 deletions
@@ -0,0 +1,19 @@
# Custom LLM providers

The examples in this directory demonstrate how you might use a non-OpenAI LLM provider. To run them, first set a base URL, API key and model.

```bash
export EXAMPLE_BASE_URL="..."
export EXAMPLE_API_KEY="..."
export EXAMPLE_MODEL_NAME="..."
```

Then run the examples, e.g.:

```
python examples/model_providers/custom_example_provider.py

Loops within themselves,
Function calls its own being,
Depth without ending.
```
Lines changed: 55 additions & 0 deletions
@@ -0,0 +1,55 @@
import asyncio
import os

from openai import AsyncOpenAI

from agents import Agent, OpenAIChatCompletionsModel, Runner, function_tool, set_tracing_disabled

BASE_URL = os.getenv("EXAMPLE_BASE_URL") or ""
API_KEY = os.getenv("EXAMPLE_API_KEY") or ""
MODEL_NAME = os.getenv("EXAMPLE_MODEL_NAME") or ""

if not BASE_URL or not API_KEY or not MODEL_NAME:
    raise ValueError(
        "Please set EXAMPLE_BASE_URL, EXAMPLE_API_KEY, EXAMPLE_MODEL_NAME via env var or code."
    )

"""This example uses a custom provider for a specific agent. Steps:
1. Create a custom OpenAI client.
2. Create a `Model` that uses the custom client.
3. Set the `model` on the Agent.

Note that in this example, we disable tracing under the assumption that you don't have an API key
from platform.openai.com. If you do have one, you can either set the `OPENAI_API_KEY` env var
or call set_tracing_export_api_key() to set a tracing specific key.
"""
client = AsyncOpenAI(base_url=BASE_URL, api_key=API_KEY)
set_tracing_disabled(disabled=True)

# An alternate approach that would also work:
# PROVIDER = OpenAIProvider(openai_client=client)
# agent = Agent(..., model="some-custom-model")
# Runner.run(agent, ..., run_config=RunConfig(model_provider=PROVIDER))


@function_tool
def get_weather(city: str):
    print(f"[debug] getting weather for {city}")
    return f"The weather in {city} is sunny."


async def main():
    # This agent will use the custom LLM provider
    agent = Agent(
        name="Assistant",
        instructions="You only respond in haikus.",
        model=OpenAIChatCompletionsModel(model=MODEL_NAME, openai_client=client),
        tools=[get_weather],
    )

    result = await Runner.run(agent, "What's the weather in Tokyo?")
    print(result.final_output)


if __name__ == "__main__":
    asyncio.run(main())
Lines changed: 63 additions & 0 deletions
@@ -0,0 +1,63 @@
import asyncio
import os

from openai import AsyncOpenAI

from agents import (
    Agent,
    Runner,
    function_tool,
    set_default_openai_api,
    set_default_openai_client,
    set_tracing_disabled,
)

BASE_URL = os.getenv("EXAMPLE_BASE_URL") or ""
API_KEY = os.getenv("EXAMPLE_API_KEY") or ""
MODEL_NAME = os.getenv("EXAMPLE_MODEL_NAME") or ""

if not BASE_URL or not API_KEY or not MODEL_NAME:
    raise ValueError(
        "Please set EXAMPLE_BASE_URL, EXAMPLE_API_KEY, EXAMPLE_MODEL_NAME via env var or code."
    )


"""This example uses a custom provider for all requests by default. We do three things:
1. Create a custom client.
2. Set it as the default OpenAI client, and don't use it for tracing.
3. Set the default API as Chat Completions, as most LLM providers don't yet support Responses API.

Note that in this example, we disable tracing under the assumption that you don't have an API key
from platform.openai.com. If you do have one, you can either set the `OPENAI_API_KEY` env var
or call set_tracing_export_api_key() to set a tracing specific key.
"""

client = AsyncOpenAI(
    base_url=BASE_URL,
    api_key=API_KEY,
)
set_default_openai_client(client=client, use_for_tracing=False)
set_default_openai_api("chat_completions")
set_tracing_disabled(disabled=True)


@function_tool
def get_weather(city: str):
    print(f"[debug] getting weather for {city}")
    return f"The weather in {city} is sunny."


async def main():
    agent = Agent(
        name="Assistant",
        instructions="You only respond in haikus.",
        model=MODEL_NAME,
        tools=[get_weather],
    )

    result = await Runner.run(agent, "What's the weather in Tokyo?")
    print(result.final_output)


if __name__ == "__main__":
    asyncio.run(main())
Lines changed: 77 additions & 0 deletions
@@ -0,0 +1,77 @@
from __future__ import annotations

import asyncio
import os

from openai import AsyncOpenAI

from agents import (
    Agent,
    Model,
    ModelProvider,
    OpenAIChatCompletionsModel,
    RunConfig,
    Runner,
    function_tool,
    set_tracing_disabled,
)

BASE_URL = os.getenv("EXAMPLE_BASE_URL") or ""
API_KEY = os.getenv("EXAMPLE_API_KEY") or ""
MODEL_NAME = os.getenv("EXAMPLE_MODEL_NAME") or ""

if not BASE_URL or not API_KEY or not MODEL_NAME:
    raise ValueError(
        "Please set EXAMPLE_BASE_URL, EXAMPLE_API_KEY, EXAMPLE_MODEL_NAME via env var or code."
    )


"""This example uses a custom provider for some calls to Runner.run(), and direct calls to OpenAI for
others. Steps:
1. Create a custom OpenAI client.
2. Create a ModelProvider that uses the custom client.
3. Use the ModelProvider in calls to Runner.run(), only when we want to use the custom LLM provider.

Note that in this example, we disable tracing under the assumption that you don't have an API key
from platform.openai.com. If you do have one, you can either set the `OPENAI_API_KEY` env var
or call set_tracing_export_api_key() to set a tracing specific key.
"""
client = AsyncOpenAI(base_url=BASE_URL, api_key=API_KEY)
set_tracing_disabled(disabled=True)


class CustomModelProvider(ModelProvider):
    def get_model(self, model_name: str | None) -> Model:
        return OpenAIChatCompletionsModel(model=model_name or MODEL_NAME, openai_client=client)


CUSTOM_MODEL_PROVIDER = CustomModelProvider()


@function_tool
def get_weather(city: str):
    print(f"[debug] getting weather for {city}")
    return f"The weather in {city} is sunny."


async def main():
    agent = Agent(name="Assistant", instructions="You only respond in haikus.", tools=[get_weather])

    # This will use the custom model provider
    result = await Runner.run(
        agent,
        "What's the weather in Tokyo?",
        run_config=RunConfig(model_provider=CUSTOM_MODEL_PROVIDER),
    )
    print(result.final_output)

    # If you uncomment this, it will use OpenAI directly, not the custom provider
    # result = await Runner.run(
    #     agent,
    #     "What's the weather in Tokyo?",
    # )
    # print(result.final_output)


if __name__ == "__main__":
    asyncio.run(main())

pyproject.toml

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
 [project]
 name = "openai-agents"
-version = "0.0.3"
+version = "0.0.4"
 description = "OpenAI Agents SDK"
 readme = "README.md"
 requires-python = ">=3.9"

src/agents/__init__.py

Lines changed: 13 additions & 8 deletions
@@ -92,13 +92,19 @@
 from .usage import Usage
 
 
-def set_default_openai_key(key: str) -> None:
-    """Set the default OpenAI API key to use for LLM requests and tracing. This is only necessary if
-    the OPENAI_API_KEY environment variable is not already set.
+def set_default_openai_key(key: str, use_for_tracing: bool = True) -> None:
+    """Set the default OpenAI API key to use for LLM requests (and optionally tracing). This is
+    only necessary if the OPENAI_API_KEY environment variable is not already set.
 
     If provided, this key will be used instead of the OPENAI_API_KEY environment variable.
+
+    Args:
+        key: The OpenAI key to use.
+        use_for_tracing: Whether to also use this key to send traces to OpenAI. Defaults to True.
+            If False, you'll either need to set the OPENAI_API_KEY environment variable or call
+            set_tracing_export_api_key() with the API key you want to use for tracing.
     """
-    _config.set_default_openai_key(key)
+    _config.set_default_openai_key(key, use_for_tracing)
 
 
 def set_default_openai_client(client: AsyncOpenAI, use_for_tracing: bool = True) -> None:

@@ -123,10 +129,9 @@ def set_default_openai_api(api: Literal["chat_completions", "responses"]) -> None:
 
 def enable_verbose_stdout_logging():
     """Enables verbose logging to stdout. This is useful for debugging."""
-    for name in ["openai.agents", "openai.agents.tracing"]:
-        logger = logging.getLogger(name)
-        logger.setLevel(logging.DEBUG)
-        logger.addHandler(logging.StreamHandler(sys.stdout))
+    logger = logging.getLogger("openai.agents")
+    logger.setLevel(logging.DEBUG)
+    logger.addHandler(logging.StreamHandler(sys.stdout))
 
 
 __all__ = [
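The `enable_verbose_stdout_logging` change relies on Python's logger hierarchy: `openai.agents.tracing` is a child of `openai.agents`, so its records propagate up to the parent's handler and the per-name loop is unnecessary. A minimal stdlib sketch of that behavior (writing to a `StringIO` instead of stdout so the effect is easy to inspect):

```python
import io
import logging

# Attach a single handler to the parent logger, mirroring the simplified
# enable_verbose_stdout_logging.
stream = io.StringIO()
logger = logging.getLogger("openai.agents")
logger.setLevel(logging.DEBUG)
logger.addHandler(logging.StreamHandler(stream))

# A child logger such as "openai.agents.tracing" inherits the DEBUG level
# and propagates its records to the parent's handler.
logging.getLogger("openai.agents.tracing").debug("trace event")
logging.getLogger("openai.agents").debug("agent event")

print(stream.getvalue())  # both "trace event" and "agent event" appear
```

This is why one handler on `"openai.agents"` now covers tracing output as well.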

src/agents/_config.py

Lines changed: 6 additions & 3 deletions
Original file line numberDiff line numberDiff line change
@@ -5,15 +5,18 @@
 from .tracing import set_tracing_export_api_key
 
 
-def set_default_openai_key(key: str) -> None:
-    set_tracing_export_api_key(key)
+def set_default_openai_key(key: str, use_for_tracing: bool) -> None:
     _openai_shared.set_default_openai_key(key)
 
+    if use_for_tracing:
+        set_tracing_export_api_key(key)
+
 
 def set_default_openai_client(client: AsyncOpenAI, use_for_tracing: bool) -> None:
+    _openai_shared.set_default_openai_client(client)
+
     if use_for_tracing:
         set_tracing_export_api_key(client.api_key)
-    _openai_shared.set_default_openai_client(client)
 
 
 def set_default_openai_api(api: Literal["chat_completions", "responses"]) -> None:

src/agents/function_schema.py

Lines changed: 4 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -33,6 +33,9 @@ class FuncSchema:
     """The signature of the function."""
     takes_context: bool = False
     """Whether the function takes a RunContextWrapper argument (must be the first argument)."""
+    strict_json_schema: bool = True
+    """Whether the JSON schema is in strict mode. We **strongly** recommend setting this to True,
+    as it increases the likelihood of correct JSON input."""
 
     def to_call_args(self, data: BaseModel) -> tuple[list[Any], dict[str, Any]]:
         """

@@ -337,4 +340,5 @@ def function_schema(
         params_json_schema=json_schema,
         signature=sig,
         takes_context=takes_context,
+        strict_json_schema=strict_json_schema,
     )
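In OpenAI's strict mode, which `strict_json_schema=True` opts into, every property must appear in `required` and `additionalProperties` must be `false`. A rough, hypothetical sketch of deriving such a schema from a function signature (`PY_TO_JSON` and `strict_schema_for` are illustrative names, not the SDK's actual `function_schema` implementation):

```python
import inspect
from typing import get_type_hints

# Minimal mapping from Python annotations to JSON Schema types.
PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean"}

def strict_schema_for(func):
    """Build a strict-mode JSON schema for a function's parameters."""
    hints = get_type_hints(func)
    params = inspect.signature(func).parameters
    props = {name: {"type": PY_TO_JSON[hints[name]]} for name in params}
    return {
        "type": "object",
        "properties": props,
        "required": list(props),        # strict mode: every key is required
        "additionalProperties": False,  # strict mode: no extra keys allowed
    }

def get_weather(city: str, units: str) -> str:
    return f"The weather in {city} is sunny ({units})."

print(strict_schema_for(get_weather))
```

Strict mode lets the API reject malformed tool arguments up front, which is why the docstring above recommends leaving it enabled.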
