
Commit cb0a285

fix conflict

2 parents 9578600 + 6e078bf

33 files changed, +988 -81 lines

docs/ja/mcp.md

Lines changed: 5 additions & 4 deletions

@@ -12,12 +12,13 @@

 ## MCP Servers

-Currently, the MCP spec defines two kinds of servers, based on the transport mechanism they use.
+Currently, the MCP spec defines three kinds of servers, based on the transport mechanism they use.

 1. **stdio** servers run as a subprocess of your application; think of them as running locally.
 2. **HTTP over SSE** servers run remotely; you connect to them via a URL.
+3. **Streamable HTTP** servers run remotely using the Streamable HTTP transport defined in the MCP spec.

-To connect to these servers, use the [`MCPServerStdio`][agents.mcp.server.MCPServerStdio] and [`MCPServerSse`][agents.mcp.server.MCPServerSse] classes.
+To connect to these servers, use the [`MCPServerStdio`][agents.mcp.server.MCPServerStdio], [`MCPServerSse`][agents.mcp.server.MCPServerSse], and [`MCPServerStreamableHttp`][agents.mcp.server.MCPServerStreamableHttp] classes.

 For example, to use the [official MCP filesystem server](https://www.npmjs.com/package/@modelcontextprotocol/server-filesystem):

@@ -46,7 +47,7 @@

 ## Caching

-Every time an agent runs, `list_tools()` is called on the MCP server. This adds latency, especially when the server is remote. To cache the tool list automatically, pass `cache_tools_list=True` to both [`MCPServerStdio`][agents.mcp.server.MCPServerStdio] and [`MCPServerSse`][agents.mcp.server.MCPServerSse]. Only do this if you are certain the tool list will not change.
+Every time an agent runs, `list_tools()` is called on the MCP server. This adds latency, especially when the server is remote. To cache the tool list automatically, pass `cache_tools_list=True` to each of [`MCPServerStdio`][agents.mcp.server.MCPServerStdio], [`MCPServerSse`][agents.mcp.server.MCPServerSse], and [`MCPServerStreamableHttp`][agents.mcp.server.MCPServerStreamableHttp]. Only do this if you are certain the tool list will not change.

 To invalidate the cache, call `invalidate_tools_cache()` on the server.

docs/ja/tools.md

Lines changed: 4 additions & 0 deletions

@@ -17,6 +17,10 @@

 - [`WebSearchTool`][agents.tool.WebSearchTool] lets an agent search the web.
 - [`FileSearchTool`][agents.tool.FileSearchTool] retrieves information from your OpenAI Vector Stores.
 - [`ComputerTool`][agents.tool.ComputerTool] automates computer use tasks.
+- [`CodeInterpreterTool`][agents.tool.CodeInterpreterTool] executes code in a sandboxed environment.
+- [`HostedMCPTool`][agents.tool.HostedMCPTool] exposes a remote MCP server's tools directly to the model.
+- [`ImageGenerationTool`][agents.tool.ImageGenerationTool] generates images from a prompt.
+- [`LocalShellTool`][agents.tool.LocalShellTool] runs shell commands on the local machine.

 ```python
 from agents import Agent, FileSearchTool, Runner, WebSearchTool
 ```

docs/mcp.md

Lines changed: 3 additions & 2 deletions

@@ -12,8 +12,9 @@

 1. **stdio** servers run as a subprocess of your application. You can think of them as running "locally".
 2. **HTTP over SSE** servers run remotely. You connect to them via a URL.
+3. **Streamable HTTP** servers run remotely using the Streamable HTTP transport defined in the MCP spec.

-You can use the [`MCPServerStdio`][agents.mcp.server.MCPServerStdio] and [`MCPServerSse`][agents.mcp.server.MCPServerSse] classes to connect to these servers.
+You can use the [`MCPServerStdio`][agents.mcp.server.MCPServerStdio], [`MCPServerSse`][agents.mcp.server.MCPServerSse], and [`MCPServerStreamableHttp`][agents.mcp.server.MCPServerStreamableHttp] classes to connect to these servers.
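The "stdio" transport in the list above boils down to JSON-RPC messages exchanged over a child process's stdin/stdout, one JSON object per line. As a rough, self-contained sketch of that mechanic (the inline "server" here is a toy stand-in, not a real MCP server and not the SDK's implementation):

```python
import json
import subprocess
import sys

# Toy stdio "server": reads one JSON-RPC request line from stdin and answers
# a canned tools/list result. A real MCP server speaks the full protocol.
SERVER = (
    "import json, sys\n"
    "req = json.loads(sys.stdin.readline())\n"
    "resp = {'jsonrpc': '2.0', 'id': req['id'],"
    " 'result': {'tools': [{'name': 'read_file'}]}}\n"
    "print(json.dumps(resp))\n"
)

# The "application" spawns the server as a subprocess and talks over pipes,
# which is what an stdio-based MCP client does under the hood.
proc = subprocess.Popen(
    [sys.executable, "-c", SERVER],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
out, _ = proc.communicate(json.dumps(request) + "\n")
print(json.loads(out)["result"]["tools"][0]["name"])
```

The SSE and Streamable HTTP variants replace the pipes with HTTP connections, but the request/response shape stays JSON-RPC.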

 For example, this is how you'd use the [official MCP filesystem server](https://www.npmjs.com/package/@modelcontextprotocol/server-filesystem).

@@ -42,7 +43,7 @@

 ## Caching

-Every time an Agent runs, it calls `list_tools()` on the MCP server. This can be a latency hit, especially if the server is a remote server. To automatically cache the list of tools, you can pass `cache_tools_list=True` to both [`MCPServerStdio`][agents.mcp.server.MCPServerStdio] and [`MCPServerSse`][agents.mcp.server.MCPServerSse]. You should only do this if you're certain the tool list will not change.
+Every time an Agent runs, it calls `list_tools()` on the MCP server. This can be a latency hit, especially if the server is a remote server. To automatically cache the list of tools, you can pass `cache_tools_list=True` to [`MCPServerStdio`][agents.mcp.server.MCPServerStdio], [`MCPServerSse`][agents.mcp.server.MCPServerSse], and [`MCPServerStreamableHttp`][agents.mcp.server.MCPServerStreamableHttp]. You should only do this if you're certain the tool list will not change.

 If you want to invalidate the cache, you can call `invalidate_tools_cache()` on the servers.
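The caching behavior described above can be sketched with a toy class. `ToyMCPServer` is hypothetical: it only mirrors the documented `cache_tools_list` / `invalidate_tools_cache` semantics, not the SDK's actual code.

```python
class ToyMCPServer:
    """Toy stand-in for an MCP server wrapper with tool-list caching."""

    def __init__(self, tools, cache_tools_list=False):
        self._tools = tools
        self._cache_tools_list = cache_tools_list
        self._cache = None
        self.fetch_count = 0  # counts simulated round-trips to the server

    def _fetch_tools(self):
        self.fetch_count += 1  # stands in for a network/subprocess round-trip
        return list(self._tools)

    def list_tools(self):
        if self._cache_tools_list:
            if self._cache is None:  # first call populates the cache
                self._cache = self._fetch_tools()
            return self._cache
        return self._fetch_tools()  # uncached: every call pays the round-trip

    def invalidate_tools_cache(self):
        self._cache = None


server = ToyMCPServer(["read_file", "write_file"], cache_tools_list=True)
server.list_tools()
server.list_tools()
print(server.fetch_count)  # 1: the second call was served from the cache
server.invalidate_tools_cache()
server.list_tools()
print(server.fetch_count)  # 2: invalidation forced a fresh fetch
```

This is why the docs warn to enable caching only when the tool list is stable: a server that adds or removes tools after the first fetch would go unnoticed until the cache is invalidated.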

docs/tools.md

Lines changed: 4 additions & 0 deletions

@@ -13,6 +13,10 @@

 - The [`WebSearchTool`][agents.tool.WebSearchTool] lets an agent search the web.
 - The [`FileSearchTool`][agents.tool.FileSearchTool] allows retrieving information from your OpenAI Vector Stores.
 - The [`ComputerTool`][agents.tool.ComputerTool] allows automating computer use tasks.
+- The [`CodeInterpreterTool`][agents.tool.CodeInterpreterTool] lets the LLM execute code in a sandboxed environment.
+- The [`HostedMCPTool`][agents.tool.HostedMCPTool] exposes a remote MCP server's tools to the model.
+- The [`ImageGenerationTool`][agents.tool.ImageGenerationTool] generates images from a prompt.
+- The [`LocalShellTool`][agents.tool.LocalShellTool] runs shell commands on your machine.

 ```python
 from agents import Agent, FileSearchTool, Runner, WebSearchTool
 ```

docs/tracing.md

Lines changed: 1 addition & 0 deletions

@@ -115,3 +115,4 @@

 - [Langfuse](https://langfuse.com/docs/integrations/openaiagentssdk/openai-agents)
 - [Langtrace](https://docs.langtrace.ai/supported-integrations/llm-frameworks/openai-agents-sdk)
 - [Okahu-Monocle](https://github.com/monocle2ai/monocle)
+- [Galileo](https://v2docs.galileo.ai/integrations/openai-agent-integration#openai-agent-integration)

examples/hosted_mcp/__init__.py

Whitespace-only changes.

examples/hosted_mcp/approvals.py

Lines changed: 61 additions & 0 deletions

```python
import argparse
import asyncio

from agents import (
    Agent,
    HostedMCPTool,
    MCPToolApprovalFunctionResult,
    MCPToolApprovalRequest,
    Runner,
)

"""This example demonstrates how to use the hosted MCP support in the OpenAI Responses API, with
approval callbacks."""


def approval_callback(request: MCPToolApprovalRequest) -> MCPToolApprovalFunctionResult:
    answer = input(f"Approve running the tool `{request.data.name}`? (y/n) ")
    result: MCPToolApprovalFunctionResult = {"approve": answer == "y"}
    if not result["approve"]:
        result["reason"] = "User denied"
    return result


async def main(verbose: bool, stream: bool):
    agent = Agent(
        name="Assistant",
        tools=[
            HostedMCPTool(
                tool_config={
                    "type": "mcp",
                    "server_label": "gitmcp",
                    "server_url": "https://gitmcp.io/openai/codex",
                    "require_approval": "always",
                },
                on_approval_request=approval_callback,
            )
        ],
    )

    if stream:
        result = Runner.run_streamed(agent, "Which language is this repo written in?")
        async for event in result.stream_events():
            if event.type == "run_item_stream_event":
                print(f"Got event of type {event.item.__class__.__name__}")
        print(f"Done streaming; final result: {result.final_output}")
    else:
        result = await Runner.run(agent, "Which language is this repo written in?")
        print(result.final_output)

    if verbose:
        for item in result.new_items:
            print(item)


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--verbose", action="store_true", default=False)
    parser.add_argument("--stream", action="store_true", default=False)
    args = parser.parse_args()

    asyncio.run(main(args.verbose, args.stream))
```
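The approval callback above returns a plain dict: `{"approve": bool}` with an optional `"reason"` when denying. That decision logic can be exercised without the SDK or any network call; the `Stub*` classes below are hypothetical stand-ins for `MCPToolApprovalRequest`, and `decide_approval` replaces the interactive `input()` with an explicit argument so it is testable:

```python
from dataclasses import dataclass


@dataclass
class StubToolData:
    name: str


@dataclass
class StubApprovalRequest:
    # Hypothetical stand-in for MCPToolApprovalRequest: only the field the
    # callback reads (request.data.name) is modeled.
    data: StubToolData


def decide_approval(request: StubApprovalRequest, answer: str) -> dict:
    # Same decision shape as approval_callback in the example above, with
    # input() replaced by the `answer` parameter.
    result = {"approve": answer == "y"}
    if not result["approve"]:
        result["reason"] = "User denied"
    return result


req = StubApprovalRequest(StubToolData("fetch_docs"))
print(decide_approval(req, "y"))  # {'approve': True}
print(decide_approval(req, "n"))  # {'approve': False, 'reason': 'User denied'}
```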

examples/hosted_mcp/simple.py

Lines changed: 47 additions & 0 deletions

```python
import argparse
import asyncio

from agents import Agent, HostedMCPTool, Runner

"""This example demonstrates how to use the hosted MCP support in the OpenAI Responses API, with
approvals not required for any tools. You should only use this for trusted MCP servers."""


async def main(verbose: bool, stream: bool):
    agent = Agent(
        name="Assistant",
        tools=[
            HostedMCPTool(
                tool_config={
                    "type": "mcp",
                    "server_label": "gitmcp",
                    "server_url": "https://gitmcp.io/openai/codex",
                    "require_approval": "never",
                }
            )
        ],
    )

    if stream:
        result = Runner.run_streamed(agent, "Which language is this repo written in?")
        async for event in result.stream_events():
            if event.type == "run_item_stream_event":
                print(f"Got event of type {event.item.__class__.__name__}")
        print(f"Done streaming; final result: {result.final_output}")
    else:
        result = await Runner.run(agent, "Which language is this repo written in?")
        print(result.final_output)
        # The repository is primarily written in multiple languages, including Rust and TypeScript...

    if verbose:
        for item in result.new_items:
            print(item)


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--verbose", action="store_true", default=False)
    parser.add_argument("--stream", action="store_true", default=False)
    args = parser.parse_args()

    asyncio.run(main(args.verbose, args.stream))
```

examples/tools/code_interpreter.py

Lines changed: 34 additions & 0 deletions

```python
import asyncio

from agents import Agent, CodeInterpreterTool, Runner, trace


async def main():
    agent = Agent(
        name="Code interpreter",
        instructions="You love doing math.",
        tools=[
            CodeInterpreterTool(
                tool_config={"type": "code_interpreter", "container": {"type": "auto"}},
            )
        ],
    )

    with trace("Code interpreter example"):
        print("Solving math problem...")
        result = Runner.run_streamed(agent, "What is the square root of 273 * 312821 plus 1782?")
        async for event in result.stream_events():
            if (
                event.type == "run_item_stream_event"
                and event.item.type == "tool_call_item"
                and event.item.raw_item.type == "code_interpreter_call"
            ):
                print(f"Code interpreter code:\n```\n{event.item.raw_item.code}\n```\n")
            elif event.type == "run_item_stream_event":
                print(f"Other event: {event.item.type}")

        print(f"Final output: {result.final_output}")


if __name__ == "__main__":
    asyncio.run(main())
```

examples/tools/image_generator.py

Lines changed: 54 additions & 0 deletions

```python
import asyncio
import base64
import os
import subprocess
import sys
import tempfile

from agents import Agent, ImageGenerationTool, Runner, trace


def open_file(path: str) -> None:
    if sys.platform.startswith("darwin"):
        subprocess.run(["open", path], check=False)  # macOS
    elif os.name == "nt":  # Windows
        os.startfile(path)  # type: ignore
    elif os.name == "posix":
        subprocess.run(["xdg-open", path], check=False)  # Linux/Unix
    else:
        print(f"Don't know how to open files on this platform: {sys.platform}")


async def main():
    agent = Agent(
        name="Image generator",
        instructions="You are a helpful agent.",
        tools=[
            ImageGenerationTool(
                tool_config={"type": "image_generation", "quality": "low"},
            )
        ],
    )

    with trace("Image generation example"):
        print("Generating image, this may take a while...")
        result = await Runner.run(
            agent, "Create an image of a frog eating a pizza, comic book style."
        )
        print(result.final_output)
        for item in result.new_items:
            if (
                item.type == "tool_call_item"
                and item.raw_item.type == "image_generation_call"
                and (img_result := item.raw_item.result)
            ):
                with tempfile.NamedTemporaryFile(suffix=".png", delete=False) as tmp:
                    tmp.write(base64.b64decode(img_result))
                    temp_path = tmp.name

                # Open the image
                open_file(temp_path)


if __name__ == "__main__":
    asyncio.run(main())
```
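The save step in the example simply base64-decodes the tool result into a `.png` file. A self-contained illustration of that roundtrip, with fake bytes standing in for the image data the API would return:

```python
import base64
import tempfile

# Stand-in for item.raw_item.result: the Responses API returns the image as a
# base64-encoded string. These bytes are just a PNG signature, not a real image.
fake_png = b"\x89PNG\r\n\x1a\n"
img_result = base64.b64encode(fake_png).decode("ascii")

# Same decode-and-save step as the example above.
with tempfile.NamedTemporaryFile(suffix=".png", delete=False) as tmp:
    tmp.write(base64.b64decode(img_result))
    temp_path = tmp.name

with open(temp_path, "rb") as f:
    print(f.read() == fake_png)  # True: decoding reproduced the original bytes
```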
