forked from openai/openai-agents-python
merge from origin #3
vcshih wants to merge 240 commits into veris-ai:v0.0.16-tool-call from openai:main
+22,375 −3,271
Conversation
## Summary

- avoid infinite recursion in visualization by tracking visited agents
- test cycle detection in graph utility

## Testing

- `make mypy`
- `make tests`

Resolves #668
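The visited-set technique this fix describes can be sketched without the SDK. The `draw_graph` name and dict-based agent shape below are invented for illustration; the real graph utility works on `Agent` objects.

```python
# Hypothetical sketch: track agents already drawn so a handoff cycle
# cannot recurse forever.
def draw_graph(agent, edges=None, visited=None):
    """Collect handoff edges, skipping agents already visited."""
    if edges is None:
        edges = []
    if visited is None:
        visited = set()
    if id(agent) in visited:  # cycle detected: stop recursing
        return edges
    visited.add(id(agent))
    for target in agent["handoffs"]:
        edges.append((agent["name"], target["name"]))
        draw_graph(target, edges, visited)
    return edges

# Two agents that hand off to each other form a cycle.
a = {"name": "a", "handoffs": []}
b = {"name": "b", "handoffs": [a]}
a["handoffs"].append(b)
edges = draw_graph(a)  # terminates instead of recursing forever
```

Without the `visited` check, the mutual handoff between `a` and `b` would recurse until the interpreter hit its recursion limit.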
## Summary

- mention MCPServerStreamableHttp in MCP server docs
- document CodeInterpreterTool, HostedMCPTool, ImageGenerationTool and LocalShellTool
- update Japanese translations
## Summary

- avoid AttributeError when Gemini API returns `None` for chat message
- return empty output if message is filtered
- add regression test

## Testing

- `make format`
- `make lint`
- `make mypy`
- `make tests`

Towards #744
This PR adds Portkey AI as a tracing provider. Portkey turns your experimental OpenAI Agents into production-ready systems by providing:

- Complete observability of every agent step, tool use, and interaction
- Built-in reliability with fallbacks, retries, and load balancing
- Cost tracking and optimization to manage your AI spend
- Access to 1600+ LLMs through a single integration
- Guardrails to keep agent behavior safe and compliant
- Version-controlled prompts for consistent agent performance

Towards #786
### Summary

Introduced the `RunErrorDetails` object to get partial results from a run interrupted by a `MaxTurnsExceeded` exception. In this proposal the `RunErrorDetails` object contains all the fields from `RunResult`, with `final_output` set to `None` and `output_guardrail_results` set to an empty list. We can decide to return less information.

@rm-openai At the moment the exception doesn't return the `RunErrorDetails` object for streaming mode (in the `_check_errors` function of `agents/result.py`). Do you have any suggestions on how to deal with it?

### Test plan

I have not implemented any tests yet, but if needed I can implement a basic test to retrieve partial data.

### Issue number

This PR is an attempt to solve issue #719

### Checks

- [x] I've added new tests (if relevant)
- [ ] I've added/updated the relevant documentation
- [x] I've run `make lint` and `make format`
- [x] I've made sure tests pass
Small fix: remove `import litellm.types`, since it sits outside the try/except block that guards the litellm import, so the friendly import-error message isn't displayed on failure; the line isn't actually needed anyway. I came across this while reproducing a GitHub issue.
### Overview

This PR fixes a typo in the assert statement within the `handoff` function in `handoffs.py`, changing `'on_input'` to `'on_handoff'` for accuracy and clarity.

### Changes

- Corrected the word "on_input" to "on_handoff" in the docstring.

### Motivation

Clear and correct documentation improves code readability and reduces confusion for users and contributors.

### Checklist

- [x] I have reviewed the docstring after making the change.
- [x] No functionality is affected.
- [x] The change follows the repository's contribution guidelines.
The documentation in `docs/mcp.md` listed three server types (stdio, HTTP over SSE, Streamable HTTP) but incorrectly stated "two kinds of servers" in the heading. This PR fixes the numerical discrepancy.

**Changes:**

- Modified from "two kinds of servers" to "three kinds of servers".
- File: `docs/mcp.md` (line 11).
Changed the function comment, as `input_guardrails` only deals with input messages.
### Overview

This PR fixes a small typo in the docstring of the `is_strict_json_schema` abstract method of the `AgentOutputSchemaBase` class in `agent_output.py`.

### Changes

- Corrected the word "valis" to "valid" in the docstring.

### Motivation

Clear and correct documentation improves code readability and reduces confusion for users and contributors.

### Checklist

- [x] I have reviewed the docstring after making the change.
- [x] No functionality is affected.
- [x] The change follows the repository's contribution guidelines.
People keep trying to fix this, but it's a breaking change.
The `EmbeddedResource` from an MCP tool call contains a field of type `AnyUrl` that is not JSON-serializable. To avoid this exception, use `item.model_dump(mode="json")` to ensure a JSON-serializable return value.
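A stdlib-only sketch of the underlying problem. The `AnyUrlLike` class and field names here are invented stand-ins, not the real MCP or pydantic types: objects that aren't JSON-native make `json.dumps` raise `TypeError`, so they must be converted to plain types first, which is what `model_dump(mode="json")` does for pydantic models.

```python
import json
from dataclasses import dataclass

# Invented stand-in for a non-JSON-serializable field such as AnyUrl.
@dataclass
class AnyUrlLike:
    url: str

resource = {"uri": AnyUrlLike("https://example.com/doc"), "text": "hello"}

# Serializing the raw object fails: json doesn't know how to encode it.
try:
    json.dumps(resource)
    raised = False
except TypeError:
    raised = True

# Converting to JSON-native types first (what model_dump(mode="json")
# does for pydantic models) yields a serializable payload.
payload = {"uri": resource["uri"].url, "text": resource["text"]}
serialized = json.dumps(payload)
```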
### Summary

Towards #767. We were caching the list of tools for an agent, so if you did `agent.tools.append(...)` from a tool call, the next call to the model wouldn't include the new tool. This is a bug.

### Test Plan

Unit tests. Note that MCP tools are now listed each time the agent runs (users can still cache `list_tools` themselves).
Closes #796. Shouldn't start a busy-waiting thread if there aren't any traces.

Test plan:

```
import threading
assert threading.active_count() == 1
import agents
assert threading.active_count() == 1
```
### Summary

Allows a user to do `function_tool(is_enabled=<some_callable>)`; the callable is called when the agent runs. This allows you to dynamically enable/disable a tool based on the context/env.

The meta-goal is to allow `Agent` to be effectively immutable. That enables some nice things down the line, and this allows you to dynamically modify the tools list without mutating the agent.

### Test Plan

Unit tests
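A toy, SDK-free sketch of the idea. The tool dicts and the `resolve_enabled` helper are invented for illustration; the real SDK resolves `is_enabled` against its own run context type:

```python
# Resolve a tool's is_enabled flag at run time: booleans pass through,
# callables receive the run context and decide dynamically.
def resolve_enabled(is_enabled, context):
    return is_enabled(context) if callable(is_enabled) else bool(is_enabled)

tools = [
    {"name": "search", "is_enabled": True},
    {"name": "admin_reset", "is_enabled": lambda ctx: ctx.get("role") == "admin"},
]

def enabled_tools(tools, context):
    """Return the names of tools enabled for this context."""
    return [t["name"] for t in tools if resolve_enabled(t["is_enabled"], context)]
```

Because the predicate runs on every agent turn, the tool list can change per request without ever mutating the agent itself.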
## Summary

- describe semantic versioning and release steps
- add release page to documentation nav

## Testing

- `make format`
- `make lint`
- `make mypy`
- `make tests`
- `make build-docs`

------

https://chatgpt.com/codex/tasks/task_i_68409d25afdc83218ad362d10c8a80a1
## Summary

- ensure `Handoff.get_transfer_message` emits valid JSON
- test transfer message validity

## Testing

- `make format`
- `make lint`
- `make mypy`
- `make tests`

------

https://chatgpt.com/codex/tasks/task_i_68432f925b048324a16878d28e850841
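A minimal sketch of the fix's principle, assuming the transfer message carries the target agent's name: build the payload with `json.dumps` rather than string formatting, so characters like quotes are escaped and the result is always valid JSON. The function body here is illustrative, not the SDK's exact implementation.

```python
import json

def get_transfer_message(agent_name: str) -> str:
    # json.dumps escapes embedded quotes, backslashes, etc., so the
    # output parses cleanly no matter what the agent is named.
    return json.dumps({"assistant": agent_name})

msg = get_transfer_message('Agent "A"')
```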
In deep agent workflows, each sub-agent automatically performs an LLM step to summarize its tool calls before returning to its parent. This leads to:

1. Excessive latency: every nested agent invokes the LLM, compounding delays.
2. Loss of raw tool data: summaries may strip out details the top-level agent needs.

We discovered that `Agent.as_tool(...)` already accepts an (undocumented) `custom_output_extractor` parameter. By providing a callback, a parent agent can override what the sub-agent returns, e.g. hand back raw tool outputs or a custom slice, so that only the final agent does summarization.

---

This PR adds a "Custom output extraction" section to the Markdown docs under "Agents as tools," with a minimal code example.
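An illustrative-only sketch of the extractor idea: the parent supplies a callback that picks what the sub-agent tool returns, here the last raw tool output instead of an LLM-written summary. The dict-based run-result shape below is invented for the example, not the SDK's real `RunResult` type.

```python
def extract_last_tool_output(run_result: dict) -> str:
    """Return the last raw tool output, falling back to the summary."""
    tool_outputs = [
        item["output"]
        for item in run_result["new_items"]
        if item["type"] == "tool_call_output"
    ]
    return tool_outputs[-1] if tool_outputs else run_result["final_output"]

# A sub-agent run that made one tool call and then summarized it.
run_result = {
    "final_output": "Summary: the data was fetched.",
    "new_items": [
        {"type": "tool_call", "name": "fetch"},
        {"type": "tool_call_output", "output": '{"rows": 42}'},
    ],
}
```

Passing a callback like this as `custom_output_extractor` would skip the summarization step, handing the parent the raw payload directly.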
This PR fixes issue #559 by adding the `tool_call_id` to the `RunContextWrapper` before tools are called. This makes the `tool_call_id` accessible inside a tool's implementation.
Sometimes users want to provide parameters specific to a model provider. This is an escape hatch.
## Summary

- ensure `name_override` is always used in `function_schema`
- test name override when docstring info is disabled

## Testing

- `make format`
- `make lint`
- `make mypy`
- `make tests`

Resolves #860

------

https://chatgpt.com/codex/tasks/task_i_684f1cf885b08321b4dd3f4294e24ca2
I replaced the `timedelta` parameters for MCP timeouts with `float` values, addressing issue #845. Given that the MCP official repository has incorporated these changes in [this PR](modelcontextprotocol/python-sdk#941), updating the MCP version in openai-agents and specifying the timeouts as floats should be enough.
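For call sites that still hold a `timedelta`, the migration is a one-liner via the standard library's `total_seconds()`:

```python
from datetime import timedelta

# Before: timeouts were passed as timedelta objects.
old_timeout = timedelta(seconds=5)

# After: the API takes plain float seconds; total_seconds() converts
# any existing timedelta, including sub-second components.
new_timeout = old_timeout.total_seconds()
```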
Add support for the new OpenAI prompts feature.
## Summary

Fixed a type safety issue where user input was used as a string without conversion to an integer, which could cause runtime errors and type mismatches.

## Problem

The code used `input()`, which returns a string, then used it directly in the f-string without converting it to an integer. This could cause:

- Type mismatches when the string is passed to functions expecting integers
- Runtime errors when users enter non-numeric input
- Inconsistent behavior with the function signature expectations

## Changes Made

Added proper input validation and type conversion:

- Wrapped the input processing in a try-except block
- Converted user input to an integer using `int(user_input)`
- Added error handling for invalid input with a user-friendly message
- Used the converted integer value in the f-string instead of the raw string input

This ensures type safety and provides a better user experience with proper error handling.
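The pattern described above can be sketched as follows; the `read_count` name is a placeholder, not the example's actual code, and the function takes a string parameter instead of calling `input()` so it is testable:

```python
def read_count(raw: str):
    """Parse a user-supplied count; return None (with a message) if invalid."""
    try:
        return int(raw)
    except ValueError:
        print("Please enter a whole number.")
        return None
```

With the conversion in place, the integer can be used safely in an f-string or passed to functions expecting `int`, and malformed input produces a friendly message instead of an unhandled `ValueError`.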
Automated update of translated documentation

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
This pull request migrates the translation script from o3 to gpt-5 model.
Automated update of translated documentation

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
There was a problem with the current implementation: for a single response, many different guardrails might fire. We should have at most one per response.
Automated update of translated documentation

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Some customers have reported that the agent loop can go on for a long time and use up the entire context window. This PR allows modifying the data sent to the model.
We were making deep copies, which is (1) inefficient and (2) causes some pickling errors. Instead, this PR makes shallow copies by calling `list.copy()`. We do want a shallow copy, so that mutations to the copy don't affect the original list.
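A quick illustration of the trade-off between the two copy kinds on a list of message dicts (the data here is invented):

```python
import copy

past = [{"role": "user", "content": "hi"}]

shallow = past.copy()        # new list, same element objects
deep = copy.deepcopy(past)   # new list AND new element objects (costly)

# Appending to the shallow copy does not grow the original list,
# which is all the SDK needs here...
shallow.append({"role": "assistant", "content": "hello"})

# ...while the elements themselves stay shared, avoiding deepcopy's
# overhead and its pickling problems with unpicklable members.
```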
@rzhao-openai was seeing errors about incoming messages being too large. Turns out there's a default limit of 2**20 = 1,048,576 bytes.
### Summary

Adds an `is_enabled` parameter to the `Agent.as_tool()` method for conditionally enabling/disabling agent tools at runtime. Supports boolean values and callable functions for dynamic tool filtering in multi-agent orchestration.

### Test plan

- Added unit tests in `tests/test_agent_as_tool.py`
- Added example in `examples/agent_patterns/agents_as_tools_conditional.py`
- Updated documentation in `docs/tools.md`
- All tests pass

### Issue number

Closes #1097

### Checks

- [x] I've added new tests (if relevant)
- [x] I've added/updated the relevant documentation
- [x] I've run `make lint` and `make format`
- [x] I've made sure tests pass

---------

Co-authored-by: thein <[email protected]>
Automated update of translated documentation Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
…p_on_first_tool` behavior (#1510)
…name" parameter for "input_file" items (#1513)