merge from origin #3


Open · wants to merge 237 commits into base `v0.0.16-tool-call`
Conversation

vcshih

@vcshih vcshih commented Jul 21, 2025

No description provided.

handrew and others added 30 commits May 21, 2025 17:59
## Summary
- avoid infinite recursion in visualization by tracking visited agents
- test cycle detection in graph utility

## Testing
- `make mypy`
- `make tests` 

Resolves #668
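The visited-set idea above can be sketched in a few lines (structure and names are hypothetical, not the SDK's actual visualization code):

```python
# Hypothetical sketch: walk a graph of agents whose handoffs may form
# cycles, tracking visited nodes so the traversal always terminates.
def collect_nodes(agent, visited=None):
    """Return the set of agent names reachable from `agent`, cycle-safe."""
    if visited is None:
        visited = set()
    if agent["name"] in visited:
        return visited  # already visited: stop instead of recursing forever
    visited.add(agent["name"])
    for child in agent.get("handoffs", []):
        collect_nodes(child, visited)
    return visited

# Two agents that hand off to each other, i.e. a cycle:
a = {"name": "triage", "handoffs": []}
b = {"name": "billing", "handoffs": [a]}
a["handoffs"].append(b)
reachable = collect_nodes(a)
```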
## Summary
- mention MCPServerStreamableHttp in MCP server docs
- document CodeInterpreterTool, HostedMCPTool, ImageGenerationTool and
LocalShellTool
- update Japanese translations
## Summary
- avoid AttributeError when Gemini API returns `None` for chat message
- return empty output if message is filtered
- add regression test

## Testing
- `make format`
- `make lint`
- `make mypy`
- `make tests`

Towards #744
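The defensive check described above can be sketched like this (a simplified stand-in for the real response-parsing code, using plain dicts):

```python
# Hypothetical sketch: if the provider returns `None` for the chat
# message (e.g. filtered content), return an empty output instead of
# raising AttributeError when reading the message content.
def extract_output(choice: dict) -> str:
    message = choice.get("message")
    if message is None:
        return ""  # filtered or missing message -> empty output, no crash
    return message.get("content") or ""

filtered = {"message": None}
normal = {"message": {"content": "hello"}}
```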
This PR adds Portkey AI as a tracing provider. Portkey helps you take
your OpenAI agents from prototype to production.

Portkey turns your experimental OpenAI Agents into production-ready
systems by providing:

- Complete observability of every agent step, tool use, and interaction
- Built-in reliability with fallbacks, retries, and load balancing
- Cost tracking and optimization to manage your AI spend
- Access to 1600+ LLMs through a single integration
- Guardrails to keep agent behavior safe and compliant
- Version-controlled prompts for consistent agent performance


Towards #786
### Summary

Introduced the `RunErrorDetails` object to get partial results from a
run interrupted by `MaxTurnsExceeded` exception. In this proposal the
`RunErrorDetails` object contains all the fields from `RunResult` with
`final_output` set to `None` and `output_guardrail_results` set to an
empty list. We can decide to return less information.

@rm-openai At the moment the exception doesn't return the
`RunErrorDetails` object in streaming mode. Do you have any
suggestions on how to handle this? The relevant code is in the
`_check_errors` function of `agents/result.py`.

### Test plan

I have not implemented any tests currently, but if needed I can
implement a basic test to retrieve partial data.

### Issue number

This PR is an attempt to solve issue #719 

### Checks

- [x] I've added new tests (if relevant)
- [ ] I've added/updated the relevant documentation
- [x] I've run `make lint` and `make format`
- [x] I've made sure tests pass
Small fix:

Removing `import litellm.types`: it sits outside the try/except block
that guards the litellm import, so the friendly import-error message
isn't displayed, and the line isn't actually needed. I was reproducing a
GitHub issue and came across this in the process.
### Overview

This PR fixes a typo in the assert statement within the `handoff`
function in `handoffs.py`, changing `'on_input'` to `'on_handoff'` for
accuracy and clarity.

### Changes

- Corrected the word “on_input” to “on_handoff” in the docstring.

### Motivation

Clear and correct documentation improves code readability and reduces
confusion for users and contributors.

### Checklist

- [x] I have reviewed the docstring after making the change.
- [x] No functionality is affected.
- [x] The change follows the repository’s contribution guidelines.
The documentation in `docs/mcp.md` listed three server types (stdio,
HTTP over SSE, Streamable HTTP) but incorrectly stated "two kinds of
servers" in the heading. This PR fixes the numerical discrepancy.

**Changes:** 

- Modified from "two kinds of servers" to "three kinds of servers". 
- File: `docs/mcp.md` (line 11).
Changed the function comment, as `input_guardrails` only deals with
input messages.
### Overview

This PR fixes a small typo in the docstring of the
`is_strict_json_schema` abstract method of the `AgentOutputSchemaBase`
class in `agent_output.py`.

### Changes

- Corrected the word “valis” to “valid” in the docstring.

### Motivation

Clear and correct documentation improves code readability and reduces
confusion for users and contributors.

### Checklist

- [x] I have reviewed the docstring after making the change.
- [x] No functionality is affected.
- [x] The change follows the repository’s contribution guidelines.
People keep trying to fix this, but it's a breaking change.
This pull request resolves #777. If you think we should introduce a new
item type for MCP call output, please let me know. Since other hosted
tools use this event, I believe reusing it should be fine.
The EmbeddedResource from MCP tool call contains a field with type
AnyUrl that is not JSON-serializable. To avoid this exception, use
item.model_dump(mode="json") to ensure a JSON-serializable return value.
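A minimal illustration of the fix, assuming Pydantic v2 (the `Resource` model here is a hypothetical stand-in; MCP's actual `EmbeddedResource` has more fields):

```python
import json
from pydantic import AnyUrl, BaseModel

class Resource(BaseModel):
    uri: AnyUrl  # AnyUrl instances are not JSON-serializable as-is

item = Resource(uri="https://example.com/data")

# mode="json" coerces AnyUrl (and similar types) to plain strings,
# so the result can be passed to json.dumps without a TypeError.
payload = item.model_dump(mode="json")
serialized = json.dumps(payload)
```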
### Summary:
Towards #767. We were caching the list of tools for an agent, so if you
did `agent.tools.append(...)` from a tool call, the next call to the
model wouldn't include the new tool. This is a bug.

### Test Plan:
Unit tests. Note that now MCP tools are listed each time the agent runs
(users can still cache the `list_tools` however).
Closes #796. Shouldn't start a busy-waiting thread if there aren't any
traces.

Test plan
```
import threading
assert threading.active_count() == 1
import agents
assert threading.active_count() == 1
```
### Summary:
Allows a user to do `function_tool(is_enabled=<some_callable>)`; the
callable is called when the agent runs.

This allows you to dynamically enable/disable a tool based on the
context/env.

The meta-goal is to allow `Agent` to be effectively immutable. That
enables some nice things down the line, and this allows you to
dynamically modify the tools list without mutating the agent.

### Test Plan:
Unit tests
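The behavior can be sketched generically (a toy `Tool` type, not the SDK's): `is_enabled` is either a bool or a callable evaluated against the run context each time the agent runs.

```python
from dataclasses import dataclass
from typing import Callable, Union

@dataclass
class Tool:
    name: str
    is_enabled: Union[bool, Callable[[dict], bool]] = True

def enabled_tools(tools, context):
    """Filter tools, calling is_enabled(context) when it is a callable."""
    result = []
    for tool in tools:
        flag = tool.is_enabled(context) if callable(tool.is_enabled) else tool.is_enabled
        if flag:
            result.append(tool)
    return result

tools = [
    Tool("search"),
    Tool("admin_reset", is_enabled=lambda ctx: ctx.get("role") == "admin"),
]
```

Because the flag is re-evaluated per run, the tool list can change without mutating the agent itself.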
bump version
## Summary
- describe semantic versioning and release steps
- add release page to documentation nav

## Testing
- `make format`
- `make lint`
- `make mypy`
- `make tests`
- `make build-docs`


------
https://chatgpt.com/codex/tasks/task_i_68409d25afdc83218ad362d10c8a80a1
## Summary
- ensure `Handoff.get_transfer_message` emits valid JSON
- test transfer message validity

## Testing
- `make format`
- `make lint`
- `make mypy`
- `make tests`


------
https://chatgpt.com/codex/tasks/task_i_68432f925b048324a16878d28e850841
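The gist of the fix can be shown with a hypothetical version of `get_transfer_message`: building the payload with `json.dumps` rather than string formatting guarantees valid JSON even when the agent name contains quotes.

```python
import json

def get_transfer_message(agent_name: str) -> str:
    # json.dumps handles quoting and escaping, unlike f-string templating.
    return json.dumps({"assistant": agent_name})

msg = get_transfer_message('Agent "Alpha"')
parsed = json.loads(msg)  # round-trips even with embedded quotes
```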
In deep agent workflows, each sub-agent automatically performs an LLM
step to summarize its tool calls before returning to its parent. This
leads to:
1. Excessive latency: every nested agent invokes the LLM, compounding
delays.
2. Loss of raw tool data: summaries may strip out details the top-level
agent needs.

We discovered that `Agent.as_tool(...)` already accepts an
(undocumented) `custom_output_extractor` parameter. By providing a
callback, a parent agent can override what the sub-agent returns, e.g.
hand back raw tool outputs or a custom slice, so that only the final
agent does summarization.

---

This PR adds a “Custom output extraction” section to the Markdown docs
under “Agents as tools,” with a minimal code example.
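The pattern can be sketched as follows (the item shapes are hypothetical stand-ins for the SDK's run items): the extractor pulls the most recent raw tool output instead of the sub-agent's LLM summary.

```python
def last_tool_output(run_items) -> str:
    """A custom output extractor: return the most recent raw tool output."""
    for item in reversed(run_items):
        if item["type"] == "tool_call_output":
            return item["output"]
    return ""

items = [
    {"type": "tool_call_output", "output": '{"rows": 42}'},
    {"type": "message", "output": "A prose summary of the results..."},
]
```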
This PR fixes issue #559 by adding the `tool_call_id` to the
`RunContextWrapper` prior to calling tools. This gives tool
implementations access to the `tool_call_id`.
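A minimal sketch of the idea (the wrapper and invocation here are simplified stand-ins for the SDK's types): the runner stamps the `tool_call_id` onto the context wrapper just before invoking the tool.

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class RunContextWrapper:
    context: Any
    tool_call_id: Optional[str] = None

def invoke_tool(wrapper, tool_fn, call_id, **kwargs):
    wrapper.tool_call_id = call_id  # set just before the call
    return tool_fn(wrapper, **kwargs)

def my_tool(ctx: RunContextWrapper, city: str) -> str:
    # The tool body can now read the id of the call that triggered it.
    return f"{ctx.tool_call_id}:{city}"

wrapper = RunContextWrapper(context={})
result = invoke_tool(wrapper, my_tool, "call_123", city="Paris")
```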
Sometimes users want to provide parameters specific to a model provider.
This is an escape hatch.
## Summary
- ensure `name_override` is always used in `function_schema`
- test name override when docstring info is disabled

## Testing
- `make format`
- `make lint`
- `make mypy`
- `make tests`

Resolves #860
------
https://chatgpt.com/codex/tasks/task_i_684f1cf885b08321b4dd3f4294e24ca2
I replaced the `timedelta` parameters for MCP timeouts with `float`
values, addressing issue #845.

Given that the MCP official repository has incorporated these changes in
[this PR](modelcontextprotocol/python-sdk#941),
updating the MCP version in openai-agents and specifying the timeouts as
floats should be enough.
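For callers migrating from the old signature, a small (hypothetical) adapter shows the conversion:

```python
from datetime import timedelta
from typing import Union

def to_seconds(timeout: Union[float, int, timedelta]) -> float:
    """Accept a legacy timedelta or a plain number; return float seconds."""
    if isinstance(timeout, timedelta):
        return timeout.total_seconds()
    return float(timeout)
```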
Add support for the new openai prompts feature.
abdullahimran49 and others added 30 commits August 13, 2025 14:32
…delSettings (#1439)

This pull request resolves #1407 ; the "minimal" reasoning effort param
is already supported.
This pull request adds a simple gpt-oss example app
## Summary

Fixed a type safety issue where user input was being used as a string
without conversion to integer, which could cause runtime errors and type
mismatches.

## Problem

The code was using `input()` which returns a string, but then using it
directly in the f-string without converting it to an integer. This could
cause:
- Type mismatches when the string is passed to functions expecting
integers
- Runtime errors when users enter non-numeric input
- Inconsistent behavior with the function signature expectations

## Changes Made

Added proper input validation and type conversion:
- Wrapped the input processing in a try-except block
- Convert user input to integer using `int(user_input)`
- Added error handling for invalid input with user-friendly message
- Used the converted integer value in the f-string instead of raw string
input

This ensures type safety and provides better user experience with proper
error handling.
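The fix described above boils down to this pattern (the function name is hypothetical):

```python
def parse_count(user_input: str):
    """Convert raw input() text to an int, or None if it isn't numeric."""
    try:
        return int(user_input)
    except ValueError:
        return None  # caller prints a friendly message instead of crashing

value = parse_count("42")       # valid numeric input
invalid = parse_count("forty")  # non-numeric input is rejected safely
```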
Automated update of translated documentation

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
This pull request migrates the translation script from o3 to gpt-5
model.
Automated update of translated documentation

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
There was a problem with the current implementation, where for a single
response, we might have many different guardrails fire. We should have
at most one per response.
Automated update of translated documentation

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Some customers have reported that the agent loop can go on for a long
time and use up the entire context window. This PR allows modifying the
data sent to the model.
We were making deep copies, which is (1) inefficient and (2) causes some
pickling errors.

Instead, this PR just makes shallow copies, calling list.copy(). We do
want a shallow copy so that mutations don't affect the original past-end
list.
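The difference is easy to demonstrate: `list.copy()` yields an independent top-level list (later appends don't leak into the snapshot) while still sharing the item objects, avoiding deepcopy's cost and pickling problems.

```python
original = [{"role": "user", "content": "hi"}]
snapshot = original.copy()  # shallow: new list, same item objects

original.append({"role": "assistant", "content": "hello"})
# The snapshot is unaffected by the append, but the items it already
# held are shared with the original, not duplicated.
```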
@rzhao-openai was seeing errors about incoming messages being too large.
Turns out there's a default limit of 2**20 = 1,048,576 bytes.
### Summary

Adds `is_enabled` parameter to `Agent.as_tool()` method for
conditionally enabling/disabling agent tools at runtime. Supports
boolean values and callable functions for dynamic tool filtering
in multi-agent orchestration.

### Test plan

- Added unit tests in `tests/test_agent_as_tool.py`
- Added example in `examples/agent_patterns/agents_as_tools_conditional.py`
- Updated documentation in `docs/tools.md`
- All tests pass

### Issue number

Closes #1097

### Checks

- [x] I've added new tests (if relevant)
- [x] I've added/updated the relevant documentation
- [x] I've run `make lint` and `make format`
- [x] I've made sure tests pass

---------

Co-authored-by: thein <[email protected]>
Automated update of translated documentation

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>