44 changes: 44 additions & 0 deletions README.md
@@ -118,6 +118,50 @@ response = client.responses.create(
],
)
```
### Updated Responses API Pattern
#### Important: Function calling differences

**⚠️ Breaking Change:** The Responses API uses a different function calling pattern than Chat Completions.

**Chat Completions pattern (❌ Not supported in the Responses API):**

```python
# This DOES NOT work with the Responses API
messages = [
    {"role": "assistant", "tool_calls": [...]},  # ❌ Raises an error
    {"role": "tool", "content": "...", "tool_call_id": "..."},  # ❌ Raises an error
]
```

**Responses API pattern (✅ Correct approach):**

```python
from openai import OpenAI

client = OpenAI()

tools = [{"type": "function", "name": "get_weather", ...}]
input_messages = [{"role": "user", "content": "What's the weather?"}]

# Initial request
response = client.responses.create(
    model="gpt-4o",
    tools=tools,
    input=input_messages,
)

# Append the entire model output to the running input
input_messages += response.output

# Execute function calls and append their results
for item in response.output:
    if item.type == "function_call":
        result = execute_function(item.name, item.arguments)
        input_messages.append({
            "type": "function_call_output",  # Use 'type', not 'role'
            "call_id": item.call_id,
            "output": result,
        })

# Get the final response
final = client.responses.create(model="gpt-4o", tools=tools, input=input_messages)
```
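
Here, `execute_function` stands in for your own dispatch logic. A minimal sketch might look like the following (the `location` argument and returned fields are illustrative, not part of the SDK):

```python
import json


def execute_function(name: str, arguments: str) -> str:
    """Dispatch a model-requested function call to local code (illustrative)."""
    args = json.loads(arguments)  # 'arguments' arrives as a JSON-encoded string
    if name == "get_weather":
        # Hypothetical implementation; replace with a real weather lookup
        return json.dumps({"location": args.get("location"), "temperature_c": 20})
    raise ValueError(f"Unknown function: {name}")
```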

See `examples/responses/function_calling_migration.py` for a complete migration guide.

## Async usage

132 changes: 132 additions & 0 deletions docs/RESPONSES_MIGRATION.md
@@ -0,0 +1,132 @@
# Migrating Function Calling from Chat Completions to Responses API

## Overview

The Responses API (`/v1/responses`) uses a fundamentally different approach to function calling compared to the Chat Completions API (`/v1/chat/completions`). This guide helps you migrate existing code.

## Issue Reference

- GitHub Issue: [#2677](https://github.com/openai/openai-python/issues/2677)
- API Documentation: [Function Calling Guide](https://platform.openai.com/docs/guides/function-calling?api-mode=responses)

## Key Differences

| Aspect | Chat Completions | Responses API |
|--------|-----------------|---------------|
| **Function requests from model** | `role: "assistant"` with `tool_calls` | `type: "function_call"` items in output |
| **Function results to model** | `role: "tool"` messages | `type: "function_call_output"` items |
| **Conversation management** | Manual message construction | Append entire `response.output` |
| **Tool role support** | ✅ Supported | ❌ Not supported (raises error) |
| **Assistant tool_calls in input** | ✅ Supported | ❌ Not supported (raises error) |
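
Concretely, the item the model emits and the item you send back look roughly like this (a sketch using the `get_weather` example; IDs and values are illustrative):

```python
# Emitted by the model in response.output (alongside any text items):
function_call_item = {
    "type": "function_call",
    "call_id": "call_abc123",               # illustrative ID
    "name": "get_weather",
    "arguments": '{"location": "Paris"}',   # JSON-encoded string
}

# Appended by you to the next request's input:
function_call_output_item = {
    "type": "function_call_output",
    "call_id": "call_abc123",               # must match the call_id above
    "output": '{"temperature_c": 20}',
}
```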

## Error Messages You Might See

```json
{
  "error": {
    "message": "Unknown parameter: 'input.tool_calls'.",
    "type": "invalid_request_error"
  }
}
```

```json
{
  "error": {
    "message": "Invalid value: 'tool'. Supported values are: 'assistant', 'system', 'developer', and 'user'.",
    "type": "invalid_request_error"
  }
}
```

These errors indicate you're using Chat Completions patterns in the Responses API.
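
For example, a request along these lines (shown only as the anti-pattern, not something to copy) produces an error like the second one above, because `tool` is not an accepted role in Responses API input:

```python
from openai import OpenAI

client = OpenAI()

# ❌ Anti-pattern: a Chat Completions-style 'tool' message passed to the Responses API
client.responses.create(
    model="gpt-4o",
    input=[
        {"role": "user", "content": "What's the weather?"},
        {"role": "tool", "content": '{"temp": 20}'},
    ],
)
```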

## Migration Steps

### Step 1: Remove Manual Tool Message Construction

**Before (Chat Completions):**
```python
messages = [
    {"role": "user", "content": "What's the weather?"},
    {"role": "assistant", "tool_calls": [{"id": "call_123", ...}]},
    {"role": "tool", "content": '{"temp": 20}', "tool_call_id": "call_123"},
]
```

**After (Responses API):**
```python
input_messages = [
    {"role": "user", "content": "What's the weather?"}
]

response = client.responses.create(model="gpt-4o", tools=tools, input=input_messages)
input_messages.extend(response.output)  # Append the entire output
```

### Step 2: Update Function Output Format

**Before:**
```python
messages.append({
    "role": "tool",
    "tool_call_id": tool_call.id,
    "content": json.dumps(result)
})
```

**After:**
```python
input_messages.append({
    "type": "function_call_output",
    "call_id": item.call_id,
    "output": json.dumps(result)
})
```

### Step 3: Update Function Call Detection

**Before:**
```python
if response_message.tool_calls:
    for tool_call in response_message.tool_calls:
        ...  # Execute the function
```

**After:**
```python
for item in response.output:
    if item.type == "function_call":
        ...  # Execute the function
```

## Complete Example

See [`examples/responses/function_calling_migration.py`](../examples/responses/function_calling_migration.py) for a fully working example.
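
If you just want the shape of the full round trip, here is a condensed sketch. It assumes a `tools` list and an `execute_function` dispatcher (accepting the JSON-encoded `arguments` string and returning a string) defined as in the README example:

```python
from openai import OpenAI

client = OpenAI()

input_messages = [{"role": "user", "content": "What's the weather in Paris?"}]

response = client.responses.create(model="gpt-4o", tools=tools, input=input_messages)
input_messages += response.output  # Append the entire output

for item in response.output:
    if item.type == "function_call":
        # item.arguments is a JSON-encoded string; parse it in your dispatcher as needed
        result = execute_function(item.name, item.arguments)
        input_messages.append({
            "type": "function_call_output",
            "call_id": item.call_id,
            "output": result,  # must be a string
        })

final = client.responses.create(model="gpt-4o", tools=tools, input=input_messages)
print(final.output_text)
```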

## Alternative: Use Built-in Tools

For common tasks, consider using OpenAI's built-in tools instead of custom functions:

```python
response = client.responses.create(
    model="gpt-4o",
    input="Search for latest AI news",
    tools=[{"type": "web_search_preview"}],  # No manual loop needed
)

print(response.output_text)  # Automatically includes search results
```

## When to Use Each API

- **Use Chat Completions** if you need the traditional conversation format or are integrating with existing tooling
- **Use Responses API** for new projects, especially those using reasoning models or built-in tools

Both APIs are supported indefinitely.

## Additional Resources

- [Official Migration Guide](https://platform.openai.com/docs/guides/responses-vs-chat-completions)
- [Function Calling Documentation](https://platform.openai.com/docs/guides/function-calling?api-mode=responses)
- [Example Code](../examples/responses/)
27 changes: 27 additions & 0 deletions examples/responses/README.md
@@ -0,0 +1,27 @@
# Responses API Examples

Examples demonstrating the Responses API functionality.

## Files

- **`function_calling_migration.py`** - Complete guide for migrating from Chat Completions tool calling to Responses API function calling. Addresses [Issue #2677](https://github.com/openai/openai-python/issues/2677).

## Key Differences from Chat Completions

The Responses API does **NOT** support:
- `role: "assistant"` messages with `tool_calls` in input
- `role: "tool"` messages

Instead, use:
- `type: "function_call"` items (from model output)
- `type: "function_call_output"` items (your function results)

See `function_calling_migration.py` for detailed examples.
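
For a quick sense of the two item shapes involved (IDs and values are illustrative):

```python
# Emitted by the model in response.output:
model_item = {"type": "function_call", "call_id": "call_abc",
              "name": "get_weather", "arguments": '{"location": "Paris"}'}

# Sent back by you in the next request's input:
your_item = {"type": "function_call_output", "call_id": "call_abc",
             "output": '{"temperature_c": 20}'}
```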

## Running Examples

```bash
export OPENAI_API_KEY="your-api-key"
python examples/responses/function_calling_migration.py
```