Description
Feature Area
Core functionality
Is your feature request related to an existing bug? Please link it here.
N/A
Describe the solution you'd like
I'd like to add native support for OpenAI's Responses API (/v1/responses) as a new LLM provider in CrewAI. This new API offers significant advantages over the traditional Chat Completions API for agent-based workflows:
Key Benefits:
Simpler input format - Use plain strings or structured input instead of complex message arrays
Built-in conversation management - Stateful interactions with previous_response_id for multi-turn conversations
Native tool support - Cleaner function calling semantics
Streaming - Real-time token streaming with simpler event handling
Structured outputs - Native JSON schema validation with Pydantic models
Better support for o-series reasoning models - reasoning_effort parameter for o1/o3/o4 models
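To make the "simpler input format" and "built-in conversation management" points concrete, here is a rough sketch comparing a Chat Completions request body with a Responses API request body. The field names follow OpenAI's published API shape (input, previous_response_id), but the exact payloads and the placeholder response id are illustrative assumptions, not a spec.

```python
# Sketch: request payload shapes (illustrative, not a spec).

# Chat Completions: every turn re-sends the full message array.
chat_completions_request = {
    "model": "gpt-4o",
    "messages": [
        {"role": "system", "content": "You are a research assistant."},
        {"role": "user", "content": "Summarize the topic."},
    ],
}

# Responses API: a plain string input; conversation state lives
# server-side and is referenced by previous_response_id.
responses_request = {
    "model": "gpt-4o",
    "input": "Summarize the topic.",
}

# A follow-up turn sends only the new input plus the id of the prior
# response (the id value below is a made-up placeholder).
follow_up_request = {
    "model": "gpt-4o",
    "input": "Now shorten that to one sentence.",
    "previous_response_id": "resp_placeholder_id",
}

print(type(responses_request["input"]).__name__)  # a plain str, not a message list
```

The point of the sketch: multi-turn state no longer requires the client to accumulate and resend message history, which is exactly the bookkeeping an agent framework otherwise has to do itself.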
Proposed Implementation:
New LLM Provider Class: OpenAIResponsesCompletion that extends BaseLLM
LLM Factory Integration: Support via provider="openai_responses" parameter or model prefix openai_responses/gpt-4o
Full CrewAI Compatibility: Works seamlessly with Agent, Task, and Crew classes
Usage Examples:
from crewai import Agent, Task, Crew
from crewai.llm import LLM
# Option 1: Using provider parameter
llm = LLM(model="gpt-4o", provider="openai_responses")
# Option 2: Using model prefix
llm = LLM(model="openai_responses/gpt-4o-mini")
# Works with all CrewAI components
agent = Agent(
role="Research Analyst",
goal="Find and summarize information",
backstory="Expert researcher",
llm=llm,
)
task = Task(
description="Research the topic",
expected_output="Summary",
agent=agent,
)
crew = Crew(agents=[agent], tasks=[task])
result = crew.kickoff()
Describe alternatives you've considered
No response
Additional context
No response
Willingness to Contribute
Yes, I'd be happy to submit a pull request