Conversation

@JasonTheAdams
Member

This introduces support for code-execution tools. With such a tool, code can be sent to the model, executed there, and the results returned. It also implements the tool for the OpenAI Responses API, which has the most complex code-execution support among the three "big" providers. The only parameter common to all three is a container ID, so that is the only explicitly supported parameter; additional parameters are still supported, but passed through generically.
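As a rough illustration of the approach described above, here is a minimal sketch of how a provider-agnostic code-execution tool definition might translate into the OpenAI Responses API's `code_interpreter` tool shape. The function name `build_code_execution_tool` is hypothetical and not from this PR; only the container-ID-plus-generic-extras design reflects the description.

```python
def build_code_execution_tool(container_id=None, **extra):
    """Sketch: build a code_interpreter tool entry for the Responses API.

    `container_id` is the one parameter common to all three providers;
    anything in `extra` is passed through generically, mirroring the
    PR's approach. This is an illustrative helper, not the PR's code.
    """
    tool = {"type": "code_interpreter"}
    # The Responses API accepts either an existing container id or an
    # {"type": "auto"} object asking the provider to create a container.
    tool["container"] = container_id if container_id else {"type": "auto"}
    tool.update(extra)
    return tool

# Reuse an existing container:
print(build_code_execution_tool("cntr_123"))
# Let the provider create one, with a generic extra parameter passed through:
print(build_code_execution_tool(timeout=60))
```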

Base automatically changed from add/proper-openai-provider-implementation to trunk January 16, 2026 18:06