Fix Local shell tool: return tool output to the LLM #1855
Based on prior work in #732, the Local Shell Tool was nearly complete but still didn’t work in practice. This PR finishes the logic, adds tests, and provides a working example.
Changes
Return tool output to the LLM
The local command ran, but its output was not returned to the LLM. This PR sends the output back, as described in the Local Shell guide: https://platform.openai.com/docs/guides/tools-local-shell
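For orientation, here is a minimal sketch of that round trip against the raw Responses API, following the shapes in the guide linked above. The model name and the single-pass loop are illustrative, not this PR's code:

```python
import subprocess

from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="codex-mini-latest",
    tools=[{"type": "local_shell"}],
    input="List the files in the current directory.",
)

for item in response.output:
    if item.type == "local_shell_call":
        # Run the command the model requested.
        completed = subprocess.run(
            item.action.command,
            capture_output=True,
            text=True,
        )
        # Send the command's output back to the model: the step this PR adds.
        response = client.responses.create(
            model="codex-mini-latest",
            previous_response_id=response.id,
            tools=[{"type": "local_shell"}],
            input=[
                {
                    "type": "local_shell_call_output",
                    "call_id": item.call_id,
                    "output": completed.stdout,
                }
            ],
        )

print(response.output_text)
```

A real agent loop would repeat this until the model stops issuing `local_shell_call` items; the sketch handles a single pass for brevity.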
Use `call_id`
Upstream (openai-python) `LocalShellCallOutput` uses `id`, but the server actually expects `call_id`. This PR sets `call_id` so the server accepts the output. A small type-check ignore workaround is included until the upstream type is fixed.
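Illustratively, the output item ends up shaped like this (the values are made up; in the PR the item is built through the upstream type, with a type-check ignore on the extra key):

```python
# Hypothetical values for illustration. The upstream LocalShellCallOutput
# type declares `id`, but the server only accepts `call_id`.
call_id = "lsh_123"  # taken from the corresponding local_shell_call item
output_item = {
    "type": "local_shell_call_output",
    "call_id": call_id,  # what the server expects, not `id`
    "output": "README.md\npyproject.toml\n",
}
```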
Add `examples/tools/local_shell.py`

A working example (a condensed sketch follows).
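A condensed sketch of what such an example looks like, assuming the SDK's `LocalShellTool` wraps a user-supplied executor callable; names and signatures here are illustrative, see the actual file in the diff:

```python
import asyncio
import subprocess

from agents import Agent, LocalShellTool, Runner


def shell_executor(request) -> str:
    """Run the command the model requested and return its stdout.

    `request.data.action.command` follows the local_shell_call shape;
    the exact request type comes from the SDK.
    """
    completed = subprocess.run(
        request.data.action.command,
        capture_output=True,
        text=True,
    )
    return completed.stdout


async def main() -> None:
    agent = Agent(
        name="Shell agent",
        model="codex-mini-latest",  # the local shell tool requires a codex model
        tools=[LocalShellTool(executor=shell_executor)],
    )
    result = await Runner.run(agent, "List the files in this directory.")
    print(result.final_output)


if __name__ == "__main__":
    asyncio.run(main())
```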
Add tests

Before fix

Before fix
Running the example produced errors (output screenshots omitted here): the shell command executed, but its output was never returned to the model, and the server rejected the output item that set `id` instead of `call_id`.
After fix
The example runs successfully and returns the expected result.