Add codex exec support as an LLM provider using pre-existing ChatGPT Pro accounts #2457
Conversation
Agent-Logs-Url: https://github.com/FloraSync/deepagents/sessions/84b8ad80-a861-4d81-968f-0faf860b77a1 Co-authored-by: shoesCodeFor <15178737+shoesCodeFor@users.noreply.github.com>
[WIP] Add codex exec support as LLM provider in deepagents-cli
This PR has been automatically closed because it does not link to an approved issue. All external contributions must reference an approved issue or discussion.
Maintainers: reopen this PR or remove the
Security Issues

- Argument Injection: The prompt string is passed as a positional argument to an external CLI (`codex exec`) without an end-of-options marker. If the prompt begins with a dash (e.g., `--help`), the codex CLI may interpret it as a flag rather than data, altering execution or causing unintended behavior.
- Sensitive Information Exposure via Process Arguments: User/LLM-controlled prompt content is sent as a command-line argument to the codex binary. On many systems, process command-line arguments are visible to other local users (e.g., via `ps` or `/proc`), potentially exposing sensitive data contained in prompts.
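The argument-injection finding follows the standard GNU/POSIX convention that `--` ends option parsing. A minimal sketch of the failure mode, using Python's `argparse` as a stand-in for the codex CLI (this is an illustration, not the codex parser itself):

```python
import argparse

# Stand-in parser: like most CLIs, argparse treats a leading-dash token
# as an option unless an end-of-options "--" sentinel precedes it.
parser = argparse.ArgumentParser()
parser.add_argument("prompt")

# Without the sentinel, parser.parse_args(["--help"]) would be parsed as
# the help flag: it prints usage and exits instead of storing the text.

# With "--", everything after it is treated as positional data:
args = parser.parse_args(["--", "--help"])
print(args.prompt)  # the literal string "--help"
```

The same sentinel is what the suggested remediation below inserts before `prompt` in the `subprocess.run` argument list.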
```python
try:
    result = subprocess.run(
        ["codex", "exec", "--model", self.model, prompt],  # noqa: S603, S607
```
The prompt is passed directly as a positional argument to an external CLI without an end-of-options marker. If the prompt begins with a dash (e.g., "--help"), the codex CLI may interpret it as a flag (argument injection), altering behavior or causing unintended execution paths.
```python
result = subprocess.run(
    ["codex", "exec", "--model", self.model, prompt],  # noqa: S603, S607
    capture_output=True,
    text=True,
    timeout=self.timeout_seconds,
    check=False,
)
```

Remediation: Add the conventional `--` end-of-options sentinel before the prompt to ensure it is always treated as data, not as flags.
```diff
-    ["codex", "exec", "--model", self.model, prompt],  # noqa: S603, S607
+    ["codex", "exec", "--model", self.model, "--", prompt],  # noqa: S603, S607
```
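The `--` sentinel addresses the injection finding but not the second one: the prompt still appears in the process table. One common mitigation is to deliver the prompt on stdin instead of argv. Whether `codex exec` accepts its prompt on stdin is an assumption here, so this sketch uses `cat` as a stand-in for a CLI that reads stdin:

```python
import subprocess

prompt = "--help plus sensitive context that should not appear in ps output"

# Sketch: pass the prompt via stdin so it never shows up in
# /proc/<pid>/cmdline or `ps` output. `cat` stands in for the real CLI;
# whether `codex exec` supports reading the prompt from stdin is an
# assumption that would need to be verified against the codex docs.
result = subprocess.run(
    ["cat"],
    input=prompt,
    capture_output=True,
    text=True,
    timeout=30,
    check=False,
)
print(result.stdout)
```

Unlike command-line arguments, stdin is private to the parent and child processes, so this also removes the leading-dash ambiguity entirely: there are no positional arguments left to misparse.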
For more details, see the finding in Corridor.
Provide feedback: Reply with whether this is a valid vulnerability or false positive to help improve Corridor's accuracy.
Sorry about this; I had not meant for this to open a PR against the upstream. I was just experimenting with codex support on an experimental fork.
Fixes #
Read the full contributing guidelines: https://docs.langchain.com/oss/python/contributing/overview
If you paste a large, clearly AI-generated description here, your PR may be IGNORED or CLOSED!
Thank you for contributing to Deep Agents! Follow these steps to have your pull request considered as ready for review.
Run `make format`, `make lint`, and `make test` from the root of the package(s) you've modified. Additional guidelines:
Do not modify `uv.lock` files or add dependencies to `pyproject.toml` files (even optional ones) unless you have explicit permission to do so from a maintainer.
Social handles (optional)
Twitter: @
LinkedIn: https://linkedin.com/in/