
1 add codex exec support as an llm provider using pre existing chatgpt pro accts#2457

Closed
Schuyler Ankele (shoesCodeFor) wants to merge 3 commits into langchain-ai:main from
FloraSync:1-add-codex-exec-support-as-an-llm-provider-using-pre-existing-chatgpt-pro-accts

Conversation

@shoesCodeFor

Fixes #

Read the full contributing guidelines: https://docs.langchain.com/oss/python/contributing/overview

All contributions must be in English. See the language policy.

If you paste a large, clearly AI-generated description here, your PR may be IGNORED or CLOSED!

Thank you for contributing to Deep Agents! Follow these steps to have your pull request considered ready for review.

  1. PR title: Should follow the format: TYPE(SCOPE): DESCRIPTION
  2. PR description:
  • Write 1-2 sentences summarizing the change.
  • If this PR addresses a specific issue, please include "Fixes #ISSUE_NUMBER" in the description to automatically close the issue when the PR is merged.
  • If there are any breaking changes, please clearly describe them.
  • If this PR depends on another PR being merged first, please include "Depends on #PR_NUMBER" in the description.
  3. Run make format, make lint, and make test from the root of the package(s) you've modified.
  • We will not consider a PR unless all three pass in CI.
  4. How did you verify your code works?

Additional guidelines:

  • We ask that if you use generative AI for your contribution, you include a disclaimer.
  • PRs should not touch more than one package unless absolutely necessary.
  • Do not update the uv.lock files or add dependencies to pyproject.toml files (even optional ones) unless you have explicit permission to do so by a maintainer.

Social handles (optional)

Twitter: @
LinkedIn: https://linkedin.com/in/

Copilot AI and others added 3 commits April 5, 2026 07:10
[WIP] Add codex exec support as LLM provider in deepagents-cli
@github-actions github-actions bot added cli Related to `deepagents-cli` size: M 200-499 LOC labels Apr 5, 2026
@org-membership-reviewer org-membership-reviewer bot added new-contributor external User is not a member of the `langchain-ai` GitHub organization labels Apr 5, 2026
@github-actions
Contributor

github-actions bot commented Apr 5, 2026

This PR has been automatically closed because it does not link to an approved issue.

All external contributions must reference an approved issue or discussion. Please:

  1. Find or open an issue describing the change
  2. Wait for a maintainer to approve and assign you
  3. Add Fixes #<issue_number>, Closes #<issue_number>, or Resolves #<issue_number> to your PR description and the PR will be reopened automatically

Maintainers: reopen this PR or remove the missing-issue-link label to bypass this check.

@github-actions github-actions bot closed this Apr 5, 2026
Contributor

@corridor-security corridor-security bot left a comment


Security Issues

  • Argument Injection
    The prompt string is passed as a positional argument to an external CLI (codex exec) without an end-of-options marker. If the prompt begins with a dash (e.g., "--help"), the codex CLI may interpret it as a flag rather than data, altering execution or causing unintended behavior.

  • Sensitive Information Exposure via Process Arguments
    User/LLM-controlled prompt content is sent as a command-line argument to the codex binary. On many systems, process command-line arguments are visible to other local users (e.g., via ps or /proc), potentially exposing sensitive data contained in prompts.
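The first finding can be reproduced in miniature with Python's `argparse` standing in for the codex CLI's option parser (the parser, flag names, and model string below are illustrative, not codex's actual interface):

```python
import argparse

# Stand-in parser: one --model option plus a positional prompt, roughly
# mirroring the shape of `codex exec --model MODEL PROMPT`.
parser = argparse.ArgumentParser(prog="codex-exec-demo")
parser.add_argument("--model")
parser.add_argument("prompt")

# Without a sentinel, a dash-prefixed prompt is read as an (unknown) flag,
# so parsing fails instead of treating it as data.
try:
    parser.parse_args(["--model", "gpt-5", "--oops"])
except SystemExit:
    print("dash-prefixed prompt was misparsed as a flag")

# With the conventional "--" end-of-options marker, everything after it is
# positional data, and the same prompt parses cleanly.
args = parser.parse_args(["--model", "gpt-5", "--", "--oops"])
print(args.prompt)  # --oops
```

The same convention is what the remediation below relies on: most POSIX-style option parsers stop option processing at the first bare `--`.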


try:
    result = subprocess.run(
        ["codex", "exec", "--model", self.model, prompt],  # noqa: S603, S607

The prompt is passed directly as a positional argument to an external CLI without an end-of-options marker. If the prompt begins with a dash (e.g., "--help"), the codex CLI may interpret it as a flag (argument injection), altering behavior or causing unintended execution paths.

result = subprocess.run(
    ["codex", "exec", "--model", self.model, prompt],  # noqa: S603, S607
    capture_output=True,
    text=True,
    timeout=self.timeout_seconds,
    check=False,
)

Remediation: Add the conventional -- end-of-options sentinel before the prompt to ensure it is always treated as data, not as flags.

Suggested change
["codex", "exec", "--model", self.model, prompt], # noqa: S603, S607
["codex", "exec", "--model", self.model, "--", prompt], # noqa: S603, S607

For more details, see the finding in Corridor.

Provide feedback: Reply with whether this is a valid vulnerability or false positive to help improve Corridor's accuracy.
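Note that the `--` sentinel addresses the injection finding but not the process-argument exposure. One way to address both, assuming the codex CLI can read its prompt from stdin (worth confirming against `codex exec --help` before relying on it), is to pass the prompt via `input=` rather than argv. A sketch of the pattern, with `cat` standing in for `codex` so it runs without the binary installed:

```python
import subprocess


def run_tool(argv: list[str], prompt: str, timeout: float = 120.0) -> subprocess.CompletedProcess:
    # The prompt travels over stdin, so it never appears in the argv that
    # other local users can read via `ps` or /proc/<pid>/cmdline.
    return subprocess.run(
        argv,
        input=prompt,
        capture_output=True,
        text=True,
        timeout=timeout,
        check=False,
    )


# Stand-in demo: `cat` echoes stdin, showing the prompt round-trips intact
# even when it starts with a dash.
result = run_tool(["cat"], "--help is just data on stdin")
print(result.stdout)
```

This keeps the argv fixed and fully controlled by the caller, which sidesteps both findings at once.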

@shoesCodeFor
Author

Schuyler Ankele (shoesCodeFor) commented Apr 5, 2026

Sorry about this, I had not meant for this to open a PR against the upstream. Just experimenting with codex support on an experimental fork.

@shoesCodeFor Schuyler Ankele (shoesCodeFor) deleted the 1-add-codex-exec-support-as-an-llm-provider-using-pre-existing-chatgpt-pro-accts branch April 5, 2026 07:41

Labels

cli Related to `deepagents-cli` external User is not a member of the `langchain-ai` GitHub organization missing-issue-link new-contributor size: M 200-499 LOC


2 participants