
Commit 743903d

chore: subject docs/*.md to Prettier checks (openai#4645)
Apparently we were not running our `pnpm run prettier` check in CI, so many files that were covered by the existing Prettier check were not well-formatted. This updates CI and formats the files.
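For reference, a minimal sketch of what the restored check amounts to when run locally or in a CI step. The exact workflow wiring is not shown in this commit view; only the `pnpm run prettier` script name comes from the commit message above.

```bash
# Install workspace dependencies, then run the repo's existing Prettier check.
# This is the check the commit message says was missing from CI; it exits
# non-zero if any covered file (now including docs/*.md) is not formatted.
pnpm install --frozen-lockfile
pnpm run prettier
```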
1 parent 80f04ff · commit 743903d

17 files changed: +117 −109 lines

.github/workflows/issue-deduplicator.yml

Lines changed: 1 addition & 1 deletion
@@ -60,7 +60,7 @@ jobs:
 - When unsure, prefer returning fewer matches.
 - Include at most five numbers.
 
-output_schema: |
+output_schema: |
 {
 "type": "object",
 "properties": {

.github/workflows/issue-labeler.yml

Lines changed: 4 additions & 4 deletions
@@ -43,17 +43,17 @@ jobs:
 10. model-behavior — Undesirable LLM behavior: forgetting goals, refusing work, hallucinating environment details, quota misreports, or other reasoning/performance anomalies.
 
 Issue number: ${{ github.event.issue.number }}
-
+
 Issue title:
 ${{ github.event.issue.title }}
-
+
 Issue body:
 ${{ github.event.issue.body }}
-
+
 Repository full name:
 ${{ github.repository }}
 
-output_schema: |
+output_schema: |
 {
 "type": "object",
 "properties": {

AGENTS.md

Lines changed: 5 additions & 1 deletion
@@ -14,9 +14,10 @@ In the codex-rs folder where the rust code lives:
 - When writing tests, prefer comparing the equality of entire objects over fields one by one.
 
 Run `just fmt` (in `codex-rs` directory) automatically after making Rust code changes; do not ask for approval to run it. Before finalizing a change to `codex-rs`, run `just fix -p <project>` (in `codex-rs` directory) to fix any linter issues in the code. Prefer scoping with `-p` to avoid slow workspace‑wide Clippy builds; only run `just fix` without `-p` if you changed shared crates. Additionally, run the tests:
+
 1. Run the test for the specific project that was changed. For example, if changes were made in `codex-rs/tui`, run `cargo test -p codex-tui`.
 2. Once those pass, if any changes were made in common, core, or protocol, run the complete test suite with `cargo test --all-features`.
-When running interactively, ask the user before running `just fix` to finalize. `just fmt` does not require approval. project-specific or individual tests can be run without asking the user, but do ask the user before running the complete test suite.
+When running interactively, ask the user before running `just fix` to finalize. `just fmt` does not require approval. project-specific or individual tests can be run without asking the user, but do ask the user before running the complete test suite.
 
 ## TUI style conventions
 
@@ -32,6 +33,7 @@ See `codex-rs/tui/styles.md`.
 - Desired: vec![" └ ".into(), "M".red(), " ".dim(), "tui/src/app.rs".dim()]
 
 ### TUI Styling (ratatui)
+
 - Prefer Stylize helpers: use "text".dim(), .bold(), .cyan(), .italic(), .underlined() instead of manual Style where possible.
 - Prefer simple conversions: use "text".into() for spans and vec![].into() for lines; when inference is ambiguous (e.g., Paragraph::new/Cell::from), use Line::from(spans) or Span::from(text).
 - Computed styles: if the Style is computed at runtime, using `Span::styled` is OK (`Span::from(text).set_style(style)` is also acceptable).
@@ -43,6 +45,7 @@ See `codex-rs/tui/styles.md`.
 - Compactness: prefer the form that stays on one line after rustfmt; if only one of Line::from(vec![]) or vec![].into() avoids wrapping, choose that. If both wrap, pick the one with fewer wrapped lines.
 
 ### Text wrapping
+
 - Always use textwrap::wrap to wrap plain strings.
 - If you have a ratatui Line and you want to wrap it, use the helpers in tui/src/wrapping.rs, e.g. word_wrap_lines / word_wrap_line.
 - If you need to indent wrapped lines, use the initial_indent / subsequent_indent options from RtOptions if you can, rather than writing custom logic.
@@ -64,6 +67,7 @@ This repo uses snapshot tests (via `insta`), especially in `codex-rs/tui`, to va
 - `cargo insta accept -p codex-tui`
 
 If you don’t have the tool:
+
 - `cargo install cargo-insta`
 
 ### Test assertions

README.md

Lines changed: 0 additions & 2 deletions
@@ -1,4 +1,3 @@
-
 <p align="center"><code>npm i -g @openai/codex</code><br />or <code>brew install codex</code></p>
 
 <p align="center"><strong>Codex CLI</strong> is a coding agent from OpenAI that runs locally on your computer.
@@ -64,7 +63,6 @@ You can also use Codex with an API key, but this requires [additional setup](./d
 
 Codex CLI supports [MCP servers](./docs/advanced.md#model-context-protocol-mcp). Enable by adding an `mcp_servers` section to your `~/.codex/config.toml`.
 
-
 ### Configuration
 
 Codex CLI supports a rich set of configuration options, with preferences stored in `~/.codex/config.toml`. For full configuration options, see [Configuration](./docs/config.md).

docs/CLA.md

Lines changed: 11 additions & 6 deletions
@@ -10,18 +10,21 @@ past and future “Contributions” submitted to the **OpenAI Codex CLI projec
 ---
 
 ## 1. Definitions
+
 - **“Contribution”** – any original work of authorship submitted to the Project
 (code, documentation, designs, etc.).
 - **“You” / “Your”** – the individual (or legal entity) posting the acceptance
 comment.
 
-## 2. Copyright License
+## 2. Copyright License
+
 You grant **OpenAI, Inc.** and all recipients of software distributed by the
 Project a perpetual, worldwide, non‑exclusive, royalty‑free, irrevocable
 license to reproduce, prepare derivative works of, publicly display, publicly
 perform, sublicense, and distribute Your Contributions and derivative works.
 
-## 3. Patent License
+## 3. Patent License
+
 You grant **OpenAI, Inc.** and all recipients of the Project a perpetual,
 worldwide, non‑exclusive, royalty‑free, irrevocable (except as below) patent
 license to make, have made, use, sell, offer to sell, import, and otherwise
@@ -32,13 +35,15 @@ Contribution infringes a patent, the patent licenses granted by You to that
 entity under this CLA terminate.
 
 ## 4. Representations
-1. You are legally entitled to grant the licenses above.
+
+1. You are legally entitled to grant the licenses above.
 2. Each Contribution is either Your original creation or You have authority to
-submit it under this CLA.
-3. Your Contributions are provided **“AS IS”** without warranties of any kind.
+submit it under this CLA.
+3. Your Contributions are provided **“AS IS”** without warranties of any kind.
 4. You will notify the Project if any statement above becomes inaccurate.
 
-## 5. Miscellany
+## 5. Miscellany
+
 This Agreement is governed by the laws of the **State of California**, USA,
 excluding its conflict‑of‑laws rules. If any provision is held unenforceable,
 the remaining provisions remain in force.

docs/advanced.md

Lines changed: 19 additions & 16 deletions
@@ -31,35 +31,38 @@ env = { "API_KEY" = "value" }
 The Codex CLI can also be run as an MCP _server_ via `codex mcp-server`. For example, you can use `codex mcp-server` to make Codex available as a tool inside of a multi-agent framework like the OpenAI [Agents SDK](https://platform.openai.com/docs/guides/agents). Use `codex mcp` separately to add/list/get/remove MCP server launchers in your configuration.
 
 ### Codex MCP Server Quickstart
+
 You can launch a Codex MCP server with the [Model Context Protocol Inspector](https://modelcontextprotocol.io/legacy/tools/inspector):
 
-``` bash
+```bash
 npx @modelcontextprotocol/inspector codex mcp-server
 ```
+
 Send a `tools/list` request and you will see that there are two tools available:
 
 **`codex`** - Run a Codex session. Accepts configuration parameters matching the Codex Config struct. The `codex` tool takes the following properties:
 
-Property | Type | Description
--------------------|----------|----------------------------------------------------------------------------------------------------------
-**`prompt`** (required) | string | The initial user prompt to start the Codex conversation.
-`approval-policy` | string | Approval policy for shell commands generated by the model: `untrusted`, `on-failure`, `never`.
-`base-instructions` | string | The set of instructions to use instead of the default ones.
-`config` | object | Individual [config settings](https://github.com/openai/codex/blob/main/docs/config.md#config) that will override what is in `$CODEX_HOME/config.toml`.
-`cwd` | string | Working directory for the session. If relative, resolved against the server process's current directory.
-`include-plan-tool` | boolean | Whether to include the plan tool in the conversation.
-`model` | string | Optional override for the model name (e.g. `o3`, `o4-mini`).
-`profile` | string | Configuration profile from `config.toml` to specify default options.
-`sandbox` | string | Sandbox mode: `read-only`, `workspace-write`, or `danger-full-access`.
+| Property | Type | Description |
+| ----------------------- | ------- | -------------------------------------------------------------------------------------------------------------------------------------- |
+| **`prompt`** (required) | string | The initial user prompt to start the Codex conversation. |
+| `approval-policy` | string | Approval policy for shell commands generated by the model: `untrusted`, `on-failure`, `never`. |
+| `base-instructions` | string | The set of instructions to use instead of the default ones. |
+| `config` | object | Individual [config settings](https://github.com/openai/codex/blob/main/docs/config.md#config) that will override what is in `$CODEX_HOME/config.toml`. |
+| `cwd` | string | Working directory for the session. If relative, resolved against the server process's current directory. |
+| `include-plan-tool` | boolean | Whether to include the plan tool in the conversation. |
+| `model` | string | Optional override for the model name (e.g. `o3`, `o4-mini`). |
+| `profile` | string | Configuration profile from `config.toml` to specify default options. |
+| `sandbox` | string | Sandbox mode: `read-only`, `workspace-write`, or `danger-full-access`. |
 
 **`codex-reply`** - Continue a Codex session by providing the conversation id and prompt. The `codex-reply` tool takes the following properties:
 
-Property | Type | Description
------------|--------|---------------------------------------------------------------
-**`prompt`** (required) | string | The next user prompt to continue the Codex conversation.
-**`conversationId`** (required) | string | The id of the conversation to continue.
+| Property | Type | Description |
+| ------------------------------- | ------ | --------------------------------------------------------- |
+| **`prompt`** (required) | string | The next user prompt to continue the Codex conversation. |
+| **`conversationId`** (required) | string | The id of the conversation to continue. |
 
 ### Trying it Out
+
 > [!TIP]
 > Codex often takes a few minutes to run. To accommodate this, adjust the MCP inspector's Request and Total timeouts to 600000ms (10 minutes) under ⛭ Configuration.
 

docs/authentication.md

Lines changed: 1 addition & 1 deletion
@@ -65,4 +65,4 @@ If you run Codex on a remote machine (VPS/server) without a local browser, the l
 ssh -L 1455:localhost:1455 <user>@<remote-host>
 ```
 
-Then, in that SSH session, run `codex` and select "Sign in with ChatGPT". When prompted, open the printed URL (it will be `http://localhost:1455/...`) in your local browser. The traffic will be tunneled to the remote server.
+Then, in that SSH session, run `codex` and select "Sign in with ChatGPT". When prompted, open the printed URL (it will be `http://localhost:1455/...`) in your local browser. The traffic will be tunneled to the remote server.
