8 changes: 6 additions & 2 deletions docs/config.md
Expand Up @@ -728,13 +728,17 @@ show_raw_agent_reasoning = true # defaults to false

## model_context_window

The size of the context window for the model, in tokens.
The size of the context window for the model, in tokens. This is the maximum number of **input** tokens (your prompts, conversation history, and context).

In general, Codex knows the context window for the most common OpenAI models. However, if you are using a new model with an older version of the Codex CLI, you can set `model_context_window` to tell Codex what value to use when determining how much context is left during a conversation.

> **Note:** For GPT-5-Codex, the input context window is 272,000 tokens, and the maximum output tokens is 128,000, for a total token budget of 400,000 tokens. When you run `/status`, the "Context window" field shows your **input** token limit (272k), not the total budget.
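For example, to set this explicitly in your Codex `config.toml` (the value below matches GPT-5-Codex's input limit; substitute the correct figure for your model):

```toml
# Override the input context window, in tokens.
model_context_window = 272000
```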

## model_max_output_tokens

This is analogous to `model_context_window`, but for the maximum number of output tokens for the model.
The maximum number of output tokens the model can generate in a single response. This is separate from the input context window.

For example, GPT-5-Codex has a 272,000 token input context window and a 128,000 token output limit, giving it a combined token budget of 400,000 tokens total.
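As with `model_context_window`, this can be set explicitly in `config.toml` (the value below matches GPT-5-Codex's output limit; substitute the correct figure for your model):

```toml
# Cap the number of output tokens per response.
model_max_output_tokens = 128000
```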

## project_doc_max_bytes

8 changes: 8 additions & 0 deletions docs/faq.md
Expand Up @@ -21,3 +21,11 @@ By default, Codex can modify files in your current working directory (Auto mode)
### Does it work on Windows?

Running Codex directly on Windows may work but is not officially supported. We recommend using [Windows Subsystem for Linux (WSL2)](https://learn.microsoft.com/en-us/windows/wsl/install).

### Why does `/status` show 272k context window when the platform docs say 400k?

The `/status` command shows the **input** context window (272,000 tokens for GPT-5-Codex), which is the maximum size for your prompts, conversation history, and context.

GPT-5-Codex has a separate **output** token limit of 128,000 tokens for responses. The total token budget is 400,000 tokens (272k input + 128k output), which is what the [platform documentation](https://platform.openai.com/docs/models/gpt-5-codex) refers to.

See [`model_context_window`](./config.md#model_context_window) and [`model_max_output_tokens`](./config.md#model_max_output_tokens) in the configuration docs for more details.