docs: clarify context window vs total token budget #4858
## Summary

Resolves the confusion reported in #4728 about whether GPT-5-Codex has a 272k or 400k context window.

After analyzing the codebase (`codex-rs/core/src/openai_model_info.rs`), I confirmed that:

- the `model_context_window` value represents input tokens (272,000 for GPT-5-Codex)
- `model_max_output_tokens` is output tokens (128,000)

## Changes

- Updated `docs/config.md` to clarify that `model_context_window` refers to input tokens
- Updated the `model_max_output_tokens` description to explain that it is separate from the input budget
- Explained why `/status` shows 272k while the platform docs say 400k

## Testing
Documentation-only changes; no code was modified.
Fixes #4728
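For anyone arriving from #4728, the accounting behind the two figures can be sketched in a few lines. This is an illustration only, not code from the repository; the variable names mirror the config keys discussed above, and the figures are the GPT-5-Codex values this PR documents:

```python
# Token-budget accounting this PR documents (GPT-5-Codex figures, per
# codex-rs/core/src/openai_model_info.rs). Names mirror the config keys
# in docs/config.md; this is an illustrative sketch, not repo code.

model_context_window = 272_000     # input tokens -- what /status reports
model_max_output_tokens = 128_000  # separate budget for generated output

# The "400k" in the platform docs is the combined total of both budgets.
total_token_budget = model_context_window + model_max_output_tokens
print(total_token_budget)  # 400000
```

In other words, the platform's 400k headline number and the 272k shown by `/status` are both correct; they just measure different things.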