Conversation

@rahulrajaram rahulrajaram commented Aug 10, 2025

This change addresses the following two issues:

- openai#1968
- openai#2012

This change redesigns the chat streaming model to avoid committing
partial rows into scrollback. While streaming, partial content is shown
only in a transient live overlay above the composer. When a stream
completes, the assistant answer and the reasoning are each rendered as
a single Markdown block in the conversation history, preceded by clear
section headers for readability. In addition, the live overlay behavior
and preview width are now configurable via CLI flags, and these options
are plumbed through the TUI to the ChatWidget.
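The stream-then-commit flow described above can be sketched as follows. This is only an illustration of the design under stated assumptions, not the PR's actual code; the type and method names (`StreamState`, `push_answer_delta`, `finalize`) are hypothetical:

```rust
/// Accumulates streamed deltas; nothing is committed to scrollback
/// until the stream finishes. While streaming, only the transient
/// overlay reads the partial buffers.
struct StreamState {
    answer: String,
    reasoning: String,
}

impl StreamState {
    fn new() -> Self {
        Self { answer: String::new(), reasoning: String::new() }
    }

    /// Called per answer delta; partial text stays out of history.
    fn push_answer_delta(&mut self, delta: &str) {
        self.answer.push_str(delta);
    }

    /// Called per reasoning delta; same rule applies.
    fn push_reasoning_delta(&mut self, delta: &str) {
        self.reasoning.push_str(delta);
    }

    /// Called once at finalize: each part becomes a single block in
    /// history, preceded by a section header for readability.
    fn finalize(self) -> Vec<String> {
        let mut history = Vec::new();
        if !self.reasoning.is_empty() {
            history.push(format!("thinking\n{}", self.reasoning));
        }
        if !self.answer.is_empty() {
            history.push(format!("codex\n{}", self.answer));
        }
        history
    }
}
```

The key property is that `finalize` is the only path into history, so scrollback never contains partially rendered Markdown.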

Summary:

- Add CLI flags: --max-cols, --live-rows; plumb through App → ChatWidget.
- Implement live ring overlay with soft‑wrap by default; hard‑cap via --max-cols.
- Redesign streaming: avoid committing partial rows; use overlay during stream.
- Finalize: render assistant answer and reasoning as single markdown blocks.
- Emit section headers (thinking/codex) at finalize for tidy history.
- Default to no preview width cap; remove hard 80‑col pin.
- Respect preview width for tool previews when --max-cols is set.
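The live ring overlay and width-cap behavior from the summary can be sketched as below, assuming simple character-based soft-wrapping; `LiveRing`, `wrap_width`, and the field names are illustrative, not the actual ChatWidget API:

```rust
use std::collections::VecDeque;

/// Sketch of a "live ring" overlay: soft-wrap lines to the effective
/// width and keep only the newest `live_rows` wrapped rows.
struct LiveRing {
    rows: VecDeque<String>,
    live_rows: usize,
}

impl LiveRing {
    fn new(live_rows: usize) -> Self {
        Self { rows: VecDeque::new(), live_rows }
    }

    /// Effective wrap width: the terminal width by default (no cap),
    /// hard-capped when a --max-cols value is supplied.
    fn wrap_width(term_cols: usize, max_cols: Option<usize>) -> usize {
        match max_cols {
            Some(cap) => term_cols.min(cap),
            None => term_cols,
        }
    }

    /// Soft-wrap a logical line into width-sized rows, then trim the
    /// ring so only the last `live_rows` rows remain visible.
    fn push_line(&mut self, line: &str, width: usize) {
        let chars: Vec<char> = line.chars().collect();
        if chars.is_empty() {
            self.push_row(String::new());
            return;
        }
        for chunk in chars.chunks(width.max(1)) {
            self.push_row(chunk.iter().collect());
        }
    }

    fn push_row(&mut self, row: String) {
        self.rows.push_back(row);
        while self.rows.len() > self.live_rows {
            self.rows.pop_front();
        }
    }
}
```

A real implementation would wrap on grapheme or word boundaries rather than raw `char`s, but the ring-trimming and width-cap logic is the same idea.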
github-actions bot commented Aug 10, 2025

All contributors have signed the CLA ✍️ ✅
Posted by the CLA Assistant Lite bot.

@rahulrajaram
Author

I have read the CLA Document and I hereby sign the CLA

github-actions bot added a commit that referenced this pull request Aug 11, 2025

ethe commented Aug 11, 2025

> This change redesigns the chat streaming model to avoid committing
> partial rows into scrollback. While streaming, partial content is shown
> only in a transient live overlay above the composer. When a stream
> completes, the assistant answer and the reasoning are each rendered as
> a single Markdown block in the conversation history, preceded by clear
> section headers for readability. In addition, the live overlay behavior
> and preview width are now configurable via CLI flags, and these options
> are plumbed through the TUI to the ChatWidget.

Considering the hard wrapping in your PR messages, I have a reasonable suspicion that you used the codex CLI to automatically generate the message and copy-pasted it here 😉

@rahulrajaram
Copy link
Author

Lol @ethe, got me! I edit commit messages and word-wrap them using Vim.
