tui: Fix markdown render and enable soft-wrap in chat output #2143
base: main
Conversation
This change addresses the following two issues:

- openai#1968
- openai#2012

This change redesigns the chat streaming model to avoid committing partial rows into scrollback. While streaming, partial content is shown only in a transient live overlay above the composer. When a stream completes, the assistant answer and the reasoning are each rendered as a single Markdown block in the conversation history, preceded by clear section headers for readability. In addition, the live overlay behavior and preview width are now configurable via CLI flags, and these options are plumbed through the TUI to the ChatWidget.

Summary:

- Add CLI flags: `--max-cols`, `--live-rows`; plumb through App → ChatWidget.
- Implement live ring overlay with soft-wrap by default; hard cap via `--max-cols`.
- Redesign streaming: avoid committing partial rows; use overlay during stream.
- Finalize: render assistant answer and reasoning as single Markdown blocks.
- Emit section headers (thinking/codex) at finalize for tidy history.
- Default to no preview width cap; remove hard 80-col pin.
- Respect preview width for tool previews when `--max-cols` is set.
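The streaming model described above can be sketched as follows. This is a minimal, hypothetical illustration, not the PR's actual code: the type and method names (`LiveStream`, `push_delta`, `overlay`, `finalize`) and the wrapping strategy are assumptions. Deltas accumulate in a buffer, only the last `live_rows` wrapped lines are shown in the transient overlay, and nothing reaches history until `finalize` emits one Markdown block with its section header.

```rust
/// Hypothetical sketch of the streaming redesign: buffer deltas,
/// show only a tail overlay, commit one block on completion.
struct LiveStream {
    buf: String,
    live_rows: usize,        // overlay height, per --live-rows
    max_cols: Option<usize>, // None = soft-wrap only; Some = hard cap, per --max-cols
}

impl LiveStream {
    fn new(live_rows: usize, max_cols: Option<usize>) -> Self {
        Self { buf: String::new(), live_rows, max_cols }
    }

    /// Append a streamed delta; nothing is committed to scrollback here.
    fn push_delta(&mut self, delta: &str) {
        self.buf.push_str(delta);
    }

    /// Wrap lines to `max_cols` when a hard cap is set, else pass through
    /// (the terminal soft-wraps unconstrained lines).
    fn wrapped_lines(&self) -> Vec<String> {
        self.buf
            .lines()
            .flat_map(|line| match self.max_cols {
                Some(w) if w > 0 => line
                    .chars()
                    .collect::<Vec<_>>()
                    .chunks(w)
                    .map(|c| c.iter().collect::<String>())
                    .collect::<Vec<_>>(),
                _ => vec![line.to_string()],
            })
            .collect()
    }

    /// The transient overlay above the composer: only the last `live_rows` lines.
    fn overlay(&self) -> Vec<String> {
        let lines = self.wrapped_lines();
        let skip = lines.len().saturating_sub(self.live_rows);
        lines[skip..].to_vec()
    }

    /// On stream completion, emit a single block for history,
    /// preceded by its section header (e.g. "codex" or "thinking").
    fn finalize(self, header: &str) -> String {
        format!("{header}\n\n{}", self.buf)
    }
}

fn main() {
    let mut s = LiveStream::new(2, Some(10));
    s.push_delta("first line\nsecond line that is long\n");
    s.push_delta("third");
    // Overlay shows only the last two wrapped rows of the partial stream.
    println!("{:?}", s.overlay());
    // Finalize commits the whole answer as one block under its header.
    println!("{}", s.finalize("codex"));
}
```

The key design point is that `overlay` is recomputed from the full buffer on every delta, so partial rows never need to be retracted from scrollback; history only ever sees the finalized block.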
All contributors have signed the CLA ✍️ ✅

I have read the CLA Document and I hereby sign the CLA
Considering the hard wrapping in your PR messages, it's beyond reasonable doubt that you used the codex CLI to automatically generate the message and copied and pasted it here 😉
Lol @ethe, got me! I edit commit messages and word-wrap them using Vim.