feat: display model information in session UI#203

Merged
wesm merged 15 commits into main from mcboarder289/copilot-parsing-updates
Mar 22, 2026

Conversation

@wesm
Owner

@wesm wesm commented Mar 19, 2026

Summary

  • Parse model info from Copilot session.model_change events, stamping ParsedMessage.Model on assistant messages
  • Add computeMainModel() frontend utility that derives the session's primary model client-side from loaded messages (no DB schema changes)
  • Show model badge in session header when a main model is detected
  • Show per-message model badge when a message uses a different model than the session main
  • Show model in subagent toggle header after expanding

Design decisions

  • Client-side derivation: Main model is computed from the messages array already loaded in the frontend, avoiding a main_model column on the sessions table and associated SQLite/Postgres migrations. Can be promoted to a DB column later if needed.
  • Off-main-model only: Per-message badges only appear when the model differs from the session's main model, reducing visual noise.
  • No badges in subagent expansions: The isSubagentContext guard prevents incorrect comparisons against the parent session's main model. Subagents show their model in the toggle header instead.
  • Full model strings: No vendor prefix stripping. Full model name displayed everywhere.

Test plan

  • Go: Copilot parser tests for model tracking (single model, no model, mid-session switch)
  • Frontend: computeMainModel() unit tests (empty, single, majority, tie-break, user messages ignored)
  • Frontend build passes
  • All existing Go and frontend tests pass
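
The computeMainModel() behavior covered by the unit tests above (empty, majority, alphabetical tie-break, user messages ignored) could look roughly like this — a hedged sketch, since the actual implementation and the ParsedMessage shape are not shown in this PR description:

```typescript
// Hypothetical message shape; the real frontend type may differ.
interface ParsedMessage {
  role: string;   // "user" | "assistant" | ...
  model?: string; // stamped by the backend parsers, absent on user messages
}

// Returns the most frequently used model across assistant messages,
// "" when no model data is present, ties broken alphabetically.
function computeMainModel(messages: ParsedMessage[]): string {
  const counts = new Map<string, number>();
  for (const msg of messages) {
    // User messages are ignored; only assistant messages carry a model.
    if (msg.role !== "assistant" || !msg.model) continue;
    counts.set(msg.model, (counts.get(msg.model) ?? 0) + 1);
  }
  let best = "";
  let bestCount = 0;
  for (const [model, count] of counts) {
    // Majority wins; on a tie, the alphabetically smaller name wins.
    if (count > bestCount || (count === bestCount && model < best)) {
      best = model;
      bestCount = count;
    }
  }
  return best;
}
```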

wesm and others added 11 commits March 19, 2026 12:11
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add `session.model_change` event handling to the Copilot JSONL parser.
The current model is stamped onto each assistant message via
`ParsedMessage.Model`, matching behaviour of the Claude, Codex, and
Gemini parsers.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
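The stamping logic this commit describes could be sketched as follows — in TypeScript rather than the Go the real parser uses, and with event/field names that are assumptions based on the commit message, not the actual Copilot JSONL schema:

```typescript
// Assumed event shape for the Copilot JSONL stream (illustrative only).
interface CopilotEvent {
  type: string;      // e.g. "session.model_change" or "assistant.message"
  newModel?: string; // present on model_change events
  text?: string;     // present on message events
}

interface ParsedMessage {
  role: "assistant";
  text: string;
  model: string; // "" when no model is known
}

// Tracks the current model across events and stamps it onto each
// assistant message, mirroring the described ParsedMessage.Model behavior.
function parseEvents(events: CopilotEvent[]): ParsedMessage[] {
  const out: ParsedMessage[] = [];
  let currentModel = "";
  for (const ev of events) {
    if (ev.type === "session.model_change") {
      // An empty newModel clears the active model (the reset edge
      // case a later commit adds a test for).
      currentModel = ev.newModel ?? "";
    } else if (ev.type === "assistant.message") {
      out.push({ role: "assistant", text: ev.text ?? "", model: currentModel });
    }
  }
  return out;
}
```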
- Compute the most frequently used model across assistant messages
- Returns empty string if no model data is present
- Tie-break alphabetically

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Import computeMainModel utility
- Add subagentModel derived value from lazily-loaded messages
- Display model badge in toggle header after token counts
- Style with muted text color and fixed font size

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Gate session header and per-message model badges on full message
  history being loaded (hasOlder === false) and matching session ID
- Allow Copilot model_change with empty newModel to clear active model
- Add test for model reset edge case
@roborev-ci

roborev-ci bot commented Mar 19, 2026

roborev: Combined Review (eb9e704)

Summary Verdict: The PR introduces functional state-coupling bugs and severe performance regressions (O(N^2)) in model badge rendering that require centralizing the mainModel computation.

High

  • Location: frontend/src/lib/components/content/MessageContent.svelte:42

  • Problem: computeMainModel(messagesStore.messages) is evaluated independently inside each message component instance. For a session with N messages, this causes an O(N^2) recomputation across the list every time the messages store updates (which happens continuously during token streaming), leading to severe UI lag or freezes.

    • Fix: Compute mainModel once centrally (e.g., as a derived value inside messages.svelte.ts or computed in the parent list component) and pass the single precomputed value down to each message component.

Medium


  • Location: frontend/src/lib/components/content/MessageContent.svelte:42

    • Problem: The off-main-model badge is computed from messagesStore.messages, which is the globally active session message list, not the message's own session. Any MessageContent rendered outside the active session context, or during a session switch before the new messages finish loading, can show the wrong model badge.

    • Fix: Derive the baseline model from the owning session's messages, or pass the correct session/main-model into MessageContent from the parent.

  • Location: frontend/src/lib/components/layout/SessionBreadcrumb.svelte:62
    • Problem: The breadcrumb computes mainModel using messagesStore.messages, which strictly contains the currently active session's messages. If the breadcrumb is used to render parent/ancestor sessions in a navigation trail, it will incorrectly display the active child session's main model instead of its own.
    • Fix: Determine the model using data specific to the session prop (e.g., by adding a main_model field to the Session metadata on the backend) rather than relying on the active session's message store.

Synthesized from 3 reviews (agents: codex, gemini | types: default, security)

@roborev-ci

roborev-ci bot commented Mar 19, 2026

roborev: Combined Review (a20501b)

Verdict: The pull request successfully adds model tracking and UI badges, but introduces a performance regression in message rendering and a correctness issue with subagent model derivation.

Medium

  • Location: frontend/src/lib/components/content/MessageContent.svelte#L39
    Problem: computeMainModel(messagesStore.messages) iterates over the entire messages array for every MessageContent instance. Because this component is instantiated for every visible message, this causes redundant O(N) recalculations per list item, turning a linear update into quadratic O(N²) work that can make large conversations sluggish.
    Fix: Centralize the computation of mainModel into messagesStore.svelte.ts (e.g., as a single $derived property) so it is only evaluated once per update, and reference that store property from MessageContent.

  • Location: frontend/src/lib/components/content/SubagentInline.svelte#L57
    Problem: The subagent badge derives its "main model" from the currently loaded messages subset, but it lacks a completeness guard. If the history is paginated or partially loaded, longer subagent sessions can display the wrong model in the expanded header.
    Fix: Only compute and show the badge after confirming the full subagent history is loaded, or plumb pagination metadata through this component and suppress the badge when older messages are still omitted.


Synthesized from 3 reviews (agents: codex, gemini | types: default, security)

Move computeMainModel into a $derived property on MessagesStore so it
computes once per store update. Components now read
messagesStore.mainModel instead of each computing independently.
@roborev-ci

roborev-ci bot commented Mar 19, 2026

roborev: Combined Review (98b42a1)

Verdict: All agents agree the code is clean; no issues were found.


Synthesized from 3 reviews (agents: codex, gemini | types: default, security)

@roborev-ci

roborev-ci bot commented Mar 19, 2026

roborev: Combined Review (e682131)

Verdict: The PR successfully adds model tracking and badges, but a logic flaw hides model information for paginated sessions.

Medium

  • Location: frontend/src/lib/stores/messages.svelte.ts:22
    Problem: mainModel is forced to "" whenever hasOlder is true, which suppresses the session-header badge and off-main-model message badges for any session that is still paginated. For large sessions that never fully hydrate, the new UI never shows model information at all.
    Fix: Decouple mainModel from hasOlder and compute it from a complete session-level source (or a background aggregate) so long sessions can still display stable model info without requiring the full history to be loaded.

Synthesized from 3 reviews (agents: codex, gemini | types: default, security)

Remove the hasOlder guard from mainModel derivation so that sessions
loaded progressively (>20k messages) still compute and display model
badges from the loaded messages rather than suppressing them entirely.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@roborev-ci

roborev-ci bot commented Mar 22, 2026

roborev: Combined Review (f4b1a44)

Summary Verdict: The changes successfully implement model tracking and badges, but introduce an issue with inaccurate model calculations for paginated sessions.

Medium

  • Location: frontend/src/lib/stores/messages.svelte.ts:22

Problem: mainModel is computed from this.messages whenever the active session finishes loading, but this.messages can be only a paginated subset when hasOlder is true. This makes the session header badge and the per-message "off main model" badges incorrect for large sessions whenever the dominant model is in the unloaded portion of the history.

  • Fix: Only derive mainModel when the full session history is loaded, or source the model aggregate from backend/session metadata so paginated sessions use whole-session data instead of the current page.

Synthesized from 3 reviews (agents: codex, gemini | types: default, security)

@wesm
Owner Author

wesm commented Mar 22, 2026

Accepted risk: this is a tradeoff, and esoteric cases will arise with very large sessions.

@wesm wesm merged commit e512967 into main Mar 22, 2026
10 checks passed