feat: sticky header with shared model selector (#40)
Merged
Conversation
…itions

- Add lastUsedModel to ModelProvider backed by localStorage, updating the model priority chain: chat model > last used > first favorite > default
- Cache favoriteModels in localStorage for faster hydration
- Clear local model override on chat navigation to prevent model bleed
- Integrate lastUsedModel into the multi-chat flow
- Simplify all dialog/popover/menu/sheet/select animations to opacity-only at 100ms (remove scale transforms, reduce from 150-250ms)
- Add animated prop to Dialog for disabling transitions (used by CommandDialog)
- Update plan with audio roadmap, Rive/PixelLab exploration, and OpenClaw text-initiated chat evaluation

Co-authored-by: Cursor <cursoragent@cursor.com>
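The priority chain above can be sketched as a small resolver. This is a hypothetical illustration: `resolveModel` and its parameter names are assumptions, not the actual `ModelProvider` API.

```typescript
// Illustrative sketch of the model priority chain:
// chat model > last used > first favorite > default.
type ModelId = string;

function resolveModel(opts: {
  chatModel?: ModelId | null;     // model pinned to the current chat, if any
  lastUsedModel?: ModelId | null; // persisted in localStorage
  favoriteModels: ModelId[];      // cached in localStorage for fast hydration
  defaultModel: ModelId;          // global fallback
}): ModelId {
  return (
    opts.chatModel ??
    opts.lastUsedModel ??
    opts.favoriteModels[0] ??
    opts.defaultModel
  );
}
```

Each fallback only applies when every higher-priority source is absent, which is why clearing the local override on chat navigation (as in the commit above) is enough to stop model bleed between chats.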
…nd expose OpenAI reasoning summaries

- Replace `marked` with `unified`/`remark-parse` for block splitting so parsing and rendering use the same remark pipeline, fixing subtle parse divergences
- Add `remark-math` + `rehype-katex` for LaTeX math rendering (inline & display)
- Add custom `remark-unwrap-link-parens` plugin to strip wrapping parens from links
- Show descriptive link text in `LinkMarkdown` instead of always showing the domain
- Expand Shiki syntax highlighting to 30+ languages (C, C++, Go, Ruby, Rust, etc.)
- Add `reasoningSummary: "auto"` to OpenAI provider options so reasoning text is actually returned instead of empty deltas
- Add opaque reasoning support: non-expandable label when reasoning exists but the model doesn't expose visible text
- Simplify `useReasoningPhase` logic and `useLoadingState` reasoning detection
- Add footnote and KaTeX styles to globals.css
- Update provider-reasoning-config skill with reasoningSummary docs
- Add prompt delivery default to AGENTS.md and CLAUDE.md
- Update plan.md with edit/resend bug and renumber items

Co-authored-by: Cursor <cursoragent@cursor.com>
… plan items #43-#46

- Expand ChatGPT prompt styles reference with shadow-short, Tailwind shadow internals, and verified composer surface computed styles (light/dark)
- Add chat widgets research plan
- Add chrome-devtools-mcp and web-inspector-mcp skills
- Add plan items: motion preference (#43), rich-text composer (#44), voice/dictation (#45), mobile file inputs (#46) with critical path updates

Co-authored-by: Cursor <cursoragent@cursor.com>
Move research/ and troubleshooting/ out of context/ subdirectory, remove obsolete decisions/ and web-inspector-mcp skill, update chrome-devtools-mcp skill, and add design-token-extraction skill. Co-authored-by: Cursor <cursoragent@cursor.com>
Relocate chat-widgets, prompt-input, and sidebar reference files into .agents/design/chatgpt-reference/ for clearer organization. Co-authored-by: Cursor <cursoragent@cursor.com>
…d add design reference docs Update cross-references after the .agents/ directory restructure (.agents/context/research/ → .agents/research/, etc.), add ChatGPT HTML structure and design token references, and add html-structure-extraction skill. Co-authored-by: Cursor <cursoragent@cursor.com>
…traction skills Move browser extraction research from memories/ and research/ into .agents/context/research/browser-extraction/. Add composer shadow design tokens to globals.css, composer border/shadow analysis, AI SDK video generation research, and two new extraction skills. Co-authored-by: Cursor <cursoragent@cursor.com>
- Change --spacing-app-header from 56px to 52px
- Add --header-z-index: 20 and --spacing-scroll-padding-top: 0px to :root
- Remove header height from the --spacing-scroll-area calculation (the header is now sticky inside the scroll context, so it no longer reduces the viewport)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Migrate the header from fixed to sticky positioning and restructure it into a 3-column layout (left/center/right) to align with ChatGPT's header architecture.

Key changes:
- Header: fixed → sticky (z-index 50 → 20, bg-transparent → bg-background)
- Layout: 2-section → 3-column with centered model selector
- Viewport: h-dvh → h-svh on root layout
- Container: @container → @container/main (named)
- Padding: px-4 → px-2 (8px, like ChatGPT)
- New ModelSelectorHeader component for header-level model selection
- Removed unused useSidebar/isCollapsed from header

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Remove gradient masks and reduce top padding (pt-20 → pt-4) in both conversation and multi-conversation components. With the sticky header now inside the scroll context, fixed-header compensation is no longer needed.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Documents testing results for the sticky header migration across four areas:

- Responsive breakpoints (500px, 768px, 1024px, 1440px): all pass
- Accessibility (keyboard nav, screen reader, color contrast): 24/25 pass
- Multi-model mode compatibility: all pass via code analysis
- Z-index hierarchy and dropdown layering: confirmed correct

One minor pre-existing issue found: the light mode muted-foreground contrast ratio (3.73:1) is below the WCAG AA threshold. Not introduced by this migration.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
- Move model selector from center to left-aligned (matches ChatGPT)
- Fix input cropping by converting the main element to a flex column layout

Root causes:
1. The center section used justify-center instead of justify-start
2. The main element was a scroll container (overflow-y-auto) with h-svh, but the Chat component used h-full (= h-svh), so total content height was header (52px) + svh, pushing the input 52px below the viewport

The fix makes main a flex column with overflow-hidden and adds a flex-1 min-h-0 wrapper for children, so they fill exactly svh minus the header height (52px), preventing any content from extending below the viewport.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
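The height arithmetic behind the bug and the fix can be stated explicitly. This is a sketch: the 52px constant comes from the commit above; the helper names are illustrative.

```typescript
// Height math for the input-cropping bug and its fix (illustrative).
const HEADER_HEIGHT = 52; // sticky header, now in-flow inside the scroll context

// Before: main was h-svh and Chat used h-full (= svh), so total in-flow
// content was header + svh, extending 52px below the viewport.
const totalBefore = (svh: number) => HEADER_HEIGHT + svh;

// After: the flex-1 min-h-0 wrapper sizes children to svh minus the header,
// so header + children fill the viewport exactly.
const childHeightAfter = (svh: number) => svh - HEADER_HEIGHT;
const totalAfter = (svh: number) => HEADER_HEIGHT + childHeightAfter(svh);
```

The general rule this illustrates: once a header moves in-flow, any sibling sized to the full container height overflows by exactly the header height unless the sibling's height is reduced to the remaining space.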
- Remove flex-1 from the left and right sections (use natural width)
- Keep flex-1 only on the center section (fills remaining space)
- Left section is now 0px when empty (sidebar on desktop)
- Model selector now appears at the left edge (~268px), matching ChatGPT

ChatGPT structure:
- Left: flex-grow 0, width 0px (empty)
- Center: flex-grow 1 (fills space)
- Right: flex-grow 0, width 136px (content)

Previous (wrong): all three sections had flex-1 (equal thirds)
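The flex-grow distribution behind this can be checked with a small model. The helper is hypothetical; the 136px right-section width is from the measurements above.

```typescript
// Minimal model of CSS flex-grow free-space distribution: each section gets
// its base width plus a share of the leftover space proportional to its grow.
function distributeFlex(
  container: number,
  sections: { base: number; grow: number }[],
): number[] {
  const used = sections.reduce((sum, s) => sum + s.base, 0);
  const free = container - used;
  const totalGrow = sections.reduce((sum, s) => sum + s.grow, 0);
  return sections.map(
    (s) => s.base + (totalGrow > 0 ? (free * s.grow) / totalGrow : 0),
  );
}
```

With the ChatGPT structure (left grow 0 / width 0, center grow 1, right grow 0 / width 136) the center absorbs all remaining space and the model selector lands at the left edge; with all three sections at flex-1, the same math yields equal thirds regardless of content, which is exactly the misinterpretation the commit corrects.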
Add two skills to prevent reference implementation mistakes:

1. Reference Implementation Verification
   - Systematic measurement verification against reference designs
   - Prevents flex-grow misinterpretation (3-column ≠ equal thirds)
   - Catches position discrepancies (264px vs 653px)
   - Creates verification checklists from extracted data

2. Layout Math Verification
   - Audits height calculations for viewport unit containers
   - Prevents overflow bugs (in-flow header + h-full = overflow)
   - Required when changing positioning schemes (fixed → sticky)
   - Provides verification scripts and formulas

These skills address the root causes of the bugs found during the sticky header migration (model selector position, input cropping).
Previously, the model selector was hidden in multi-model mode, leaving the header center section empty. This created an almost-invisible header with only minimal content in the right section.

The fix removes the `!isMultiModelEnabled` condition, making the model selector always visible when logged in. This matches ChatGPT's design, where the model selector is always present in the header center section. In multi-model mode, the header selector shows the last used model while the input area handles multi-model selection; this provides visual consistency and prevents the empty-header issue.

Fixes a regression where the header appeared invisible/missing in multi-model mode due to empty content sections.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
The header model selector was broken in multi-model mode because the selectedModelIds state lived locally in MultiChat, inaccessible to the header.

Solution: lift the state to a shared MultiModelSelectionProvider context that wraps both Header and MultiChat via LayoutApp. Now both the header selector (mode="multi") and the input selector read/write the same selectedModelIds, staying in sync.

Architecture:
- MultiModelSelectionProvider (new): shared selectedModelIds state
- LayoutApp wraps children with the provider
- ModelSelectorHeader adapts its mode based on the multiModelEnabled preference
- MultiChat consumes state from the context instead of local state

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
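The lifted-state idea can be illustrated framework-free. The real implementation is a React context provider per the commit above; this store sketch only demonstrates why a single shared owner keeps both consumers in sync, and every name in it is an assumption.

```typescript
// Framework-free illustration of lifting selectedModelIds to a shared owner:
// both the header selector and MultiChat read from and write to the same
// store, so neither can drift out of sync.
type Listener = (ids: string[]) => void;

function createMultiModelSelectionStore(initial: string[] = []) {
  let selectedModelIds = initial;
  const listeners = new Set<Listener>();
  return {
    getSelectedModelIds: () => selectedModelIds,
    setSelectedModelIds(ids: string[]) {
      selectedModelIds = ids;
      listeners.forEach((l) => l(ids)); // notify header and MultiChat alike
    },
    subscribe(l: Listener) {
      listeners.add(l);
      return () => listeners.delete(l); // unsubscribe
    },
  };
}
```

The React-context version does the same thing with `useState` in the provider and `useContext` in each consumer; the essential move in both is that the state has exactly one owner above both consumers.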
Model selection now lives exclusively in the header via ModelSelectorHeader, which adapts to single- or multi-model mode. The prompt input no longer needs its own selector.

Changes:
- ChatInput: remove ModelSelector and the onSelectModel prop (keep selectedModel, still needed for search/file-upload detection)
- MultiChatInput: remove ModelSelector and onSelectedModelIdsChange (keep selectedModelIds, still needed for send-button validation)
- chat.tsx: remove handleModelChange from input props
- multi-chat.tsx: remove setSelectedModelIds from input props
- Update test fixtures to match the new prop signatures

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add no-timeline-estimates rule and no-branch-creation rule to AGENTS.md and CLAUDE.md. Include planning artifacts for sticky header solution B: implementation plan, dependency analysis, z-index hierarchy, test plan, and ChatGPT header layout analysis reference. Co-authored-by: Cursor <cursoragent@cursor.com>
1 issue found across 153 files
Note: This PR contains a large number of files. cubic only reviews up to 75 files per PR, so some files may not have been reviewed.
Prompt for AI agents (all issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name=".agents/plans/solution-b-implementation-plan.md">
<violation number="1" location=".agents/plans/solution-b-implementation-plan.md:4">
P2: Remove timeline/effort estimates from this plan; project guidelines explicitly forbid durations and time estimates in plans.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
@@ -0,0 +1,1090 @@
# Solution B: Hybrid Sticky Header — Implementation Plan
P2: Remove timeline/effort estimates from this plan; project guidelines explicitly forbid durations and time estimates in plans.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At .agents/plans/solution-b-implementation-plan.md, line 4:
<comment>Remove timeline/effort estimates from this plan; project guidelines explicitly forbid durations and time estimates in plans.</comment>
<file context>
@@ -0,0 +1,1090 @@
+# Solution B: Hybrid Sticky Header — Implementation Plan
+
+**Target:** ChatGPT-aligned sticky header with preserved StickToBottom auto-scroll
+**Timeline:** 5-6 days
+**Risk Level:** Medium
+**Agent Model:** Opus 4.6 for all agents
</file context>
Summary

- Migrated the header from `fixed` to `sticky` positioning, eliminating scroll compensation hacks and simplifying the layout architecture
- Added a `MultiModelSelectionProvider` context so the header model selector stays in sync with multi-model chat state

Key changes

- `--header-height: 52px` variable, `h-svh` viewport units, sticky header with `z-20`
- `layout-app.tsx` wraps children in `MultiModelSelectionProvider`; main content uses `min-h-0 flex-1 overflow-hidden`
- `ModelSelectorHeader` supports both `mode="single"` and `mode="multi"`
- Removed `paddingTop`/`marginTop` offset calculations from `conversation.tsx` and `multi-conversation.tsx`
- `lib/model-store/multi-model-provider.tsx`: React Context for shared multi-model selection state (uses the React 19 render-sync pattern)

Test plan

- Type check (`tsc --noEmit`)
- Lint (`eslint .`)

🤖 Generated with Claude Code
Summary by cubic
Migrated the app header to sticky with a ChatGPT-style 3-column layout and a shared model selector that stays in sync across single and multi-model chats. Removed scroll compensation hacks and moved model selection out of the prompt input for a cleaner, consistent UI.
New Features
Refactors
Written for commit 2f98650. Summary will update on new commits.