
feat: sticky header with shared model selector#40

Merged
batmn-dev merged 18 commits into main from feat/sticky-header-solution-b
Feb 21, 2026
Conversation

batmn-dev (Owner) commented Feb 21, 2026

Summary

  • Migrates the app header from fixed to sticky positioning, eliminating scroll compensation hacks and simplifying the layout architecture
  • Implements a ChatGPT-style 3-column header layout (sidebar trigger | model selector | action buttons) with proper flex distribution
  • Creates a shared MultiModelSelectionProvider context so the header model selector stays in sync with multi-model chat state
  • Removes the model selector from the prompt input area (both single and multi-model modes) — it now lives exclusively in the header

Key changes

  • CSS foundation: New --header-height: 52px variable, h-svh viewport units, sticky header with z-20
  • Layout refactor: layout-app.tsx wraps children in MultiModelSelectionProvider; main content uses min-h-0 flex-1 overflow-hidden
  • Header: 3-column flex layout with ModelSelectorHeader supporting both mode="single" and mode="multi"
  • Scroll compensation removed: Deleted all paddingTop / marginTop offset calculations from conversation.tsx and multi-conversation.tsx
  • New file: lib/model-store/multi-model-provider.tsx — React Context for shared multi-model selection state (uses React 19 render-sync pattern)
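The shared-state pattern behind the new provider can be sketched framework-agnostically. This is an illustrative store, not the actual `multi-model-provider.tsx` implementation (which uses React Context); the names and shapes here are assumptions for demonstration.

```typescript
// Minimal sketch of the shared-selection pattern behind
// MultiModelSelectionProvider, as a framework-agnostic store.
type Listener = (ids: string[]) => void;

function createSelectionStore(initial: string[] = []) {
  let selectedModelIds = [...initial];
  const listeners = new Set<Listener>();

  return {
    get: () => selectedModelIds,
    // Both the header selector and the chat would call set();
    // every subscriber sees the same state, keeping them in sync.
    set(ids: string[]) {
      selectedModelIds = [...ids];
      listeners.forEach((l) => l(selectedModelIds));
    },
    subscribe(l: Listener) {
      listeners.add(l);
      return () => listeners.delete(l);
    },
  };
}
```

In the real provider the same idea is expressed as context value plus setter, so `ModelSelectorHeader` (mode="multi") and `MultiChat` read and write identical state.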

Test plan

  • Typecheck passes (tsc --noEmit)
  • Lint passes (eslint .)
  • Build passes (Next.js 16.1.6 Turbopack, 25 static pages)
  • All 348 tests pass (34 test files)
  • Mobile responsive testing verified
  • Multi-model selection syncs between header and chat
  • Single-model selection persists across sessions
  • Stop button remains actionable during streaming
  • Sidebar toggle works on mobile without layout shift

🤖 Generated with Claude Code


Summary by cubic

Migrated the app header to sticky with a ChatGPT-style 3-column layout and a shared model selector that stays in sync across single and multi-model chats. Removed scroll compensation hacks and moved model selection out of the prompt input for a cleaner, consistent UI.

  • New Features

    • Sticky header (z-20) with opaque background; standardized height at 52px using a new CSS variable.
    • Header model selector via ModelSelectorHeader, visible in all modes; supports single/multi selection.
    • Shared MultiModelSelectionProvider context to sync selection state across header and chat.
  • Refactors

    • Removed prompt-input model selector and all fixed-header scroll offsets; main content is now a flex column with min-h-0, flex-1, overflow-hidden.
    • Updated viewport usage to h-svh and container naming to @container/main; reduced header padding to match ChatGPT.
    • Adjusted CSS variables and scroll area math (header no longer included in scroll calculations) and aligned z-index hierarchy so portals render above the header.

Written for commit 2f98650. Summary will update on new commits.

batmn-dev and others added 18 commits February 20, 2026 12:45
…itions

- Add lastUsedModel to ModelProvider backed by localStorage, updating
  model priority chain: chat model > last used > first favorite > default
- Cache favoriteModels in localStorage for faster hydration
- Clear local model override on chat navigation to prevent model bleed
- Integrate lastUsedModel into multi-chat flow
- Simplify all dialog/popover/menu/sheet/select animations to opacity-only
  at 100ms (remove scale transforms, reduce from 150-250ms)
- Add animated prop to Dialog for disabling transitions (used by CommandDialog)
- Update plan with audio roadmap, Rive/PixelLab exploration, and OpenClaw
  text-initiated chat evaluation

Co-authored-by: Cursor <cursoragent@cursor.com>
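The model priority chain described in the first bullet above (chat model > last used > first favorite > default) can be sketched as a single resolution function. Field names are illustrative, not the actual provider API.

```typescript
// Sketch of the model priority chain:
// chat model > last used > first favorite > default.
interface ModelSources {
  chatModel: string | null;      // model pinned on the current chat
  lastUsedModel: string | null;  // persisted in localStorage
  favoriteModels: string[];      // user's favorites, in order
  defaultModel: string;          // app-wide fallback
}

function resolveModel(s: ModelSources): string {
  return s.chatModel ?? s.lastUsedModel ?? s.favoriteModels[0] ?? s.defaultModel;
}
```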
…nd expose OpenAI reasoning summaries

- Replace `marked` with `unified`/`remark-parse` for block splitting so parsing
  and rendering use the same remark pipeline, fixing subtle parse divergences
- Add `remark-math` + `rehype-katex` for LaTeX math rendering (inline & display)
- Add custom `remark-unwrap-link-parens` plugin to strip wrapping parens from links
- Show descriptive link text in `LinkMarkdown` instead of always showing domain
- Expand Shiki syntax highlighting to 30+ languages (C, C++, Go, Ruby, Rust, etc.)
- Add `reasoningSummary: "auto"` to OpenAI provider options so reasoning text is
  actually returned instead of empty deltas
- Add opaque reasoning support: non-expandable label when reasoning exists but
  the model doesn't expose visible text
- Simplify `useReasoningPhase` logic and `useLoadingState` reasoning detection
- Add footnote and KaTeX styles to globals.css
- Update provider-reasoning-config skill with reasoningSummary docs
- Add prompt delivery default to AGENTS.md and CLAUDE.md
- Update plan.md with edit/resend bug and renumber items

Co-authored-by: Cursor <cursoragent@cursor.com>
… plan items #43-#46

- Expand ChatGPT prompt styles reference with shadow-short, Tailwind shadow
  internals, and verified composer surface computed styles (light/dark)
- Add chat widgets research plan
- Add chrome-devtools-mcp and web-inspector-mcp skills
- Add plan items: motion preference (#43), rich-text composer (#44),
  voice/dictation (#45), mobile file inputs (#46) with critical path updates

Co-authored-by: Cursor <cursoragent@cursor.com>
Move research/ and troubleshooting/ out of context/ subdirectory,
remove obsolete decisions/ and web-inspector-mcp skill, update
chrome-devtools-mcp skill, and add design-token-extraction skill.

Co-authored-by: Cursor <cursoragent@cursor.com>
Relocate chat-widgets, prompt-input, and sidebar reference files
into .agents/design/chatgpt-reference/ for clearer organization.

Co-authored-by: Cursor <cursoragent@cursor.com>
…d add design reference docs

Update cross-references after the .agents/ directory restructure
(.agents/context/research/ → .agents/research/, etc.), add ChatGPT
HTML structure and design token references, and add html-structure-extraction skill.

Co-authored-by: Cursor <cursoragent@cursor.com>
…traction skills

Move browser extraction research from memories/ and research/ into
.agents/context/research/browser-extraction/. Add composer shadow
design tokens to globals.css, composer border/shadow analysis, AI SDK
video generation research, and two new extraction skills.

Co-authored-by: Cursor <cursoragent@cursor.com>
- Change --spacing-app-header from 56px to 52px
- Add --header-z-index: 20 and --spacing-scroll-padding-top: 0px to :root
- Remove header height from --spacing-scroll-area calculation (header is
  now sticky inside scroll context, so it no longer reduces viewport)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
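The scroll-area change above reduces to a simple piece of viewport math: a fixed header sits outside the scroll context, so the scroll area had to subtract its height; a sticky header scrolls inside it, so the full viewport is available. A sketch (values mirror `--spacing-app-header: 52px`, but this is illustrative code, not the CSS):

```typescript
// Sketch of the scroll-area math before/after the sticky migration.
const HEADER_HEIGHT = 52;

function scrollAreaHeight(viewport: number, headerIsSticky: boolean): number {
  // fixed header: content must fit below it -> viewport - 52
  // sticky header: header lives inside the scroll context -> full viewport
  return headerIsSticky ? viewport : viewport - HEADER_HEIGHT;
}
```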
Migrate header from fixed to sticky positioning and restructure
into a 3-column layout (left/center/right) to align with ChatGPT's
header architecture.

Key changes:
- Header: fixed → sticky (z-index 50 → 20, bg-transparent → bg-background)
- Layout: 2-section → 3-column with centered model selector
- Viewport: h-dvh → h-svh on root layout
- Container: @container → @container/main (named)
- Padding: px-4 → px-2 (8px like ChatGPT)
- New ModelSelectorHeader component for header-level model selection
- Removed unused useSidebar/isCollapsed from header

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Remove gradient masks and reduce top padding (pt-20 -> pt-4) in both
conversation and multi-conversation components. With the sticky header
now inside the scroll context, fixed-header compensation is no longer
needed.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Documents testing results for sticky header migration across four areas:
- Responsive breakpoints (500px, 768px, 1024px, 1440px) - all pass
- Accessibility (keyboard nav, screen reader, color contrast) - 24/25 pass
- Multi-model mode compatibility - all pass via code analysis
- Z-index hierarchy and dropdown layering - confirmed correct

One minor pre-existing issue found: light mode muted-foreground contrast
ratio (3.73:1) is below WCAG AA threshold. Not introduced by this migration.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
- Move model selector from center to left-align (matches ChatGPT)
- Fix input cropping by converting main element to flex column layout

Root causes:
1. Center section used justify-center instead of justify-start
2. Main element was a scroll container (overflow-y-auto) with h-svh,
   but Chat component used h-full (= h-svh), so total content height
   was header (52px) + svh, pushing input 52px below viewport

The fix makes main a flex column with overflow-hidden, adds a flex-1
min-h-0 wrapper for children so they fill exactly svh minus header
height (52px), preventing any content from extending below viewport.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
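The input-cropping root cause above is pure height arithmetic: with the header in normal flow, a child sized to the full viewport (`h-full` under `h-svh`) overflows by exactly the header height; the fix sizes children to the remaining space (`flex-1 min-h-0`). A sketch with illustrative numbers:

```typescript
// Sketch of the overflow bug: total column height = header + child;
// anything past the viewport overflows (cropping the input).
const HEADER = 52;

function contentOverflow(viewport: number, childHeight: number): number {
  return Math.max(0, HEADER + childHeight - viewport);
}

// Buggy: child uses h-full (= full viewport) -> 52px of overflow
const buggy = contentOverflow(800, 800);
// Fixed: child fills viewport minus header -> no overflow
const fixed = contentOverflow(800, 800 - HEADER);
```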
- Remove flex-1 from left and right sections (use natural width)
- Keep flex-1 only on center section (fills remaining space)
- Left section now 0px when empty (sidebar on desktop)
- Model selector now appears at left edge (~268px) matching ChatGPT

ChatGPT structure:
  Left: flex-grow 0, width 0px (empty)
  Center: flex-grow 1 (fills space)
  Right: flex-grow 0, width 136px (content)

Previous (wrong):
  All three sections had flex-1 (equal thirds)
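The flex-grow behavior behind this fix can be sketched numerically: free space is shared in proportion to each item's `flex-grow`, so grow values of [0, 1, 0] give the center column all remaining space rather than equal thirds. A simplified model (no `flex-basis: auto` sizing, no min-width clamping):

```typescript
// Simplified sketch of CSS flex-grow distribution for the 3-column header.
function distribute(container: number, basis: number[], grow: number[]): number[] {
  const free = container - basis.reduce((a, b) => a + b, 0);
  const totalGrow = grow.reduce((a, b) => a + b, 0);
  return basis.map((b, i) =>
    totalGrow === 0 ? b : b + (free * grow[i]) / totalGrow
  );
}

// ChatGPT-style: left empty (0px), right 136px of content, center grows
const widths = distribute(1440, [0, 0, 136], [0, 1, 0]);
// center takes all remaining space; left stays 0, right stays 136
```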
Add two skills to prevent reference implementation mistakes:

1. Reference Implementation Verification
   - Systematic measurement verification against reference designs
   - Prevents flex-grow misinterpretation (3-column ≠ equal thirds)
   - Catches position discrepancies (264px vs 653px)
   - Creates verification checklists from extracted data

2. Layout Math Verification
   - Audits height calculations for viewport unit containers
   - Prevents overflow bugs (header in-flow + h-full = overflow)
   - Required when changing positioning schemes (fixed → sticky)
   - Provides verification scripts and formulas

These skills address root causes of bugs found during sticky header
migration (model selector position, input cropping).
Previously, the model selector was hidden in multi-model mode, leaving
the header center section empty. This created an almost-invisible
header with only minimal content in the right section.

The fix removes the `!isMultiModelEnabled` condition, making the model
selector always visible when logged in. This matches ChatGPT's design
where the model selector is always present in the header center section.

In multi-model mode, the header selector shows the last used model
while the input area handles multi-model selection - this provides
visual consistency and prevents the empty header issue.

Fixes regression where header appeared invisible/missing in multi-model
mode due to empty content sections.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
The header model selector was broken in multi-model mode because the
selectedModelIds state lived locally in MultiChat, inaccessible to the
header.

Solution: Lift state to a shared MultiModelSelectionProvider context
that wraps both Header and MultiChat via LayoutApp. Now both the header
selector (mode="multi") and the input selector read/write the same
selectedModelIds, staying in sync.

Architecture:
- MultiModelSelectionProvider (new) — shared selectedModelIds state
- LayoutApp wraps children with provider
- ModelSelectorHeader adapts mode based on multiModelEnabled preference
- MultiChat consumes from context instead of local state

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Model selection now lives exclusively in the header via
ModelSelectorHeader, which adapts to single or multi-model mode.
The prompt input no longer needs its own selector.

Changes:
- ChatInput: remove ModelSelector, remove onSelectModel prop
  (keep selectedModel — still needed for search/file-upload detection)
- MultiChatInput: remove ModelSelector, remove onSelectedModelIdsChange
  (keep selectedModelIds — still needed for send-button validation)
- chat.tsx: remove handleModelChange from input props
- multi-chat.tsx: remove setSelectedModelIds from input props
- Update test fixtures to match new prop signatures

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add no-timeline-estimates rule and no-branch-creation rule to AGENTS.md
and CLAUDE.md. Include planning artifacts for sticky header solution B:
implementation plan, dependency analysis, z-index hierarchy, test plan,
and ChatGPT header layout analysis reference.

Co-authored-by: Cursor <cursoragent@cursor.com>
vercel bot commented Feb 21, 2026

The latest updates on your projects. Learn more about Vercel for GitHub.

| Project | Deployment | Actions | Updated (UTC) |
| --- | --- | --- | --- |
| not-a-wrapper | Ready | Preview, Comment | Feb 21, 2026 2:03am |

greptile-apps bot commented Feb 21, 2026

Too many files changed for review. (153 files found, 100 file limit)

@batmn-dev batmn-dev merged commit abfc317 into main Feb 21, 2026
6 checks passed

cubic-dev-ai bot left a comment


1 issue found across 153 files

Note: This PR contains a large number of files. cubic only reviews up to 75 files per PR, so some files may not have been reviewed.

Prompt for AI agents (all issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name=".agents/plans/solution-b-implementation-plan.md">

<violation number="1" location=".agents/plans/solution-b-implementation-plan.md:4">
P2: Remove timeline/effort estimates from this plan; project guidelines explicitly forbid durations and time estimates in plans.</violation>
</file>

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.

@@ -0,0 +1,1090 @@
# Solution B: Hybrid Sticky Header — Implementation Plan

cubic-dev-ai bot commented Feb 21, 2026


P2: Remove timeline/effort estimates from this plan; project guidelines explicitly forbid durations and time estimates in plans.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At .agents/plans/solution-b-implementation-plan.md, line 4:

<comment>Remove timeline/effort estimates from this plan; project guidelines explicitly forbid durations and time estimates in plans.</comment>

<file context>
@@ -0,0 +1,1090 @@
+# Solution B: Hybrid Sticky Header — Implementation Plan
+
+**Target:** ChatGPT-aligned sticky header with preserved StickToBottom auto-scroll
+**Timeline:** 5-6 days
+**Risk Level:** Medium
+**Agent Model:** Opus 4.6 for all agents
</file context>
