
Conversation

@zerob13 zerob13 commented Jan 7, 2026

Summary by CodeRabbit

  • New Features

    • Intelligent token limit calculation that dynamically adjusts based on model capabilities and reasoning features.
    • Enhanced system prompts with runtime context information (date, time, platform).
  • Improvements

    • Optimized output token allocation for better performance with different model configurations.
    • Simplified and streamlined internal prompt handling for more consistent behavior.
  • Tests

    • Added comprehensive test coverage for token calculation logic.


- Add helper function to calculate safe default maxTokens
- Apply 32k global limit as safety cap
- Reserve space for thinking budget when reasoning is supported
- Update both Chat and NewThread modes to use smart defaults
- Remove hardcoded 8192 threshold logic
- Add comprehensive tests for the calculation logic
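Based on the bullets above and the snippets quoted later in the review, the helper can be sketched roughly as follows. The function and constant names appear in the review's code references; the exact surrounding details are reconstructed, not copied from the source:

```typescript
// Reconstructed sketch of src/renderer/src/utils/maxOutputTokens.ts based on
// this PR's description and review snippets; details may differ from the source.
export const GLOBAL_OUTPUT_TOKEN_MAX = 32000

export type SafeMaxTokensOptions = {
  modelMaxTokens: number
  thinkingBudget?: number
  reasoningSupported: boolean
}

export function calculateSafeDefaultMaxTokens(options: SafeMaxTokensOptions): number {
  const { modelMaxTokens, thinkingBudget, reasoningSupported } = options
  // Apply the 32k global limit as a safety cap over the model's own maximum.
  const modelCap = Math.min(modelMaxTokens, GLOBAL_OUTPUT_TOKEN_MAX)
  // Reserve space for the thinking budget when reasoning is supported.
  if (reasoningSupported && thinkingBudget !== undefined && thinkingBudget > 0) {
    return Math.max(0, modelCap - thinkingBudget)
  }
  return modelCap
}
```

For example, a 200k-token model with reasoning enabled and a 20k thinking budget yields min(200000, 32000) - 20000 = 12000, the value the test-description mismatch comment later in this review expects.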

coderabbitai bot commented Jan 7, 2026

📝 Walkthrough

Walkthrough

The PR refactors system prompt construction to be data-driven, passing context options to a unified enhancement function and removing browser context dependencies from agent prompts. It also introduces safe token limit calculations that respect model and reasoning constraints, abstracts model query handling behind a new handle wrapper, and auto-opens DevTools for tabs in development mode.

Changes

  • System Prompt Refactoring (src/main/presenter/agentPresenter/message/messageBuilder.ts, src/main/presenter/agentPresenter/utility/promptEnhancer.ts):
    Consolidated prompt enhancement logic: removed the browser context builder and workspace-dependent branching, replacing them with a single enhanceSystemPromptWithDateTime call that accepts context options (isImageGeneration, isAgentMode, agentWorkspacePath). Updated the promptEnhancer API from a boolean flag to an options object, added platform and workspace context computation, introduced formatCurrentDateTime() and formatPlatformName() helpers, and modified control flow to compute runtime blocks conditionally.
  • Token Limit Calculation (src/renderer/src/utils/maxOutputTokens.ts, test/renderer/utils/maxOutputTokens.test.ts, src/renderer/src/components/NewThread.vue, src/renderer/src/components/chat-input/composables/usePromptInputConfig.ts):
    New utility module for safe token calculation with a global cap (32000), factoring in model limits, thinking budget, and reasoning support. Replaced hard-coded token values (4096, 8192) across Vue components with dynamic calculateSafeDefaultMaxTokens, clamping to configured maximums. Added comprehensive test coverage for edge cases and real-world scenarios.
  • Model Query Abstraction (src/renderer/src/stores/modelStore.ts):
    Introduced a ModelQueryHandle<TData> wrapper abstraction for query entries with a derived data ref and standardized refresh/refetch helpers. Replaced direct UseQueryEntry storage with memoized query handles in the providerModelQueries, customModelQueries, and enabledModelQueries maps. Updated three query getter signatures to return ModelQueryHandle<T> instead of raw query types.
  • Development Tools (src/main/presenter/tabPresenter.ts):
    Added DevTools auto-opening for new tabs in development mode via openDevTools with 'detach' mode after tab content loads.
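As a rough illustration of the prompt-enhancement entry point summarized above, here is a hedged sketch. The option and helper names come from this walkthrough; the block layout, the platform-name mapping, and the date format are illustrative assumptions, not the actual source:

```typescript
// Illustrative sketch of the unified prompt-enhancement entry point; names
// follow the PR walkthrough, but the details are assumptions.
export type PromptContextOptions = {
  isImageGeneration?: boolean
  isAgentMode?: boolean
  agentWorkspacePath?: string
}

export function formatCurrentDateTime(): string {
  // 'en-US' locale keeps the output predictable regardless of system settings.
  return new Date().toLocaleString('en-US', { dateStyle: 'full', timeStyle: 'short' })
}

export function formatPlatformName(platform: string = process.platform): string {
  // Map Node.js platform identifiers to human-readable names, with a fallback.
  switch (platform) {
    case 'darwin':
      return 'macOS'
    case 'win32':
      return 'Windows'
    case 'linux':
      return 'Linux'
    default:
      return platform
  }
}

export function enhanceSystemPromptWithDateTime(
  basePrompt: string,
  options: PromptContextOptions = {}
): string {
  const blocks = [
    basePrompt,
    `Current date and time: ${formatCurrentDateTime()}`,
    `Platform: ${formatPlatformName()}`
  ]
  // Runtime blocks are computed conditionally, as the summary describes.
  if (options.isAgentMode && options.agentWorkspacePath) {
    blocks.push(`Agent workspace: ${options.agentWorkspacePath}`)
  }
  return blocks.join('\n\n')
}
```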

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

Possibly related PRs

  • deepchat#1252: Modifies buildPostToolExecutionContext in messageBuilder.ts, overlapping with post-tool-execution message composition changes in this PR.

Suggested reviewers

  • deepinfect

Poem

🐰 Hops through prompts with context so bright,
Token budgets calculated just right,
Query handles abstracted with care,
DevTools pop open—debugging's fair!
Browser baggage? Tossed! Code flows clean,
The simplest refactor you've ever seen!

🚥 Pre-merge checks: ✅ 1 passed | ❌ 2 failed

❌ Failed checks (1 warning, 1 inconclusive)

  • Docstring Coverage (⚠️ Warning): Docstring coverage is 25.00%, below the required threshold of 80.00%. Run @coderabbitai generate docstrings to improve coverage.
  • Title Check (❓ Inconclusive): The title 'better system prompt' is vague and doesn't clearly convey the specific changes made across multiple files, including prompt enhancement, token calculation, tab DevTools behavior, and store refactoring. Consider a more specific title that captures the main focus, such as 'refactor: enhance system prompt with runtime context and improve token calculations', or break the work into multiple PRs if the changes address distinct concerns.

✅ Passed checks (1 passed)

  • Description Check (✅ Passed): Check skipped because CodeRabbit's high-level summary is enabled.





@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
src/renderer/src/stores/modelStore.ts (3)

223-223: Non-English log message violates coding guidelines.

As per coding guidelines, all logs and comments must be in English. This Chinese error message should be translated.

🔧 Proposed fix
-      console.error(`读取模型配置失败: ${providerId}/${model.id}`, error)
+      console.error(`Failed to read model config: ${providerId}/${model.id}`, error)

296-296: Non-English log message violates coding guidelines.

This Chinese error message should be translated to English.

🔧 Proposed fix
-      console.error(`刷新自定义模型失败: ${providerId}`, error)
+      console.error(`Failed to refresh custom models: ${providerId}`, error)

434-434: Non-English log message violates coding guidelines.

This Chinese error message should be translated to English.

🔧 Proposed fix
-      console.error(`刷新标准模型失败: ${providerId}`, error)
+      console.error(`Failed to refresh standard models: ${providerId}`, error)
🤖 Fix all issues with AI agents
In @src/main/presenter/tabPresenter.ts:
- Line 219: Replace the Chinese inline comment "// DevTools 不再自动打开(避免在 macOS 全屏时产生额外窗口/空间的异常体验)" with an English one, e.g. "// DevTools no longer open automatically (avoids extra window/space issues in macOS fullscreen)", so all comments follow the project's English-only guideline.
- Lines 219-222: The comment above the is.dev check is misleading: it says DevTools no longer open automatically, but the code actually opens them in development. Update the comment immediately above view.webContents.openDevTools({ mode: 'detach' }) to state that in development (guarded by is.dev) DevTools open in detached mode to avoid macOS fullscreen creating extra windows/spaces.

In @test/renderer/utils/maxOutputTokens.test.ts:
- Lines 49-56: The test description does not match its assertion. The it block calls calculateSafeDefaultMaxTokens with modelMaxTokens: 200000, reasoningSupported: true, thinkingBudget: 20000 and expects 12000. Either update the description to reflect this scenario (a 20k budget yielding a 12k result), or change the inputs and expectation to match the original description (use a 6000 user-config value and assert expect(...).toBe(6000)). Locate the test by its calculateSafeDefaultMaxTokens usage and edit whichever side disagrees.
🧹 Nitpick comments (6)
src/renderer/src/stores/modelStore.ts (1)

126-139: Consider logging errors when swallowing them for better debugging.

When throwOnError is false, errors are silently caught and the current state is returned. This can hide issues during development. Consider adding optional error logging.

♻️ Proposed improvement
   const refresh = (throwOnError?: boolean) => {
     const promise = queryCache.refresh(entry)
-    return throwOnError ? promise : promise.catch(() => entry.state.value)
+    return throwOnError
+      ? promise
+      : promise.catch((error) => {
+          console.warn('[ModelStore] Query refresh failed:', error)
+          return entry.state.value
+        })
   }
   const refetch = (throwOnError?: boolean) => {
     const promise = queryCache.fetch(entry)
-    return throwOnError ? promise : promise.catch(() => entry.state.value)
+    return throwOnError
+      ? promise
+      : promise.catch((error) => {
+          console.warn('[ModelStore] Query refetch failed:', error)
+          return entry.state.value
+        })
   }
src/main/presenter/agentPresenter/utility/promptEnhancer.ts (1)

1-1: Consider exporting PlatformName for type consistency.

If consumers need to reference this type (e.g., for testing or type annotations), exporting it would be beneficial. Currently, it's internal-only.

src/main/presenter/agentPresenter/message/messageBuilder.ts (1)

293-298: Consider: Non-English prompt string for tool call fallback.

This Chinese prompt may be intentional for user experience, but if the codebase targets English-speaking LLMs or international audiences, consider providing an English version or making this localizable.

As per coding guidelines: "Use English for logs and comments in TypeScript/JavaScript code" - though this is a prompt string rather than a comment/log.

src/renderer/src/utils/maxOutputTokens.ts (1)

9-29: LGTM! Function logic is correct and well-documented.

The implementation correctly:

  • Applies the global cap to model max tokens
  • Reserves space for thinking budget when reasoning is supported
  • Returns appropriate values for all edge cases

The JSDoc clearly explains the function's purpose and behavior.

♻️ Optional: Remove redundant safety check

Line 23's Math.max(0, thinkingBudget) is redundant since line 22 already ensures thinkingBudget > 0. Consider simplifying:

   if (reasoningSupported && thinkingBudget !== undefined && thinkingBudget > 0) {
-    const safeThinkingBudget = Math.max(0, thinkingBudget)
-    const textTokens = Math.max(0, modelCap - safeThinkingBudget)
+    const textTokens = Math.max(0, modelCap - thinkingBudget)
     return textTokens
   }

The existing code is defensive and doesn't cause issues, so this is purely optional.

src/renderer/src/components/NewThread.vue (1)

207-217: Safe default calculation is correct, but includes defensive redundancy.

The logic correctly calculates and applies the safe default max tokens. However, lines 215-217 appear to be redundant because calculateSafeDefaultMaxTokens already ensures the result doesn't exceed config.maxTokens || GLOBAL_OUTPUT_TOKEN_MAX, so the condition on line 215 should never be true.

♻️ Optional: Remove redundant clamping

The clamping check is defensive but unnecessary since safeDefaultMaxTokens is already constrained:

     const safeDefaultMaxTokens = calculateSafeDefaultMaxTokens({
       modelMaxTokens: config.maxTokens || GLOBAL_OUTPUT_TOKEN_MAX,
       thinkingBudget: config.thinkingBudget,
       reasoningSupported: Boolean(config.reasoning)
     })

     maxTokens.value = safeDefaultMaxTokens
-
-    if (maxTokens.value > (config.maxTokens || GLOBAL_OUTPUT_TOKEN_MAX)) {
-      maxTokens.value = config.maxTokens || GLOBAL_OUTPUT_TOKEN_MAX
-    }

The existing code is safe and doesn't cause issues, so this is purely optional.

src/renderer/src/components/chat-input/composables/usePromptInputConfig.ts (1)

111-125: LGTM! Token limit calculation is correctly implemented.

The logic properly:

  1. Calculates the safe default respecting model limits and thinking budgets
  2. Enforces a sensible minimum (1024 tokens)
  3. Caps by the model's maximum limit

The sequential checks ensure the final value is always within valid bounds.

♻️ Optional: Simplify constraint logic

Consider combining the min/max constraints for clarity:

     const safeDefaultMaxTokens = calculateSafeDefaultMaxTokens({
       modelMaxTokens: config.maxTokens || GLOBAL_OUTPUT_TOKEN_MAX,
       thinkingBudget: config.thinkingBudget,
       reasoningSupported: Boolean(config.reasoning)
     })

-    configMaxTokens.value = safeDefaultMaxTokens
-
-    if (configMaxTokens.value < 1024) {
-      configMaxTokens.value = 1024
-    }
-
-    if (configMaxTokensLimit.value && configMaxTokens.value > configMaxTokensLimit.value) {
-      configMaxTokens.value = configMaxTokensLimit.value
-    }
+    // Apply constraints: minimum 1024, maximum from model limit
+    const minTokens = 1024
+    const maxTokens = configMaxTokensLimit.value || safeDefaultMaxTokens
+    configMaxTokens.value = Math.min(Math.max(safeDefaultMaxTokens, minTokens), maxTokens)

The existing code is clear and works correctly, so this is purely optional.

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f2f17ae and b6bba48.

📒 Files selected for processing (8)
  • src/main/presenter/agentPresenter/message/messageBuilder.ts
  • src/main/presenter/agentPresenter/utility/promptEnhancer.ts
  • src/main/presenter/tabPresenter.ts
  • src/renderer/src/components/NewThread.vue
  • src/renderer/src/components/chat-input/composables/usePromptInputConfig.ts
  • src/renderer/src/stores/modelStore.ts
  • src/renderer/src/utils/maxOutputTokens.ts
  • test/renderer/utils/maxOutputTokens.test.ts
🧰 Additional context used
📓 Path-based instructions (21)
**/*.{ts,tsx,js,jsx}

📄 CodeRabbit inference engine (CLAUDE.md)

Use English for logs and comments in TypeScript/JavaScript code

Files:

  • test/renderer/utils/maxOutputTokens.test.ts
  • src/renderer/src/utils/maxOutputTokens.ts
  • src/renderer/src/components/chat-input/composables/usePromptInputConfig.ts
  • src/main/presenter/tabPresenter.ts
  • src/main/presenter/agentPresenter/utility/promptEnhancer.ts
  • src/renderer/src/stores/modelStore.ts
  • src/main/presenter/agentPresenter/message/messageBuilder.ts
**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

Use TypeScript with strict type checking enabled

Use OxLint for linting JavaScript and TypeScript files; ensure lint-staged hooks and typecheck pass before commits

Files:

  • test/renderer/utils/maxOutputTokens.test.ts
  • src/renderer/src/utils/maxOutputTokens.ts
  • src/renderer/src/components/chat-input/composables/usePromptInputConfig.ts
  • src/main/presenter/tabPresenter.ts
  • src/main/presenter/agentPresenter/utility/promptEnhancer.ts
  • src/renderer/src/stores/modelStore.ts
  • src/main/presenter/agentPresenter/message/messageBuilder.ts
test/**/*.{ts,tsx,js,jsx}

📄 CodeRabbit inference engine (CLAUDE.md)

Place test files in test/ directory with corresponding structure to source files

Files:

  • test/renderer/utils/maxOutputTokens.test.ts
test/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

Use Vitest as the testing framework for unit and integration tests

Files:

  • test/renderer/utils/maxOutputTokens.test.ts
**/*.{js,ts,tsx,jsx,vue,mjs,cjs}

📄 CodeRabbit inference engine (.cursor/rules/development-setup.mdc)

All logs and comments must be in English

Files:

  • test/renderer/utils/maxOutputTokens.test.ts
  • src/renderer/src/utils/maxOutputTokens.ts
  • src/renderer/src/components/NewThread.vue
  • src/renderer/src/components/chat-input/composables/usePromptInputConfig.ts
  • src/main/presenter/tabPresenter.ts
  • src/main/presenter/agentPresenter/utility/promptEnhancer.ts
  • src/renderer/src/stores/modelStore.ts
  • src/main/presenter/agentPresenter/message/messageBuilder.ts
**/*.{js,ts,tsx,jsx,mjs,cjs}

📄 CodeRabbit inference engine (.cursor/rules/development-setup.mdc)

Use OxLint as the linter

Files:

  • test/renderer/utils/maxOutputTokens.test.ts
  • src/renderer/src/utils/maxOutputTokens.ts
  • src/renderer/src/components/chat-input/composables/usePromptInputConfig.ts
  • src/main/presenter/tabPresenter.ts
  • src/main/presenter/agentPresenter/utility/promptEnhancer.ts
  • src/renderer/src/stores/modelStore.ts
  • src/main/presenter/agentPresenter/message/messageBuilder.ts
**/*.{js,ts,tsx,jsx,vue,json,mjs,cjs}

📄 CodeRabbit inference engine (.cursor/rules/development-setup.mdc)

Use Prettier as the code formatter

Files:

  • test/renderer/utils/maxOutputTokens.test.ts
  • src/renderer/src/utils/maxOutputTokens.ts
  • src/renderer/src/components/NewThread.vue
  • src/renderer/src/components/chat-input/composables/usePromptInputConfig.ts
  • src/main/presenter/tabPresenter.ts
  • src/main/presenter/agentPresenter/utility/promptEnhancer.ts
  • src/renderer/src/stores/modelStore.ts
  • src/main/presenter/agentPresenter/message/messageBuilder.ts
test/**/*.test.ts

📄 CodeRabbit inference engine (AGENTS.md)

Vitest test suites should be organized in test/main/** and test/renderer/** mirroring source structure, with file names following *.test.ts or *.spec.ts pattern

Files:

  • test/renderer/utils/maxOutputTokens.test.ts
**/*.{ts,tsx,vue}

📄 CodeRabbit inference engine (AGENTS.md)

**/*.{ts,tsx,vue}: Use camelCase for variable and function names; use PascalCase for types and classes; use SCREAMING_SNAKE_CASE for constants
Configure Prettier with single quotes, no semicolons, and line width of 100 characters. Run pnpm run format after completing features

Files:

  • test/renderer/utils/maxOutputTokens.test.ts
  • src/renderer/src/utils/maxOutputTokens.ts
  • src/renderer/src/components/NewThread.vue
  • src/renderer/src/components/chat-input/composables/usePromptInputConfig.ts
  • src/main/presenter/tabPresenter.ts
  • src/main/presenter/agentPresenter/utility/promptEnhancer.ts
  • src/renderer/src/stores/modelStore.ts
  • src/main/presenter/agentPresenter/message/messageBuilder.ts
src/renderer/src/**/*.{ts,tsx,vue}

📄 CodeRabbit inference engine (CLAUDE.md)

Use usePresenter.ts composable for renderer-to-main IPC communication via direct presenter method calls

Ensure all code comments are in English and all log messages are in English, with no non-English text in code comments or console statements

Use VueUse composables for common utilities like useLocalStorage, useClipboard, useDebounceFn

Vue 3 renderer app code should be organized in src/renderer/src with subdirectories for components/, stores/, views/, i18n/, and lib/

Files:

  • src/renderer/src/utils/maxOutputTokens.ts
  • src/renderer/src/components/NewThread.vue
  • src/renderer/src/components/chat-input/composables/usePromptInputConfig.ts
  • src/renderer/src/stores/modelStore.ts
src/renderer/src/**/*.{vue,ts,tsx}

📄 CodeRabbit inference engine (.cursor/rules/i18n.mdc)

src/renderer/src/**/*.{vue,ts,tsx}: Use vue-i18n framework for internationalization located at src/renderer/src/i18n/
All user-facing strings must use i18n keys, not hardcoded text

src/renderer/src/**/*.{vue,ts,tsx}: Use ref for primitives and references, reactive for objects in Vue 3 Composition API
Prefer computed properties over methods for derived state in Vue components
Import Shadcn Vue components from @/shadcn/components/ui/ path alias
Use the cn() utility function combining clsx and tailwind-merge for dynamic Tailwind classes
Use defineAsyncComponent() for lazy loading heavy Vue components
Use TypeScript for all Vue components and composables with explicit type annotations
Define TypeScript interfaces for Vue component props and data structures
Use usePresenter composable for main process communication instead of direct IPC calls

Files:

  • src/renderer/src/utils/maxOutputTokens.ts
  • src/renderer/src/components/NewThread.vue
  • src/renderer/src/components/chat-input/composables/usePromptInputConfig.ts
  • src/renderer/src/stores/modelStore.ts
src/renderer/src/**/*.{ts,tsx,js,jsx}

📄 CodeRabbit inference engine (.cursor/rules/vue-stack-guide.mdc)

Use class-variance-authority (CVA) for defining component variants with Tailwind classes

Files:

  • src/renderer/src/utils/maxOutputTokens.ts
  • src/renderer/src/components/chat-input/composables/usePromptInputConfig.ts
  • src/renderer/src/stores/modelStore.ts
src/renderer/src/**/*.{ts,tsx}

📄 CodeRabbit inference engine (.cursor/rules/vue-stack-guide.mdc)

src/renderer/src/**/*.{ts,tsx}: Use shallowRef and shallowReactive for optimizing reactivity with large objects
Prefer type over interface in TypeScript unless using inheritance with extends

Files:

  • src/renderer/src/utils/maxOutputTokens.ts
  • src/renderer/src/components/chat-input/composables/usePromptInputConfig.ts
  • src/renderer/src/stores/modelStore.ts
src/renderer/**/*.vue

📄 CodeRabbit inference engine (CLAUDE.md)

src/renderer/**/*.vue: Use Vue 3 Composition API for all components
Use Tailwind CSS for styling with scoped styles
All user-facing strings must use i18n keys via vue-i18n

Files:

  • src/renderer/src/components/NewThread.vue
src/renderer/src/**/*.vue

📄 CodeRabbit inference engine (.cursor/rules/i18n.mdc)

Import useI18n from vue-i18n in Vue components to access translation functions t and locale

src/renderer/src/**/*.vue: Use <script setup> syntax for concise Vue 3 component definitions with Composition API
Define props and emits explicitly in Vue components using defineProps and defineEmits with TypeScript interfaces
Use provide/inject for dependency injection in Vue components instead of prop drilling
Use Tailwind CSS for all styling instead of writing scoped CSS files
Use mobile-first responsive design approach with Tailwind breakpoints
Use Iconify Vue with lucide icons as primary choice, following pattern lucide:{icon-name}
Use v-memo directive for memoizing expensive computations in templates
Use v-once directive for rendering static content without reactivity updates
Use virtual scrolling with RecycleScroller component for rendering long lists
Subscribe to events using rendererEvents.on() and unsubscribe in onUnmounted lifecycle hook

Files:

  • src/renderer/src/components/NewThread.vue
src/renderer/src/components/**/*.vue

📄 CodeRabbit inference engine (.cursor/rules/vue-stack-guide.mdc)

Name Vue components using PascalCase (e.g., ChatInput.vue, MessageItemUser.vue)

Files:

  • src/renderer/src/components/NewThread.vue
**/*.vue

📄 CodeRabbit inference engine (AGENTS.md)

Vue components must be named in PascalCase (e.g., ChatInput.vue) and use Vue 3 Composition API with Pinia for state management and Tailwind for styling

Files:

  • src/renderer/src/components/NewThread.vue
src/main/presenter/**/*.ts

📄 CodeRabbit inference engine (CLAUDE.md)

src/main/presenter/**/*.ts: Use EventBus to broadcast events from main to renderer via mainWindow.webContents.send()
Implement one presenter per functional domain in the main process

Files:

  • src/main/presenter/tabPresenter.ts
  • src/main/presenter/agentPresenter/utility/promptEnhancer.ts
  • src/main/presenter/agentPresenter/message/messageBuilder.ts
src/main/**/*.ts

📄 CodeRabbit inference engine (CLAUDE.md)

src/main/**/*.ts: Use EventBus from src/main/eventbus.ts for decoupled inter-process communication
Context isolation must be enabled with preload scripts for secure IPC communication

Electron main process code should reside in src/main/, with presenters organized in presenter/ subdirectory (Window, Tab, Thread, Mcp, Config, LLMProvider), and app events managed via eventbus.ts

Files:

  • src/main/presenter/tabPresenter.ts
  • src/main/presenter/agentPresenter/utility/promptEnhancer.ts
  • src/main/presenter/agentPresenter/message/messageBuilder.ts
src/renderer/src/**/stores/*.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Use Pinia for frontend state management

Files:

  • src/renderer/src/stores/modelStore.ts
src/renderer/src/stores/**/*.ts

📄 CodeRabbit inference engine (.cursor/rules/vue-stack-guide.mdc)

src/renderer/src/stores/**/*.ts: Use Setup Store syntax with defineStore function pattern in Pinia stores
Use getters (computed properties) for derived state in Pinia stores
Keep Pinia store actions focused on state mutations and async operations

Files:

  • src/renderer/src/stores/modelStore.ts
🧠 Learnings (17)
📓 Common learnings
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-01-05T02:40:52.831Z
Learning: Applies to src/main/presenter/configPresenter/**/*.ts : Custom prompts are managed independently of MCP through config data source using `configPresenter.getCustomPrompts()`
📚 Learning: 2026-01-05T02:40:52.831Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-01-05T02:40:52.831Z
Learning: Applies to src/renderer/src/**/*.{ts,tsx,vue} : Use `usePresenter.ts` composable for renderer-to-main IPC communication via direct presenter method calls

Applied to files:

  • src/renderer/src/components/NewThread.vue
📚 Learning: 2026-01-05T02:41:31.619Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/vue-stack-guide.mdc:0-0
Timestamp: 2026-01-05T02:41:31.619Z
Learning: Applies to src/renderer/src/**/*.{vue,ts,tsx} : Use `usePresenter` composable for main process communication instead of direct IPC calls

Applied to files:

  • src/renderer/src/components/NewThread.vue
📚 Learning: 2026-01-05T02:41:31.619Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/vue-stack-guide.mdc:0-0
Timestamp: 2026-01-05T02:41:31.619Z
Learning: Applies to src/renderer/src/**/*.{ts,tsx,vue} : Use VueUse composables for common utilities like `useLocalStorage`, `useClipboard`, `useDebounceFn`

Applied to files:

  • src/renderer/src/components/chat-input/composables/usePromptInputConfig.ts
  • src/renderer/src/stores/modelStore.ts
📚 Learning: 2026-01-05T02:41:31.619Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/vue-stack-guide.mdc:0-0
Timestamp: 2026-01-05T02:41:31.619Z
Learning: Applies to src/renderer/src/composables/**/*.ts : Name composables using camelCase with `use` prefix (e.g., `useChatState.ts`, `useMessageList.ts`)

Applied to files:

  • src/renderer/src/components/chat-input/composables/usePromptInputConfig.ts
📚 Learning: 2025-06-21T15:48:29.950Z
Learnt from: neoragex2002
Repo: ThinkInAIXYZ/deepchat PR: 550
File: src/main/presenter/mcpPresenter/inMemoryServers/meetingServer.ts:250-252
Timestamp: 2025-06-21T15:48:29.950Z
Learning: In the meeting server implementation (src/main/presenter/mcpPresenter/inMemoryServers/meetingServer.ts), when multiple tabs have the same title, the user prefers to let the code silently select the first match without adding warnings or additional ambiguity handling.

Applied to files:

  • src/main/presenter/tabPresenter.ts
📚 Learning: 2026-01-05T02:40:52.831Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-01-05T02:40:52.831Z
Learning: Applies to src/main/presenter/configPresenter/**/*.ts : Custom prompts are managed independently of MCP through config data source using `configPresenter.getCustomPrompts()`

Applied to files:

  • src/main/presenter/agentPresenter/utility/promptEnhancer.ts
  • src/main/presenter/agentPresenter/message/messageBuilder.ts
📚 Learning: 2026-01-05T02:41:31.619Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/vue-stack-guide.mdc:0-0
Timestamp: 2026-01-05T02:41:31.619Z
Learning: Applies to src/renderer/src/stores/**/*.ts : Use getters (computed properties) for derived state in Pinia stores

Applied to files:

  • src/renderer/src/stores/modelStore.ts
📚 Learning: 2026-01-05T02:40:52.831Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-01-05T02:40:52.831Z
Learning: Applies to src/renderer/src/**/stores/*.ts : Use Pinia for frontend state management

Applied to files:

  • src/renderer/src/stores/modelStore.ts
📚 Learning: 2026-01-05T02:41:31.619Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/vue-stack-guide.mdc:0-0
Timestamp: 2026-01-05T02:41:31.619Z
Learning: Applies to src/renderer/src/stores/**/*.ts : Use Setup Store syntax with `defineStore` function pattern in Pinia stores

Applied to files:

  • src/renderer/src/stores/modelStore.ts
📚 Learning: 2026-01-05T02:41:31.619Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/vue-stack-guide.mdc:0-0
Timestamp: 2026-01-05T02:41:31.619Z
Learning: Applies to src/renderer/src/stores/**/*.ts : Keep Pinia store actions focused on state mutations and async operations

Applied to files:

  • src/renderer/src/stores/modelStore.ts
📚 Learning: 2026-01-05T02:41:31.619Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/vue-stack-guide.mdc:0-0
Timestamp: 2026-01-05T02:41:31.619Z
Learning: Applies to src/renderer/src/**/*.{vue,ts,tsx} : Use `ref` for primitives and references, `reactive` for objects in Vue 3 Composition API

Applied to files:

  • src/renderer/src/stores/modelStore.ts
📚 Learning: 2026-01-05T02:41:31.619Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/vue-stack-guide.mdc:0-0
Timestamp: 2026-01-05T02:41:31.619Z
Learning: Applies to src/renderer/src/**/*.{vue,ts,tsx} : Prefer `computed` properties over methods for derived state in Vue components

Applied to files:

  • src/renderer/src/stores/modelStore.ts
📚 Learning: 2026-01-05T02:41:31.619Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/vue-stack-guide.mdc:0-0
Timestamp: 2026-01-05T02:41:31.619Z
Learning: Applies to src/renderer/src/**/*.vue : Use `v-memo` directive for memoizing expensive computations in templates

Applied to files:

  • src/renderer/src/stores/modelStore.ts
📚 Learning: 2026-01-05T02:41:31.619Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/vue-stack-guide.mdc:0-0
Timestamp: 2026-01-05T02:41:31.619Z
Learning: Applies to src/renderer/src/**/*.{vue,ts,tsx} : Import Shadcn Vue components from `@/shadcn/components/ui/` path alias

Applied to files:

  • src/renderer/src/stores/modelStore.ts
📚 Learning: 2026-01-05T02:41:31.619Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/vue-stack-guide.mdc:0-0
Timestamp: 2026-01-05T02:41:31.619Z
Learning: Applies to src/renderer/src/**/*.vue : Use `<script setup>` syntax for concise Vue 3 component definitions with Composition API

Applied to files:

  • src/renderer/src/stores/modelStore.ts
📚 Learning: 2026-01-05T02:40:52.831Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-01-05T02:40:52.831Z
Learning: Applies to src/main/presenter/mcpPresenter/inMemoryServers/**/*.ts : MCP tool implementations must be registered in `mcpPresenter/index.ts` after implementation

Applied to files:

  • src/main/presenter/agentPresenter/message/messageBuilder.ts
🧬 Code graph analysis (5)
test/renderer/utils/maxOutputTokens.test.ts (1)
src/renderer/src/utils/maxOutputTokens.ts (2)
  • calculateSafeDefaultMaxTokens (15-29)
  • GLOBAL_OUTPUT_TOKEN_MAX (31-31)
src/renderer/src/components/chat-input/composables/usePromptInputConfig.ts (1)
src/renderer/src/utils/maxOutputTokens.ts (2)
  • calculateSafeDefaultMaxTokens (15-29)
  • GLOBAL_OUTPUT_TOKEN_MAX (31-31)
src/main/presenter/tabPresenter.ts (1)
test/mocks/electron-toolkit-utils.ts (1)
  • is (1-3)
src/renderer/src/stores/modelStore.ts (3)
src/shared/types/presenters/legacy.presenters.d.ts (2)
  • MODEL_META (703-718)
  • RENDERER_MODEL_META (687-702)
src/shared/types/presenters/index.d.ts (2)
  • MODEL_META (11-11)
  • RENDERER_MODEL_META (12-12)
src/shared/types/presenters/llmprovider.presenter.d.ts (2)
  • MODEL_META (29-44)
  • RENDERER_MODEL_META (12-27)
src/main/presenter/agentPresenter/message/messageBuilder.ts (1)
src/main/presenter/agentPresenter/utility/promptEnhancer.ts (1)
  • enhanceSystemPromptWithDateTime (30-61)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: build-check (x64)
🔇 Additional comments (26)
src/renderer/src/stores/modelStore.ts (5)

1-3: LGTM!

The new imports are correctly added to support the ModelQueryHandle abstraction with proper Vue reactivity (computed, ComputedRef) and Pinia Colada types (DataState, EntryKey, UseQueryEntry).


18-23: LGTM!

The ModelQueryHandle<TData> type provides a clean abstraction that decouples consumers from the underlying UseQueryEntry implementation details. Using type over interface aligns with the coding guidelines.


37-39: LGTM!

The query maps correctly use ModelQueryHandle<T> for memoization. Since template bindings use the reactive data ComputedRef within each handle rather than the Map itself, plain Maps are appropriate here.


141-156: LGTM!

The ensureQueryHandle helper correctly implements memoization with identity checking. When the cache entry is invalidated and recreated, the identity check existing?.entry === entry ensures a new handle is created, preventing stale references.
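The memoization-with-identity-check pattern described above can be sketched as follows. The names (`ensureQueryHandle`, the `entry` field) follow the review comment; the concrete shapes are assumptions, not the repo's exact code:

```typescript
// Minimal sketch of memoization with an identity check on the wrapped entry.
// A recreated entry (e.g. after cache invalidation) gets a fresh handle.
type Entry<T> = { data: T }

type Handle<T> = {
  entry: Entry<T> // identity of the underlying query entry
  data: () => T // accessor that consumers bind to
}

const handles = new Map<string, Handle<number>>()

function ensureQueryHandle(key: string, entry: Entry<number>): Handle<number> {
  const existing = handles.get(key)
  // Reuse the handle only if it wraps the very same entry object.
  if (existing && existing.entry === entry) return existing
  const handle: Handle<number> = { entry, data: () => entry.data }
  handles.set(key, handle)
  return handle
}
```

Because consumers hold the `data` accessor rather than the `Map`, the map itself never needs to be reactive.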


158-191: LGTM!

The query getter functions are cleanly refactored to use the new ensureQueryHandle pattern, maintaining consistent behavior while improving code organization. The query logic and staleTime values remain appropriate.

src/main/presenter/agentPresenter/utility/promptEnhancer.ts (4)

3-14: LGTM!

The formatCurrentDateTime function provides a consistent, unambiguous date/time format suitable for LLM context. Using 'en-US' locale ensures predictable output regardless of system settings.


16-21: LGTM!

Clean mapping of Node.js platform identifiers to human-readable names with appropriate fallback.


23-28: LGTM!

Well-designed options interface with sensible optional properties. The platform override enables testability without mocking process.platform.


30-61: LGTM!

The function cleanly handles all cases with proper defaults and early returns. The conditional runtime context building is well-structured.

One minor note: agentWorkspacePath is trimmed both here (line 51) and at the call site in messageBuilder.ts (line 95). This is defensive and harmless, but you could document that this function handles untrimmed input.
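The behavior reviewed above can be sketched roughly like this. The option names mirror the review; the exact prompt wording, block layout, and date options are assumptions:

```typescript
// Hedged sketch of an options-based prompt enhancer: appends a runtime
// context block (date/time, platform, optional agent workspace).
interface EnhanceOptions {
  isAgentMode?: boolean
  agentWorkspacePath?: string | null
  platform?: string // override for testability without mocking process.platform
}

function formatPlatformName(platform: string): string {
  switch (platform) {
    case 'darwin':
      return 'macOS'
    case 'win32':
      return 'Windows'
    case 'linux':
      return 'Linux'
    default:
      return platform // fallback: pass the raw identifier through
  }
}

function enhanceSystemPromptWithDateTime(prompt: string, options: EnhanceOptions = {}): string {
  // 'en-US' keeps the output predictable regardless of system locale.
  const now = new Date().toLocaleString('en-US', { dateStyle: 'full', timeStyle: 'short' })
  const lines = [`Current date and time: ${now}`]
  lines.push(`Platform: ${formatPlatformName(options.platform ?? process.platform)}`)
  // Defensive trim: callers may pass untrimmed or empty paths.
  const workspace = options.agentWorkspacePath?.trim() || null
  if (options.isAgentMode && workspace) {
    lines.push(`Working directory: ${workspace}`)
  }
  return `${prompt}\n\n${lines.join('\n')}`
}
```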

src/main/presenter/agentPresenter/message/messageBuilder.ts (5)

1-20: LGTM!

Import cleanup is appropriate with the removal of browser context dependencies.


92-96: LGTM!

Clean migration to the options-based API. The ?.trim() || null pattern correctly handles undefined, null, and empty string cases, converting all to null for consistency.
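The normalization pattern noted above can be isolated as a one-liner (the helper name here is hypothetical):

```typescript
// The `?.trim() || null` pattern: undefined, null, and whitespace-only
// strings all collapse to null; non-empty strings come back trimmed.
function normalizePath(input?: string | null): string | null {
  return input?.trim() || null
}
```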


117-118: LGTM!

Token calculation correctly uses finalSystemPrompt now that browser context augmentation has been removed.


192-198: Verify: Should agent mode context be preserved in tool call continuation?

The call uses an empty options object {}, meaning isAgentMode defaults to false and no agentWorkspacePath is included. However, conversation.settings contains chatMode and agentWorkspacePath. If this is a tool call continuation in agent mode, the runtime context (working directory info) might be relevant for the LLM to understand the execution environment.

Was this intentional to keep tool continuations simpler, or should it mirror the initial context?


226-232: Same concern: agent mode context not propagated.

Same as buildContinueToolCallContext - empty options means agent mode context (working directory) is not included. This should be consistent with your design intent for tool execution contexts.

src/main/presenter/tabPresenter.ts (1)

220-222: Verify if DevTools should open for every tab in development.

The current implementation opens DevTools for every tab created in development mode. This could result in multiple DevTools windows if the user creates multiple tabs, which may be disruptive.

Consider whether:

  1. DevTools should only open for the first tab in each window
  2. This behavior is intentionally changed to always open for debugging purposes
  3. A configuration option should control this behavior

Also note that this change seems unrelated to the PR title "feat: better system prompt" - please confirm this is an intentional inclusion.

test/renderer/utils/maxOutputTokens.test.ts (5)

5-37: LGTM! Base cases are well-covered.

The test cases correctly verify the capping behavior:

  • Models exceeding the global limit are capped at 32000
  • Models below the limit preserve their native maxTokens
  • Boundary case (exactly 32000) is handled correctly

77-86: LGTM! Correct behavior when reasoning is not supported.

The test correctly verifies that thinkingBudget is ignored when reasoningSupported is false, returning the capped model limit.


88-133: LGTM! Edge cases are thoroughly tested.

The test suite covers important boundary conditions:

  • Zero and undefined budgets return the full capped limit
  • Negative budgets are safely handled (treated as zero)
  • Budget equaling model cap correctly returns zero text tokens
  • Small models with budgets calculate correctly
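Taken together, the cases above pin down the calculation; a sketch consistent with them (parameter names follow the reviewed interface, the body is an assumption about the implementation):

```typescript
// Sketch of the capping behavior the tests exercise: cap at the global
// limit, then reserve the thinking budget only when reasoning is supported.
const GLOBAL_OUTPUT_TOKEN_MAX = 32000

interface SafeMaxTokensInput {
  modelMaxTokens: number
  reasoningSupported: boolean
  thinkingBudget?: number
}

function calculateSafeDefaultMaxTokens(input: SafeMaxTokensInput): number {
  // Cap the model's native limit at the global safety ceiling.
  const capped = Math.min(input.modelMaxTokens, GLOBAL_OUTPUT_TOKEN_MAX)
  if (!input.reasoningSupported) return capped
  // Negative or missing budgets count as zero; never return below zero.
  const budget = Math.max(0, input.thinkingBudget ?? 0)
  return Math.max(0, capped - budget)
}
```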

135-168: LGTM! Real-world scenarios provide excellent integration coverage.

The test suite validates practical use cases that users will encounter:

  • New conversations with various model types
  • Reasoning models with thinking budgets
  • Model switching scenarios

171-175: LGTM! Constant value is verified.

Simple and effective test ensuring GLOBAL_OUTPUT_TOKEN_MAX has the expected value.

src/renderer/src/utils/maxOutputTokens.ts (3)

1-1: LGTM! Reasonable global cap for output tokens.

The 32000 token limit is a sensible safety cap that prevents excessive token generation while accommodating most use cases.


3-7: LGTM! Interface is well-designed.

The interface clearly defines the required parameters with appropriate optionality:

  • Required fields capture essential configuration
  • Optional thinkingBudget aligns with conditional reasoning logic

31-31: LGTM! Exports are properly structured.

Both the function and constant are correctly exported for public use.

src/renderer/src/components/NewThread.vue (2)

126-126: LGTM! Import statement is correct.

Properly imports the new utility function and constant from the utils module.


159-160: LGTM! Initial values now use the global constant.

Replacing hard-coded 4096 with GLOBAL_OUTPUT_TOKEN_MAX improves consistency and maintainability. The higher initial limit (32000) is more appropriate as it will be constrained by model-specific limits during configuration loading.

src/renderer/src/components/chat-input/composables/usePromptInputConfig.ts (1)

16-17: LGTM! Import section is well-organized.

The utils import follows the file's existing organization pattern and correctly imports the necessary function and constant.

@zerob13 zerob13 merged commit 0b0d392 into dev Jan 7, 2026
2 checks passed
zerob13 added a commit that referenced this pull request Jan 8, 2026
* refactor(agent): enhance system prompt with runtime context and remove browser injection

* feat(renderer): add smart default maxTokens calculation with 32k cap

- Add helper function to calculate safe default maxTokens
- Apply 32k global limit as safety cap
- Reserve space for thinking budget when reasoning is supported
- Update both Chat and NewThread modes to use smart defaults
- Remove hardcoded 8192 threshold logic
- Add comprehensive tests for the calculation logic

* fix: colada warning