feat: better system prompt #1258
Conversation
…e browser injection
- Add helper function to calculate safe default maxTokens
- Apply 32k global limit as safety cap
- Reserve space for thinking budget when reasoning is supported
- Update both Chat and NewThread modes to use smart defaults
- Remove hardcoded 8192 threshold logic
- Add comprehensive tests for the calculation logic
📝 Walkthrough

The PR refactors system prompt construction to be data-driven by passing context options to a unified enhancement function, removes browser context dependencies from agent prompts, introduces safe token limit calculations respecting model and reasoning constraints, abstracts model query handling with a new handle wrapper, and auto-opens DevTools for tabs in development mode.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
src/renderer/src/stores/modelStore.ts (3)
223-223: Non-English log message violates coding guidelines. As per coding guidelines, all logs and comments must be in English; this Chinese error message should be translated.

🔧 Proposed fix

```diff
- console.error(`读取模型配置失败: ${providerId}/${model.id}`, error)
+ console.error(`Failed to read model config: ${providerId}/${model.id}`, error)
```
296-296: Non-English log message violates coding guidelines. This Chinese error message should be translated to English.

🔧 Proposed fix

```diff
- console.error(`刷新自定义模型失败: ${providerId}`, error)
+ console.error(`Failed to refresh custom models: ${providerId}`, error)
```
434-434: Non-English log message violates coding guidelines. This Chinese error message should be translated to English.

🔧 Proposed fix

```diff
- console.error(`刷新标准模型失败: ${providerId}`, error)
+ console.error(`Failed to refresh standard models: ${providerId}`, error)
```
🤖 Fix all issues with AI agents
In @src/main/presenter/tabPresenter.ts:
- Line 219: Replace the Chinese inline comment "// DevTools 不再自动打开(避免在 macOS
全屏时产生额外窗口/空间的异常体验)" with an English comment, e.g. "// DevTools no longer open
automatically (avoids extra window/space issues in macOS fullscreen)", so all
comments follow the project's English-only guideline; locate and update this
comment in tabPresenter.ts near the DevTools/open behavior code.
- Around line 219-222: The comment above the is.dev check is misleading: it says
DevTools will no longer automatically open while the code actually opens
DevTools in development; update the comment to state that in development
(is.dev) DevTools are opened in detached mode to avoid macOS fullscreen creating
extra windows/spaces. Edit the comment immediately above
view.webContents.openDevTools({ mode: 'detach' }) to clearly state this intent
and reference that the condition is guarded by is.dev.
In @test/renderer/utils/maxOutputTokens.test.ts:
- Around line 49-56: The test description is incorrect for the assertion in the
it block that calls calculateSafeDefaultMaxTokens (modelMaxTokens: 200000,
reasoningSupported: true, thinkingBudget: 20000) and expects 12000; update the
it description string to accurately describe this scenario (e.g., mention budget
20k and expected 12k result) OR change the test inputs and expectation to match
the original description (use a 6000 user-config value and assert
expect(...).toBe(6000)); locate the test by the calculateSafeDefaultMaxTokens
usage in maxOutputTokens.test.ts and edit either the description text or the
inputs/expectation so description and implementation agree.
🧹 Nitpick comments (6)
src/renderer/src/stores/modelStore.ts (1)
126-139: Consider logging errors when swallowing them for better debugging. When `throwOnError` is `false`, errors are silently caught and the current state is returned. This can hide issues during development. Consider adding optional error logging.

♻️ Proposed improvement

```diff
 const refresh = (throwOnError?: boolean) => {
   const promise = queryCache.refresh(entry)
-  return throwOnError ? promise : promise.catch(() => entry.state.value)
+  return throwOnError
+    ? promise
+    : promise.catch((error) => {
+        console.warn('[ModelStore] Query refresh failed:', error)
+        return entry.state.value
+      })
 }

 const refetch = (throwOnError?: boolean) => {
   const promise = queryCache.fetch(entry)
-  return throwOnError ? promise : promise.catch(() => entry.state.value)
+  return throwOnError
+    ? promise
+    : promise.catch((error) => {
+        console.warn('[ModelStore] Query refetch failed:', error)
+        return entry.state.value
+      })
 }
```

src/main/presenter/agentPresenter/utility/promptEnhancer.ts (1)
1-1: Consider exporting `PlatformName` for type consistency. If consumers need to reference this type (e.g., for testing or type annotations), exporting it would be beneficial. Currently, it's internal-only.
src/main/presenter/agentPresenter/message/messageBuilder.ts (1)
293-298: Consider: Non-English prompt string for tool call fallback. This Chinese prompt may be intentional for user experience, but if the codebase targets English-speaking LLMs or international audiences, consider providing an English version or making this localizable.
As per coding guidelines: "Use English for logs and comments in TypeScript/JavaScript code" - though this is a prompt string rather than a comment/log.
src/renderer/src/utils/maxOutputTokens.ts (1)
9-29: LGTM! Function logic is correct and well-documented. The implementation correctly:
- Applies the global cap to model max tokens
- Reserves space for thinking budget when reasoning is supported
- Returns appropriate values for all edge cases
The JSDoc clearly explains the function's purpose and behavior.
♻️ Optional: Remove redundant safety check
Line 23's `Math.max(0, thinkingBudget)` is redundant, since line 22 already ensures `thinkingBudget > 0`. Consider simplifying:

```diff
 if (reasoningSupported && thinkingBudget !== undefined && thinkingBudget > 0) {
-  const safeThinkingBudget = Math.max(0, thinkingBudget)
-  const textTokens = Math.max(0, modelCap - safeThinkingBudget)
+  const textTokens = Math.max(0, modelCap - thinkingBudget)
   return textTokens
 }
```

The existing code is defensive and doesn't cause issues, so this is purely optional.
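Putting the reviewed behavior together, the utility can be sketched as follows. This is a hypothetical reconstruction from the review comments and test expectations; the actual implementation in `src/renderer/src/utils/maxOutputTokens.ts` may differ in detail.

```typescript
// Hypothetical reconstruction of the reviewed utility; the real code in
// src/renderer/src/utils/maxOutputTokens.ts may differ in detail.
const GLOBAL_OUTPUT_TOKEN_MAX = 32000

interface SafeDefaultMaxTokensOptions {
  modelMaxTokens: number
  reasoningSupported: boolean
  thinkingBudget?: number
}

function calculateSafeDefaultMaxTokens(options: SafeDefaultMaxTokensOptions): number {
  const { modelMaxTokens, reasoningSupported, thinkingBudget } = options
  // Apply the 32k global safety cap to the model's native limit
  const modelCap = Math.min(modelMaxTokens, GLOBAL_OUTPUT_TOKEN_MAX)
  // Reserve space for the thinking budget when reasoning is supported;
  // zero, undefined, and negative budgets fall through to the full cap
  if (reasoningSupported && thinkingBudget !== undefined && thinkingBudget > 0) {
    return Math.max(0, modelCap - thinkingBudget)
  }
  return modelCap
}
```

Under this sketch, a 200k-token reasoning model with a 20k thinking budget yields 32000 − 20000 = 12000 text tokens, matching the test expectation discussed above.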
src/renderer/src/components/NewThread.vue (1)
207-217: Safe default calculation is correct, but includes defensive redundancy. The logic correctly calculates and applies the safe default max tokens. However, lines 215-217 appear to be redundant because `calculateSafeDefaultMaxTokens` already ensures the result doesn't exceed `config.maxTokens || GLOBAL_OUTPUT_TOKEN_MAX`, so the condition on line 215 should never be true.

♻️ Optional: Remove redundant clamping

The clamping check is defensive but unnecessary, since `safeDefaultMaxTokens` is already constrained:

```diff
 const safeDefaultMaxTokens = calculateSafeDefaultMaxTokens({
   modelMaxTokens: config.maxTokens || GLOBAL_OUTPUT_TOKEN_MAX,
   thinkingBudget: config.thinkingBudget,
   reasoningSupported: Boolean(config.reasoning)
 })
 maxTokens.value = safeDefaultMaxTokens
-
-if (maxTokens.value > (config.maxTokens || GLOBAL_OUTPUT_TOKEN_MAX)) {
-  maxTokens.value = config.maxTokens || GLOBAL_OUTPUT_TOKEN_MAX
-}
```

The existing code is safe and doesn't cause issues, so this is purely optional.
src/renderer/src/components/chat-input/composables/usePromptInputConfig.ts (1)
111-125: LGTM! Token limit calculation is correctly implemented. The logic properly:
- Calculates the safe default respecting model limits and thinking budgets
- Enforces a sensible minimum (1024 tokens)
- Caps by the model's maximum limit
The sequential checks ensure the final value is always within valid bounds.
♻️ Optional: Simplify constraint logic

Consider combining the min/max constraints for clarity:

```diff
 const safeDefaultMaxTokens = calculateSafeDefaultMaxTokens({
   modelMaxTokens: config.maxTokens || GLOBAL_OUTPUT_TOKEN_MAX,
   thinkingBudget: config.thinkingBudget,
   reasoningSupported: Boolean(config.reasoning)
 })
-configMaxTokens.value = safeDefaultMaxTokens
-
-if (configMaxTokens.value < 1024) {
-  configMaxTokens.value = 1024
-}
-
-if (configMaxTokensLimit.value && configMaxTokens.value > configMaxTokensLimit.value) {
-  configMaxTokens.value = configMaxTokensLimit.value
-}
+// Apply constraints: minimum 1024, maximum from model limit
+const minTokens = 1024
+const maxTokens = configMaxTokensLimit.value || safeDefaultMaxTokens
+configMaxTokens.value = Math.min(Math.max(safeDefaultMaxTokens, minTokens), maxTokens)
```

The existing code is clear and works correctly, so this is purely optional.
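In standalone form, the original sequential checks amount to the following (hypothetical helper name, plain numbers instead of the composable's Vue refs):

```typescript
// Hypothetical standalone equivalent of the sequential constraints above;
// the actual composable mutates Vue refs rather than returning a value.
function constrainConfigMaxTokens(safeDefault: number, modelLimit?: number): number {
  // Enforce a sensible minimum of 1024 tokens
  let value = Math.max(safeDefault, 1024)
  // Cap by the model's maximum limit when one is configured
  if (modelLimit !== undefined && value > modelLimit) {
    value = modelLimit
  }
  return value
}
```

One subtlety worth checking in any single-expression rewrite: when no model limit is configured, the sequential form still raises a sub-1024 default to 1024, so the combined expression should preserve that case.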
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (8)
- src/main/presenter/agentPresenter/message/messageBuilder.ts
- src/main/presenter/agentPresenter/utility/promptEnhancer.ts
- src/main/presenter/tabPresenter.ts
- src/renderer/src/components/NewThread.vue
- src/renderer/src/components/chat-input/composables/usePromptInputConfig.ts
- src/renderer/src/stores/modelStore.ts
- src/renderer/src/utils/maxOutputTokens.ts
- test/renderer/utils/maxOutputTokens.test.ts
🧬 Code graph analysis (5)

test/renderer/utils/maxOutputTokens.test.ts (1)
- src/renderer/src/utils/maxOutputTokens.ts: `calculateSafeDefaultMaxTokens` (15-29), `GLOBAL_OUTPUT_TOKEN_MAX` (31-31)

src/renderer/src/components/chat-input/composables/usePromptInputConfig.ts (1)
- src/renderer/src/utils/maxOutputTokens.ts: `calculateSafeDefaultMaxTokens` (15-29), `GLOBAL_OUTPUT_TOKEN_MAX` (31-31)

src/main/presenter/tabPresenter.ts (1)
- test/mocks/electron-toolkit-utils.ts: `is` (1-3)

src/renderer/src/stores/modelStore.ts (3)
- src/shared/types/presenters/legacy.presenters.d.ts: `MODEL_META` (703-718), `RENDERER_MODEL_META` (687-702)
- src/shared/types/presenters/index.d.ts: `MODEL_META` (11-11), `RENDERER_MODEL_META` (12-12)
- src/shared/types/presenters/llmprovider.presenter.d.ts: `MODEL_META` (29-44), `RENDERER_MODEL_META` (12-27)

src/main/presenter/agentPresenter/message/messageBuilder.ts (1)
- src/main/presenter/agentPresenter/utility/promptEnhancer.ts: `enhanceSystemPromptWithDateTime` (30-61)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: build-check (x64)
🔇 Additional comments (26)
src/renderer/src/stores/modelStore.ts (5)
1-3: LGTM! The new imports are correctly added to support the `ModelQueryHandle` abstraction with proper Vue reactivity (`computed`, `ComputedRef`) and Pinia Colada types (`DataState`, `EntryKey`, `UseQueryEntry`).

18-23: LGTM! The `ModelQueryHandle<TData>` type provides a clean abstraction that decouples consumers from the underlying `UseQueryEntry` implementation details. Using `type` over `interface` aligns with the coding guidelines.

37-39: LGTM! The query maps correctly use `ModelQueryHandle<T>` for memoization. Since template bindings use the reactive `data` `ComputedRef` within each handle rather than the Map itself, plain Maps are appropriate here.

141-156: LGTM! The `ensureQueryHandle` helper correctly implements memoization with identity checking. When the cache entry is invalidated and recreated, the identity check `existing?.entry === entry` ensures a new handle is created, preventing stale references.

158-191: LGTM! The query getter functions are cleanly refactored to use the new `ensureQueryHandle` pattern, maintaining consistent behavior while improving code organization. The query logic and staleTime values remain appropriate.

src/main/presenter/agentPresenter/utility/promptEnhancer.ts (4)
3-14: LGTM! The `formatCurrentDateTime` function provides a consistent, unambiguous date/time format suitable for LLM context. Using the 'en-US' locale ensures predictable output regardless of system settings.

16-21: LGTM! Clean mapping of Node.js platform identifiers to human-readable names with an appropriate fallback.

23-28: LGTM! Well-designed options interface with sensible optional properties. The `platform` override enables testability without mocking `process.platform`.

30-61: LGTM! The function cleanly handles all cases with proper defaults and early returns. The conditional runtime context building is well-structured.
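For illustration, the enhancement flow reviewed above might look roughly like this. This is a hypothetical sketch assembled from the review comments (the platform map, the options interface, the trimmed workspace path); the names, formats, and exact composition in `promptEnhancer.ts` may differ.

```typescript
// Hypothetical sketch of the prompt-enhancement flow described in this
// review; the real promptEnhancer.ts signatures and output format may differ.
const PLATFORM_NAMES: Record<string, string> = {
  darwin: 'macOS',
  win32: 'Windows',
  linux: 'Linux'
}

interface EnhanceOptions {
  isAgentMode?: boolean
  agentWorkspacePath?: string | null
  platform?: string // override for testability; defaults to process.platform
}

function enhanceSystemPromptWithDateTime(basePrompt: string, options: EnhanceOptions = {}): string {
  const platform = options.platform ?? process.platform
  const lines = [basePrompt.trim()]
  // Always append the current date/time for LLM context
  lines.push(`Current date and time: ${new Date().toLocaleString('en-US')}`)
  // Runtime context is only built conditionally, in agent mode
  if (options.isAgentMode) {
    lines.push(`Operating system: ${PLATFORM_NAMES[platform] ?? platform}`)
    const workspace = options.agentWorkspacePath?.trim()
    if (workspace) {
      lines.push(`Working directory: ${workspace}`)
    }
  }
  return lines.join('\n')
}
```

The sketch mirrors the behaviors the review calls out: the base prompt is preserved, agent-only context (OS, working directory) is gated on `isAgentMode`, and the workspace path is trimmed defensively.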
One minor note: `agentWorkspacePath` is trimmed both here (line 51) and at the call site in `messageBuilder.ts` (line 95). This is defensive and harmless, but you could document that this function handles untrimmed input.

src/main/presenter/agentPresenter/message/messageBuilder.ts (5)
1-20: LGTM! Import cleanup is appropriate with the removal of browser context dependencies.

92-96: LGTM! Clean migration to the options-based API. The `?.trim() || null` pattern correctly handles undefined, null, and empty string cases, converting all to `null` for consistency.

117-118: LGTM! Token calculation correctly uses `finalSystemPrompt` now that browser context augmentation has been removed.

192-198: Verify: Should agent mode context be preserved in tool call continuation? The call uses an empty options object `{}`, meaning `isAgentMode` defaults to `false` and no `agentWorkspacePath` is included. However, `conversation.settings` contains `chatMode` and `agentWorkspacePath`. If this is a tool call continuation in agent mode, the runtime context (working directory info) might be relevant for the LLM to understand the execution environment. Was this intentional, to keep tool continuations simpler, or should it mirror the initial context?

226-232: Same concern: agent mode context not propagated. As in `buildContinueToolCallContext`, empty options mean agent mode context (working directory) is not included. This should be consistent with your design intent for tool execution contexts.

src/main/presenter/tabPresenter.ts (1)
220-222: Verify whether DevTools should open for every tab in development. The current implementation opens DevTools for every tab created in development mode. This could result in multiple DevTools windows if the user creates multiple tabs, which may be disruptive.
Consider whether:
- DevTools should only open for the first tab in each window
- This behavior is intentionally changed to always open for debugging purposes
- A configuration option should control this behavior
Also note that this change seems unrelated to the PR title "feat: better system prompt" - please confirm this is an intentional inclusion.
test/renderer/utils/maxOutputTokens.test.ts (5)
5-37: LGTM! Base cases are well-covered. The test cases correctly verify the capping behavior:
- Models exceeding the global limit are capped at 32000
- Models below the limit preserve their native maxTokens
- Boundary case (exactly 32000) is handled correctly
77-86: LGTM! Correct behavior when reasoning is not supported. The test correctly verifies that `thinkingBudget` is ignored when `reasoningSupported` is false, returning the capped model limit.
88-133: LGTM! Edge cases are thoroughly tested. The test suite covers important boundary conditions:
- Zero and undefined budgets return the full capped limit
- Negative budgets are safely handled (treated as zero)
- Budget equaling model cap correctly returns zero text tokens
- Small models with budgets calculate correctly
135-168: LGTM! Real-world scenarios provide excellent integration coverage. The test suite validates practical use cases that users will encounter:
- New conversations with various model types
- Reasoning models with thinking budgets
- Model switching scenarios
171-175: LGTM! Constant value is verified. A simple and effective test ensuring `GLOBAL_OUTPUT_TOKEN_MAX` has the expected value.

src/renderer/src/utils/maxOutputTokens.ts (3)
1-1: LGTM! Reasonable global cap for output tokens. The 32000 token limit is a sensible safety cap that prevents excessive token generation while accommodating most use cases.
3-7: LGTM! Interface is well-designed. The interface clearly defines the required parameters with appropriate optionality:
- Required fields capture essential configuration
- Optional `thinkingBudget` aligns with the conditional reasoning logic
31-31: LGTM! Exports are properly structured. Both the function and constant are correctly exported for public use.

src/renderer/src/components/NewThread.vue (2)
src/renderer/src/components/NewThread.vue (2)
126-126: LGTM! Import statement is correct. It properly imports the new utility function and constant from the utils module.
159-160: LGTM! Initial values now use the global constant. Replacing the hard-coded 4096 with `GLOBAL_OUTPUT_TOKEN_MAX` improves consistency and maintainability. The higher initial limit (32000) is more appropriate, as it will be constrained by model-specific limits during configuration loading.

src/renderer/src/components/chat-input/composables/usePromptInputConfig.ts (1)
16-17: LGTM! Import section is well-organized. The utils import follows the file's existing organization pattern and correctly imports the necessary function and constant.
* refactor(agent): enhance system prompt with runtime context and remove browser injection
* feat(renderer): add smart default maxTokens calculation with 32k cap
  - Add helper function to calculate safe default maxTokens
  - Apply 32k global limit as safety cap
  - Reserve space for thinking budget when reasoning is supported
  - Update both Chat and NewThread modes to use smart defaults
  - Remove hardcoded 8192 threshold logic
  - Add comprehensive tests for the calculation logic
* fix: colada warning