
Commit 9ef8c3c

Authored by: mini2s, brunobergher, mrubens, roomote[bot], roomote
Roo to main (#609)
* web: More website copy tweaks (RooCodeInc#8326)
* fix: remove <thinking> tags from prompts for cleaner output and fewer tokens (RooCodeInc#8319)
* Upgrade Supernova (RooCodeInc#8330)
* chore: add changeset for v3.28.9 (RooCodeInc#8336)
* Changeset version bump (RooCodeInc#8337)
* Track when telemetry settings change (RooCodeInc#8339)
* fix: use max_completion_tokens for GPT-5 models in LiteLLM provider (RooCodeInc#6980)
* Make chat icons shrink-0 (RooCodeInc#8343)
* web: Testimonials (RooCodeInc#8360)
  * Adds lots of testimonials, 5-star reviews from the marketplace
  * Fits more testimonials on one page
  * Testimonial heading tweak
* ci: refresh contrib.rocks cache workflow (RooCodeInc#8083)
* feat: add Claude 4.5 Sonnet model across all providers (RooCodeInc#8368)
* chore: add changeset for v3.28.10 (RooCodeInc#8369)
* Changeset version bump (RooCodeInc#8370)
* fix: correct AWS Bedrock Claude Sonnet 4.5 model identifier (RooCodeInc#8372)
  Fixes RooCodeInc#8371: updates the model ID from anthropic.claude-4.5-sonnet-v1:0 to anthropic.claude-sonnet-4-5-20250929-v1:0 to match the AWS Bedrock naming convention
* fix: correct Claude Sonnet 4.5 model ID format (RooCodeInc#8373)
* chore: add changeset for v3.28.11 (RooCodeInc#8374)
* Changeset version bump (RooCodeInc#8375)
* fix: Anthropic Sonnet 4.5 model id + Bedrock 1M context checkbox (RooCodeInc#8384)
  * fix(anthropic): use the claude-sonnet-4-5 id
  * fix(bedrock): enable the 1M context checkbox for Sonnet 4.5 via a shared list
  * Closes RooCodeInc#8379, closes RooCodeInc#8381
* chore: add changeset for v3.28.12 (RooCodeInc#8385)
* Changeset version bump (RooCodeInc#8376)
  * Revise changelog for version 3.28.12: updated the version number and consolidated patch notes
* Fix Vertex Sonnet 4.5 (RooCodeInc#8391)
* fix: remove topP parameter from Bedrock inference config (RooCodeInc#8388)
* chore: add changeset for v3.28.13 (RooCodeInc#8393)
* Changeset version bump (RooCodeInc#8394)
* feat: add GLM-4.6 model support for z.ai provider (RooCodeInc#8408)
* chore: add changeset for v3.28.14 (RooCodeInc#8413)
* Changeset version bump (RooCodeInc#8414)
* A couple more Sonnet 4.5 fixes (RooCodeInc#8421)
* chore: Remove unsupported Gemini 2.5 Flash Image Preview free model (RooCodeInc#8359)
* Include reasoning messages in cloud tasks (RooCodeInc#8401)
* fix: show send button when only images are selected in chat textarea (RooCodeInc#8423)
* Add structured data to the homepage (RooCodeInc#8427)
* fix(ui): disable send button when there is no input content; update tests
* fix: address overeager "there are unsaved changes" dialog in settings (RooCodeInc#8410)
* feat: add UsageStats schema and type to cloud.ts (RooCodeInc#8441)
* Release: v1.80.0 (RooCodeInc#8442)
* feat: add new DeepSeek and GLM models with detailed descriptions to the Chutes provider (RooCodeInc#8467)
* Deprecate free Grok 4 Fast (RooCodeInc#8481)
* fix: improve save button activation in prompts settings (RooCodeInc#5780) (RooCodeInc#8267)
* fix: properly reset cost limit tracking when the user clicks "Reset and Continue" (RooCodeInc#6890)
* chore(deps): update dependency vite to v6.3.6 [security] (RooCodeInc#7838)
* chore(deps): update dependency glob to v11.0.3 (RooCodeInc#7767)
* chore: add changeset for v3.28.15 (RooCodeInc#8491)
* Changeset version bump (RooCodeInc#8492)
* Clamp GPT-5 max output tokens to 20% of the context window (RooCodeInc#8495)
* fix: add ollama and lmstudio to MODELS_BY_PROVIDER (RooCodeInc#8511)
* Release: v1.81.0 (RooCodeInc#8519)
* Add the parent task ID in telemetry (RooCodeInc#8532)
* Release: v1.82.0 (RooCodeInc#8535)
* feat: Experiment: show a bit of stats in the Cloud tab to help users discover there's more in Cloud (RooCodeInc#8415)
* Revert "feat: Experiment: Show a bit of stats in Cloud tab to help users discover there's more in Cloud" (RooCodeInc#8559)
* Identify cloud tasks in the extension bridge (RooCodeInc#8539)
* Revert "Clamp GPT-5 max output tokens to 20% of context window" (RooCodeInc#8582)
* feat: Add Claude Sonnet 4.5 1M context window support for the Claude Code provider (RooCodeInc#8586)
* chore: add changeset for v3.28.16 (RooCodeInc#8592)
* Changeset version bump (RooCodeInc#8593)
* fix(i18n): update zh-TW run command title (RooCodeInc#8631)
* feat(commands, webview): add TDD built-in command; refactor welcome tips and test guide definition
* refactor(project-wiki): separate command from subtask initialization
* feat(command): update built-in commands count and names in tests
* Add Claude Haiku 4.5 (RooCodeInc#8673)
* Release v3.28.17 (RooCodeInc#8674)
* Changeset version bump (RooCodeInc#8675)
* fix(editor): prevent file editing issues when git diff views are open (RooCodeInc#8676)
  Adds scheme checks so that only file:// URIs are matched when finding editors, avoiding issues with git diffs and other schemes; enforces the file:// scheme in editor lookups; includes error logging for failed editor lookups; removes the warnings
* web: Cloud page and updates to Pricing to explain Cloud Agent Credits (RooCodeInc#8605)
  * Adds mention of Cloud agents to /pricing; credit pricing FAQ
  * Skeleton of a /cloud page and more pricing page tweaks
  * Updates copy to the new credit system
  * Moves the Terms of Service to be backed by a markdown file (easier to read, edit, and diff); updated ToS and copy tweaks
  * Cloud screenshot, style adjustments, and style tweaks
* feat: add userAgent to Bedrock client for version tracking (RooCodeInc#8663)
* feat: Cloud agents in extension (RooCodeInc#8470)
* feat: Z AI: only two coding endpoints (RooCodeInc#8687) (RooCodeInc#8693)
* Remove request content from UI messages (RooCodeInc#8696)
* Left-align the welcome title (RooCodeInc#8700)
* Update image generation model selection (RooCodeInc#8698)
* feat(core): enhance client ID validation and CSP configuration
* web: Mobile image in /cloud (RooCodeInc#8705)
* feat(ui): add option to hide API request details by default
* Revert cloud agents for now (RooCodeInc#8713)
* chore: add changeset for v3.28.18 (RooCodeInc#8715)
* fix(task): adjust API request handling and error message assignment
* Changeset version bump (RooCodeInc#8716)
* test: update telemetry client mocks and fix a test id typo
* Normalize docs-extractor audience tags; remove admin/stakeholder; strip tool invocations (RooCodeInc#8717)
  docs(extractor): normalize audience to type="user"; remove admin/stakeholder; strip tool invocation examples
* Add Intercom as a subprocessor (RooCodeInc#8718)
* web: Fix leftover white background (RooCodeInc#8719)
* feat(zgsm): add supportsMaxTokens flag and adjust max token handling
* docs: update Configuring Profiles video link (RooCodeInc#8189)
* Fix link text for Roomote Control in README (RooCodeInc#8742)
* Try a 5s status mutation timeout (RooCodeInc#8734)
* web: Landing page for the reviewer (RooCodeInc#8740)
  * First pass; SEO; update apps/web-roo-code/src/app/reviewer/page.tsx
* Remove GPT-5 instructions/reasoning_summary from UI message metadata to prevent ui_messages.json bloat (RooCodeInc#8756)
  Problem: ui_messages.json was getting bloated with unused or duplicated content (system 'instructions' and 'reasoning_summary') that we never read back. Root cause: an earlier OpenAI Responses API implementation persisted these fields in per-message metadata, but 'instructions' are already sent as top-level request instructions and 'reasoning_summary' is surfaced live via streaming events; neither field is consumed from storage. Changes: (1) Task.persistGpt5Metadata now stores only previous_response_id; (2) removed instructions and reasoning_summary from the types; (3) updated the Zod schema; (4) the persistence layer writes messages as-is (no sanitizer); (5) tests green. Impact: smaller ui_messages.json with no runtime behavior change for requests. Migration: old metadata fields are ignored by the schema.
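The git-diff editor fix above comes down to filtering editor lookups by URI scheme. A minimal sketch of the idea (the EditorLike shape and findFileEditor helper are illustrative stand-ins, not the extension's actual API):

```typescript
// Hypothetical minimal model of an editor lookup that only matches
// real files on disk, skipping git:, untitled:, and other virtual schemes.
interface UriLike {
	scheme: string // "file", "git", "untitled", ...
	fsPath: string
}

interface EditorLike {
	uri: UriLike
}

// Return the first editor whose document is the on-disk file at fsPath.
// Without the scheme check, a git diff view of the same path could match
// and edits would target the read-only diff instead of the real file.
function findFileEditor(editors: EditorLike[], fsPath: string): EditorLike | undefined {
	return editors.find((e) => e.uri.scheme === "file" && e.uri.fsPath === fsPath)
}

const editors: EditorLike[] = [
	{ uri: { scheme: "git", fsPath: "/repo/src/app.ts" } }, // left side of a diff view
	{ uri: { scheme: "file", fsPath: "/repo/src/app.ts" } }, // the editable file
]

console.log(findFileEditor(editors, "/repo/src/app.ts")?.uri.scheme) // "file"
```

In the real extension the same check would apply to `vscode.window.visibleTextEditors`, whose `document.uri.scheme` distinguishes file-backed documents from diff-view documents.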
* Z.ai: add GLM-4.5-X, AirX, Flash (expand model coverage) (RooCodeInc#8745)
  * feat(zai): add GLM-4.5-X, AirX, Flash; sync with Z.ai docs; keep canonical api line keys
  * feat(zai): add GLM-4.5V vision model (supportsImages, pricing, 16K max output); add tests
  * feat(types,zai): sync the Z.AI international model map and tests: update pricing, context window, and capabilities for glm-4.5-x, glm-4.5-airx, glm-4.5-flash, glm-4.5v, and glm-4.6; add glm-4-32b-0414-128k; align tests with the new model specs
  * fix(zai): align handler generics with the expanded model ids to satisfy the CI compile step
  * chore(zai): remove tier pricing blocks for Z.ai models; simplify names in zaiApiLineConfigs for clarity; set the default temperature to 0.6
* Enable browser-use tool for all image-capable models (RooCodeInc#8121)
* Skip failing tools tests (RooCodeInc#8767)
* Update text for clarity in reviewer page (RooCodeInc#8753)
* feat: add GLM-4.6-turbo model to Chutes AI provider (RooCodeInc#8502)
* web: Dynamic OpenGraph images (RooCodeInc#8773)
* web: Update CTA link in /reviewer to send people to /cloud-agents/welcome (RooCodeInc#8774)
* feat: add 'anthropic/claude-haiku-4.5' to prompt caching models (RooCodeInc#8764)
* refactor(core): consolidate global custom instructions and improve shell handling
* fix: update X/Twitter username from roo_code to roocode (RooCodeInc#8780)
* fix(zgsm): safely pass optional language metadata to avoid runtime errors
* test: update test expectations for shell handling and prompt enhancement
* fix: always show checkpoint restore options regardless of change detection (RooCodeInc#8758)
* feat: add token-budget based file reading with intelligent preview (RooCodeInc#8789)
* Remove a very verbose error for cloud agents (RooCodeInc#8795)
* fix: retry API requests on stream failures instead of aborting the task (RooCodeInc#8794)
* feat: improve auto-approve button responsiveness (RooCodeInc#8798)
  * Add a responsive breakpoint at 300px for the compact view
  * Icon correctly reflects state (X when off, ✓ when on) at all screen sizes
  * Show abbreviated labels on very narrow screens (< 300px)
  * Add the triggerLabelOffShort translation key to all locales
  * Fixes issues from PR RooCodeInc#8152: the icon always showed a checkmark on narrow screens, the breakpoint activated too early (was 400px), and Tailwind class ordering was incorrect
* Add checkpoint initialization timeout settings and fix checkpoint timeout warnings (RooCodeInc#8019)
* feat: add dynamic model loading for Roo Code Cloud provider (RooCodeInc#8728)
* refactor(tools): move imageHelpers to the tools directory and update imports
* Improve checkpoint menu translations for PR RooCodeInc#7841 (RooCodeInc#8796)
* Handle Roo provider pricing correctly (RooCodeInc#8802)
* fix: preserve trailing newlines in stripLineNumbers for apply_diff (RooCodeInc#8227)
* Fix checkpoints test (RooCodeInc#8803)
* Chore: update magistral-medium-latest in mistral.ts (RooCodeInc#8364)
* fix: respect nested .gitignore files in search_files (RooCodeInc#8804)
* fix(export): exclude the max tokens field for models that don't support it (RooCodeInc#8464)
* chore: add changeset for v3.29.0 (RooCodeInc#8806)
* Changeset version bump (RooCodeInc#8807)
* fix: adjust GLM-4.6-turbo max output tokens to prevent context limit errors (RooCodeInc#8822)
* fix: change Add to Context keybinding to avoid Redo conflict (RooCodeInc#8653)
* feat: add Google Ads conversion tracking to reviewer page (RooCodeInc#8831)
  * fix: add asChild prop to the first button to prevent invalid HTML nesting
* Fix provider model loading race conditions (RooCodeInc#8836)
* Release v3.29.1 (RooCodeInc#8854)
* Changeset version bump (RooCodeInc#8855)
  * Update CHANGELOG for the 3.29.1 release: updated the version number and added release notes
* Merge remote-tracking branch 'upstream/main' into roo-to-main
* Fix caching logic in Roo provider (RooCodeInc#8860)
* fix: remove specific Claude model version from settings descriptions (RooCodeInc#8437)
* feat: add LongCat-Flash-Thinking-FP8 models to Chutes AI provider (RooCodeInc#8426)
* Make sure not to show prices for free models (RooCodeInc#8864)
* chore: add changeset for v3.29.2 (RooCodeInc#8865)
* Changeset version bump (RooCodeInc#8866)
* fix: resolve checkpoint menu popover overflow (RooCodeInc#8867)
* fix: process queued messages after context condensing completes (RooCodeInc#8478)
* fix: use max_output_tokens when available in LiteLLM fetcher (RooCodeInc#8455)
* Use monotonic clock for rate limiting (RooCodeInc#8456)
* Fix LiteLLM test failures after merge (RooCodeInc#8870)
  * Remove supportsComputerUse from the LiteLLM implementation, as it's no longer part of the ModelInfo interface
  * Update test expectations to include the cacheWritesPrice and cacheReadsPrice fields
  * Fix the test for max_output_tokens preference functionality
* feat: add settings to configure time and cost in system prompt (RooCodeInc#8451)
* Enable reasoning in Roo provider (RooCodeInc#8874)
* feat: add supportsReasoning property for Z.ai GLM binary thinking mode (RooCodeInc#8872)
  * Add supportsReasoning to the ModelInfo schema for binary reasoning models; set GLM-4.5 and GLM-4.6 to supportsReasoning: true; implement thinking parameter support in ZAiHandler for the Deep Thinking API; show a simple toggle in the ThinkingBudget component for these models; add tests. Closes RooCodeInc#8465
  * refactor: rename supportsReasoning to supportsReasoningBinary across the schema, Z.AI provider logic (createStream and completePrompt), the ThinkingBudget UI component, and tests. This clarifies the distinction between supportsReasoningBinary (a simple on/off reasoning toggle), supportsReasoningBudget (reasoning with token budget controls), and supportsReasoningEffort (reasoning with effort levels)
* feat: update Gemini models with the latest 09-2025 versions (RooCodeInc#8486)
  * Add gemini-flash-latest, gemini-flash-lite-latest, gemini-2.5-flash-preview-09-2025, and gemini-2.5-flash-lite-preview-09-2025; reorganize the model list with the most recent versions at the top; keep all existing models for backward compatibility. Fixes RooCodeInc#8485
  * fix: restore the missing maxThinkingTokens and supportsReasoningBudget for gemini-2.5-pro-preview-03-25. These properties were accidentally removed during reorganization and are required to preserve reasoning-budget controls for users pinned to this model version
* Focus textbox and add newlines after adding to context (RooCodeInc#8877)
* chore: add changeset for v3.29.3 (RooCodeInc#8878)
* Add a "how it works" section to the reviewer landing page (RooCodeInc#8884)
* Add exponential backoff for mid-stream retry failures (RooCodeInc#8888)
  * Extend StackItem with a retryAttempt counter; extract a shared backoffAndAnnounce helper for consistent retry UX; apply exponential backoff to mid-stream failures when auto-approval is enabled; add a debug throw for testing the mid-stream retry path
  * Add an abort check in the retry countdown loop, allowing early exit from the exponential backoff if the task is cancelled during the delay
* Changeset version bump (RooCodeInc#8879)
* fix: auto-sync enableReasoningEffort with the reasoning dropdown selection (RooCodeInc#8890)
* Prevent a noisy cloud agent exception (RooCodeInc#8577)
* fix(modes): custom modes under a custom path not showing (RooCodeInc#8499)
* docs(vscode-lm): clarify VS Code LM API integration warning (RooCodeInc#8493)
  * Clarify the integration blurb, provider wording, and error guidance; update the settings UI warning and revert the package description change; unify and translate vscodeLmWarning across all locales
* fix: prevent MCP server restart when toggling tool permissions (RooCodeInc#8633)
  * Add an isProgrammaticUpdate flag to distinguish programmatic config updates from user-initiated file changes, and skip file watcher processing during programmatic updates to prevent unnecessary server restarts
  * fix(mcp): prevent server reconnection when toggling the disabled state. MCP servers would reconnect instead of staying disabled because toggleServerDisabled() used a stale in-memory config instead of reading the fresh config from disk after writing the disabled flag. Added a readServerConfigFromFile() helper and updated both the disable and enable paths to read the fresh config before calling connectToServer(), so the disabled: true flag is properly read and connectToServer() creates a disabled placeholder connection instead of actually connecting the server
  * refactor(mcp): use safeWriteJson for atomic config writes, replacing JSON.stringify + fs.writeFile in McpHub.ts to prevent data corruption through atomic writes with file locking
  * fix(mcp): prevent a race condition in the isProgrammaticUpdate flag by replacing multiple independent reset timers with a single timer that is cleared and rescheduled on each programmatic config update, so the flag is not reset prematurely when multiple rapid updates occur during the file watcher's debounce period
  * fix(mcp): ensure isProgrammaticUpdate cleanup with try-finally around the safeWriteJson() calls, guaranteeing the flag is reset even if the write fails; otherwise it could be stuck at true indefinitely and subsequent user-initiated config changes would be silently ignored
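The retry entries above describe exponential backoff with an abort check inside the countdown. A sketch of that pattern, assuming a simple Task shape (the names backoffAndAnnounce and retryAttempt come from the commit message; the rest is illustrative):

```typescript
// Sketch of exponential backoff with a cancellation check in the countdown.
interface Task {
	abort: boolean
}

const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms))

// Delay doubles per attempt and is capped: 1s, 2s, 4s, ... up to capMs.
function backoffDelay(retryAttempt: number, baseMs = 1000, capMs = 30000): number {
	return Math.min(baseMs * 2 ** retryAttempt, capMs)
}

// Count down in 1s ticks, checking task.abort on every tick so a cancelled
// task exits the backoff immediately instead of waiting out the full delay.
// Returns false when the task was aborted during the countdown.
async function backoffAndAnnounce(task: Task, retryAttempt: number): Promise<boolean> {
	const totalMs = backoffDelay(retryAttempt)
	for (let waited = 0; waited < totalMs; waited += 1000) {
		if (task.abort) return false // cancelled mid-countdown
		await sleep(Math.min(1000, totalMs - waited))
	}
	return !task.abort
}
```

A caller would re-check the abort flag after awaiting backoffAndAnnounce and before issuing the next request; without that check a cancellation that lands during the delay would not break the retry loop.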
* docs(readme): update readme images and image compression * docs: replace inline base64 images with image file references * Merge remote-tracking branch 'upstream/main' into roo-to-main * feat(terminal): refactor execa command execution for shell handling * Feat: Add Minimax Provider (fixes RooCodeInc#8818) (RooCodeInc#8820) Co-authored-by: xiaose <[email protected]> Co-authored-by: Matt Rubens <[email protected]> * Release v3.29.4 (RooCodeInc#8906) * fix: Gate auth-driven Roo model refresh to active provider only (RooCodeInc#8915) * feat: add zai-glm-4.6 model to Cerebras and set gpt-oss-120b as default (RooCodeInc#8920) * feat: add zai-glm-4.6 model and update gpt-oss-120b for Cerebras - Add zai-glm-4.6 with 128K context window and 40K max tokens - Set zai-glm-4.6 as default Cerebras model - Update gpt-oss-120b to 128K context and 40K max tokens * feat: add zai-glm-4.6 model to Cerebras provider - Add zai-glm-4.6 with 128K context window and 40K max tokens - Set zai-glm-4.6 as default Cerebras model - Model provides ~2000 tokens/s for general-purpose tasks * add [SOON TO BE DEPRECATED] warning for Q3C * chore: set gpt-oss-120b as default Cerebras model * Fix cerebras test: update expected default model to gpt-oss-120b * Apply suggestion from @mrubens Co-authored-by: Matt Rubens <[email protected]> --------- Co-authored-by: kevint-cerebras <[email protected]> Co-authored-by: Matt Rubens <[email protected]> * roo provider: update session token on every request (RooCodeInc#8923) * roo provider: update session token on every request * Cleanup: remove unused imports * Also refresh token before completePrompt() * feat: add We're hiring link to announcement modal (RooCodeInc#8931) Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com> Co-authored-by: Roo Code <[email protected]> Co-authored-by: Matt Rubens <[email protected]> * Fix: Enhanced codebase index recovery and reuse ('Start Indexing' button now reuses existing Qdrant index) 
(RooCodeInc#8588) Co-authored-by: daniel-lxs <[email protected]> * fix: make code index initialization non-blocking at activation (RooCodeInc#8933) * fix(context): truncate type definition to match max read line (RooCodeInc#8509) Co-authored-by: daniel-lxs <[email protected]> * fix: prevent infinite loop when canceling during auto-retry (RooCodeInc#8902) * fix: prevent infinite loop when canceling during auto-retry - Add abort check after backoffAndAnnounce in first-chunk retry logic - Add abort check after backoffAndAnnounce in mid-stream retry logic - Properly handle task abortion to break retry loops Fixes RooCodeInc#8901 * docs: add critical comments explaining abort checks - Document the importance of abort checks after backoff - Explain how these checks prevent infinite loops - Add context for future maintainability --------- Co-authored-by: Roo Code <[email protected]> * feat: rename MCP Errors tab to Logs for mixed-level messages (RooCodeInc#8894) - Update McpView.tsx to use "logs" tab ID instead of "errors" - Rename translation key from tabs.errors to tabs.logs in all locales - Change empty state message from "No errors found" to "No logs yet" This better reflects that the tab shows all server messages (info, warnings, errors), not just errors. Fixes RooCodeInc#8893 Co-authored-by: Roo Code <[email protected]> * feat: improve @ file search for large projects (RooCodeInc#8805) * feat: improve @ file search for large projects - Increase default file limit from 5,000 to 10,000 (configurable up to 500,000) - Respect VSCode search settings (useIgnoreFiles, useGlobalIgnoreFiles, useParentIgnoreFiles) - Add 'maximumIndexedFilesForFileSearch' configuration setting - Add tests for new functionality Conservative default of 10k keeps memory usage low while still providing 2x improvement. Users with large projects can opt-in to higher limits (up to 500k). 
This is a simplified alternative to PR RooCodeInc#5723 that solves the same problem without the complexity of caching. Ripgrep is already fast enough for 10k+ files, and the benefit of caching doesn't justify 2,200+ lines of additional code and maintenance burden. Fixes RooCodeInc#5721 * fix: add missing translations for maximumIndexedFilesForFileSearch setting * test: improve file-search tests to verify configuration behavior * fix: remove search_and_replace tool from codebase (RooCodeInc#8892) Co-authored-by: Roo Code <[email protected]> Co-authored-by: daniel-lxs <[email protected]> Co-authored-by: Hannes Rudolph <[email protected]> * Release v3.29.5 (RooCodeInc#8942) * Changeset version bump (RooCodeInc#8907) Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: Matt Rubens <[email protected]> * Fix cost and token tracking between provider styles (RooCodeInc#8954) * web: Attempt at compliant, cookie-less anonymous tracking for the website (RooCodeInc#8957) * Merge remote-tracking branch 'upstream/main' into roo-to-main * fix: add keyword index for type field to fix Qdrant codebase_search error (RooCodeInc#8964) Co-authored-by: Roo Code <[email protected]> * chore: add changeset for v3.29.5 (RooCodeInc#8967) * Changeset version bump (RooCodeInc#8968) Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: Matt Rubens <[email protected]> * Capture the reasoning content in base-openai-compatible for GLM 4.6 (RooCodeInc#8976) * feat: optimize router model fetching with single-provider filtering (RooCodeInc#8956) * fix: prevent message loss during queue drain race condition (RooCodeInc#8955) * fix: create new Requesty profile during OAuth (RooCodeInc#8699) Co-authored-by: John Costa <[email protected]> * feat: convert Chutes to dynamic/router provider (RooCodeInc#8980) * feat: convert Chutes to dynamic/router provider - Add chutes to dynamicProviders array in 
provider-settings - Add chutes entry to dynamicProviderExtras in api.ts - Create fetcher function for Chutes models API - Convert ChutesHandler to extend RouterProvider - Update tests to work with dynamic provider setup - Export chutesDefaultModelInfo for RouterProvider constructor * fix: address security and code quality issues from review - Fix potential API key leakage in error logging - Add temperature support check before setting temperature - Improve code consistency with RouterProvider patterns * fix: add chutes to routerModels initialization - Fix TypeScript error in webviewMessageHandler - Ensure chutes is included in RouterName Record type * Fixes * Support reasoning * Fix tests * Remove reasoning checkbox --------- Co-authored-by: Roo Code <[email protected]> Co-authored-by: Matt Rubens <[email protected]> * feat: add OpenRouter embedding provider support (RooCodeInc#8973) * feat: add OpenRouter embedding provider support Implement comprehensive OpenRouter embedding provider support for codebase indexing with the following features: - New OpenRouterEmbedder class with full API compatibility - Support for OpenRouter's OpenAI-compatible embedding endpoint - Rate limiting and retry logic with exponential backoff - Base64 embedding handling to bypass OpenAI package limitations - Global rate limit state management across embedder instances - Configuration updates for API key storage and provider selection - UI integration for OpenRouter provider settings - Comprehensive test suite with mocking - Model dimension support for OpenRouter's embedding models This adds OpenRouter as the 7th supported embedding provider alongside OpenAI, Ollama, OpenAI-compatible, Gemini, Mistral, and Vercel AI Gateway. 
* Add translation key

* Fix mutex double release bug

* Add translations

* Add more translations

* Fix failing tests

* code-index(openrouter): fix HTTP-Referer header to RooCodeInc/Roo-Code; i18n: add and wire OpenRouter Code Index strings; test: assert default headers in embedder

---------

Co-authored-by: daniel-lxs <[email protected]>

* feat: add GLM-4.6 model to Fireworks provider (RooCodeInc#8754)

Co-authored-by: Roo Code <[email protected]>

* feat: add MiniMax M2 model to Fireworks.ai provider (RooCodeInc#8962)

Co-authored-by: Roo Code <[email protected]>

* Union a hard-coded list of chutes models with the dynamic list (RooCodeInc#8988)

* Handle <think> tags in the base OpenAI-compatible provider (RooCodeInc#8989)

* Don't output newline-only reasoning (RooCodeInc#8990)

Co-authored-by: Roo Code <[email protected]>

* feat: implement Google Consent Mode v2 with cookieless pings (RooCodeInc#8987)

* feat: implement Google Consent Mode v2 with cookieless pings

- Add consent defaults before gtag.js loads (required for Consent Mode v2)
- Enable cookieless pings with url_passthrough for Google Ads
- Implement consent update logic for all consent categories
- Support both granted and denied consent states
- Maintain backward compatibility with existing consent manager

* fix: remove shouldLoad from useEffect dependency array to prevent re-initialization loop

---------

Co-authored-by: Roo Code <[email protected]>

---------

Co-authored-by: Bruno Bergher <[email protected]>
Co-authored-by: Matt Rubens <[email protected]>
Co-authored-by: roomote[bot] <219738659+roomote[bot]@users.noreply.github.com>
Co-authored-by: Roo Code <[email protected]>
Co-authored-by: Hannes Rudolph <[email protected]>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: daniel-lxs <[email protected]>
Co-authored-by: Daniel <[email protected]>
Co-authored-by: SannidhyaSah <[email protected]>
Co-authored-by: John Richmond <[email protected]>
Co-authored-by: Mohammad Danaee nia <[email protected]>
Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
Co-authored-by: MuriloFP <[email protected]>
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Chris Estreich <[email protected]>
Co-authored-by: Colby Serpa <[email protected]>
Co-authored-by: Peter Dave Hello <[email protected]>
Co-authored-by: Chris Hasson <[email protected]>
Co-authored-by: Christiaan Arnoldus <[email protected]>
Co-authored-by: laz-001 <[email protected]>
Co-authored-by: NaccOll <[email protected]>
Co-authored-by: Drake Thomsen <[email protected]>
Co-authored-by: Dicha Zelianivan Arkana <[email protected]>
Co-authored-by: Seth Miller <[email protected]>
Co-authored-by: Maosghoul <[email protected]>
Co-authored-by: xiaose <[email protected]>
Co-authored-by: kevint-cerebras <[email protected]>
Co-authored-by: Thibault Jaigu <[email protected]>
Co-authored-by: John Costa <[email protected]>
Co-authored-by: David Markey <[email protected]>
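The "Handle &lt;think&gt; tags" and "Don't output newline-only reasoning" changes listed above can be sketched together as a small post-processing helper. This is illustrative only, not the provider's actual streaming code; `splitThinkTags` is a hypothetical name:

```typescript
// Split a model response into visible text and reasoning captured from
// <think>...</think> blocks. Reasoning that is only whitespace/newlines
// is suppressed (returned as null) rather than surfaced to the user.
function splitThinkTags(raw: string): { text: string; reasoning: string | null } {
	// Collect everything inside <think>...</think> as reasoning content.
	const reasoning = [...raw.matchAll(/<think>([\s\S]*?)<\/think>/g)]
		.map((m) => m[1])
		.join("")
	// The visible text is whatever remains once the blocks are removed.
	const text = raw.replace(/<think>[\s\S]*?<\/think>/g, "")
	// Don't output newline-only reasoning.
	return { text, reasoning: reasoning.trim().length > 0 ? reasoning : null }
}
```

A streaming implementation would need to buffer across chunk boundaries (a tag can be split between chunks), but the filtering rule is the same.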
1 parent e6c9c20 commit 9ef8c3c


41 files changed (+1727, -517 lines)

apps/web-roo-code/src/components/providers/google-analytics-provider.tsx

Lines changed: 87 additions & 26 deletions
@@ -8,72 +8,133 @@ import { hasConsent, onConsentChange } from "@/lib/analytics/consent-manager"
 const GTM_ID = "AW-17391954825"
 
 /**
- * Google Analytics Provider
- * Only loads Google Tag Manager after user gives consent
+ * Google Analytics Provider with Consent Mode v2
+ * Implements cookieless pings and advanced consent management
  */
 export function GoogleAnalyticsProvider({ children }: { children: React.ReactNode }) {
 	const [shouldLoad, setShouldLoad] = useState(false)
 
 	useEffect(() => {
+		// Initialize consent defaults BEFORE loading gtag.js (required for Consent Mode v2)
+		initializeConsentDefaults()
+
 		// Check initial consent status
 		if (hasConsent()) {
 			setShouldLoad(true)
-			initializeGoogleAnalytics()
+			updateConsentGranted()
 		}
 
 		// Listen for consent changes
 		const unsubscribe = onConsentChange((consented) => {
-			if (consented && !shouldLoad) {
-				setShouldLoad(true)
-				initializeGoogleAnalytics()
+			if (consented) {
+				if (!shouldLoad) {
+					setShouldLoad(true)
+				}
+				updateConsentGranted()
+			} else {
+				updateConsentDenied()
 			}
 		})
 
 		return unsubscribe
-	}, [shouldLoad])
+		// eslint-disable-next-line react-hooks/exhaustive-deps -- shouldLoad intentionally omitted to prevent re-initialization loop
+	}, [])
 
-	const initializeGoogleAnalytics = () => {
-		// Initialize the dataLayer and gtag function
+	const initializeConsentDefaults = () => {
+		// Set up consent defaults before gtag loads (Consent Mode v2 requirement)
 		if (typeof window !== "undefined") {
 			window.dataLayer = window.dataLayer || []
 			window.gtag = function (...args: GtagArgs) {
 				window.dataLayer.push(args)
 			}
-			window.gtag("js", new Date())
-			window.gtag("config", GTM_ID)
+
+			// Set default consent state to 'denied' with cookieless pings enabled
+			window.gtag("consent", "default", {
+				ad_storage: "denied",
+				ad_user_data: "denied",
+				ad_personalization: "denied",
+				analytics_storage: "denied",
+				functionality_storage: "denied",
+				personalization_storage: "denied",
+				security_storage: "granted", // Always granted for security
+				wait_for_update: 500, // Wait 500ms for consent before sending data
+			})
+
+			// Enable cookieless pings for Google Ads
+			window.gtag("set", "url_passthrough", true)
 		}
 	}
 
-	// Only render Google Analytics scripts if consent is given
-	if (!shouldLoad) {
-		return <>{children}</>
+	const updateConsentGranted = () => {
+		// User accepted cookies - update consent to granted
+		if (typeof window !== "undefined" && window.gtag) {
+			window.gtag("consent", "update", {
+				ad_storage: "granted",
+				ad_user_data: "granted",
+				ad_personalization: "granted",
+				analytics_storage: "granted",
+				functionality_storage: "granted",
+				personalization_storage: "granted",
+			})
+		}
 	}
 
+	const updateConsentDenied = () => {
+		// User declined cookies - keep consent denied (cookieless pings still work)
+		if (typeof window !== "undefined" && window.gtag) {
+			window.gtag("consent", "update", {
+				ad_storage: "denied",
+				ad_user_data: "denied",
+				ad_personalization: "denied",
+				analytics_storage: "denied",
+				functionality_storage: "denied",
+				personalization_storage: "denied",
+			})
+		}
+	}
+
+	// Always render scripts (Consent Mode v2 needs gtag loaded even without consent)
+	// Cookieless pings will work with denied consent
+
 	return (
 		<>
-			{/* Google tag (gtag.js) - Only loads after consent */}
+			{/* Google tag (gtag.js) - Loads immediately for Consent Mode v2 */}
 			<Script
 				src={`https://www.googletagmanager.com/gtag/js?id=${GTM_ID}`}
 				strategy="afterInteractive"
 				onLoad={() => {
-					console.log("Google Analytics loaded with consent")
+					// Initialize gtag config after script loads
+					if (typeof window !== "undefined" && window.gtag) {
+						window.gtag("js", new Date())
+						window.gtag("config", GTM_ID)
+					}
 				}}
 			/>
-			<Script id="google-analytics-init" strategy="afterInteractive">
-				{`
-					window.dataLayer = window.dataLayer || [];
-					function gtag(){dataLayer.push(arguments);}
-					gtag('js', new Date());
-					gtag('config', '${GTM_ID}');
-				`}
-			</Script>
 			{children}
 		</>
 	)
 }
 
-// Type definitions for Google Analytics
-type GtagArgs = ["js", Date] | ["config", string, GtagConfig?] | ["event", string, GtagEventParameters?]
+// Type definitions for Google Analytics with Consent Mode v2
+type ConsentState = "granted" | "denied"
+
+interface ConsentParams {
+	ad_storage?: ConsentState
+	ad_user_data?: ConsentState
+	ad_personalization?: ConsentState
+	analytics_storage?: ConsentState
+	functionality_storage?: ConsentState
+	personalization_storage?: ConsentState
+	security_storage?: ConsentState
+	wait_for_update?: number
+}
+
+type GtagArgs =
+	| ["js", Date]
+	| ["config", string, GtagConfig?]
+	| ["event", string, GtagEventParameters?]
+	| ["consent", "default" | "update", ConsentParams]
+	| ["set", string, unknown]
 
 interface GtagConfig {
 	[key: string]: unknown

packages/types/src/codebase-index.ts

Lines changed: 3 additions & 1 deletion
@@ -22,7 +22,7 @@ export const codebaseIndexConfigSchema = z.object({
 	codebaseIndexEnabled: z.boolean().optional(),
 	codebaseIndexQdrantUrl: z.string().optional(),
 	codebaseIndexEmbedderProvider: z
-		.enum(["openai", "ollama", "openai-compatible", "gemini", "mistral", "vercel-ai-gateway"])
+		.enum(["openai", "ollama", "openai-compatible", "gemini", "mistral", "vercel-ai-gateway", "openrouter"])
 		.optional(),
 	codebaseIndexEmbedderBaseUrl: z.string().optional(),
 	codebaseIndexEmbedderModelId: z.string().optional(),
@@ -51,6 +51,7 @@ export const codebaseIndexModelsSchema = z.object({
 	gemini: z.record(z.string(), z.object({ dimension: z.number() })).optional(),
 	mistral: z.record(z.string(), z.object({ dimension: z.number() })).optional(),
 	"vercel-ai-gateway": z.record(z.string(), z.object({ dimension: z.number() })).optional(),
+	openrouter: z.record(z.string(), z.object({ dimension: z.number() })).optional(),
 })
 
 export type CodebaseIndexModels = z.infer<typeof codebaseIndexModelsSchema>
@@ -68,6 +69,7 @@ export const codebaseIndexProviderSchema = z.object({
 	codebaseIndexGeminiApiKey: z.string().optional(),
 	codebaseIndexMistralApiKey: z.string().optional(),
 	codebaseIndexVercelAiGatewayApiKey: z.string().optional(),
+	codebaseIndexOpenRouterApiKey: z.string().optional(),
 })
 
 export type CodebaseIndexProvider = z.infer<typeof codebaseIndexProviderSchema>

packages/types/src/global-settings.ts

Lines changed: 1 addition & 0 deletions
@@ -245,6 +245,7 @@ export const SECRET_STATE_KEYS = [
 	"codebaseIndexGeminiApiKey",
 	"codebaseIndexMistralApiKey",
 	"codebaseIndexVercelAiGatewayApiKey",
+	"codebaseIndexOpenRouterApiKey",
 	"huggingFaceApiKey",
 	"sambaNovaApiKey",
 	"zaiApiKey",

packages/types/src/provider-settings.ts

Lines changed: 2 additions & 7 deletions
@@ -6,7 +6,6 @@ import {
 	anthropicModels,
 	bedrockModels,
 	cerebrasModels,
-	chutesModels,
 	claudeCodeModels,
 	deepSeekModels,
 	doubaoModels,
@@ -51,6 +50,7 @@ export const dynamicProviders = [
 	"unbound",
 	"glama",
 	"roo",
+	"chutes",
 ] as const
 
 export type DynamicProvider = (typeof dynamicProviders)[number]
@@ -122,7 +122,6 @@ export const providerNames = [
 	"anthropic",
 	"bedrock",
 	"cerebras",
-	"chutes",
 	"claude-code",
 	"doubao",
 	"deepseek",
@@ -689,11 +688,6 @@ export const MODELS_BY_PROVIDER: Record<
 		label: "Cerebras",
 		models: Object.keys(cerebrasModels),
 	},
-	chutes: {
-		id: "chutes",
-		label: "Chutes AI",
-		models: Object.keys(chutesModels),
-	},
 	"claude-code": { id: "claude-code", label: "Claude Code", models: Object.keys(claudeCodeModels) },
 	deepseek: {
 		id: "deepseek",
@@ -771,6 +765,7 @@ export const MODELS_BY_PROVIDER: Record<
 	unbound: { id: "unbound", label: "Unbound", models: [] },
 	deepinfra: { id: "deepinfra", label: "DeepInfra", models: [] },
 	"vercel-ai-gateway": { id: "vercel-ai-gateway", label: "Vercel AI Gateway", models: [] },
+	chutes: { id: "chutes", label: "Chutes AI", models: [] },
 
 	// Local providers; models discovered from localhost endpoints.
 	lmstudio: { id: "lmstudio", label: "LM Studio", models: [] },
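With chutes moved to the dynamic providers (empty `models: []` above), the "union a hard-coded list of chutes models with the dynamic list" change from the commit message can be sketched like this. The names and the merge direction are illustrative assumptions; the real code merges `chutesModels` with models fetched from the Chutes API:

```typescript
// Minimal stand-in for a model-info record (the real ModelInfo has more fields).
interface ModelInfoLite {
	contextWindow: number
	supportsImages: boolean
}

// Hard-coded fallback models (stand-ins for entries in chutesModels).
const staticModels: Record<string, ModelInfoLite> = {
	"example/static-model": { contextWindow: 128000, supportsImages: false },
}

// Union the static list with dynamically fetched models. Here dynamic entries
// win on key conflicts (an assumption: fresh API metadata overrides stale
// hard-coded info); the static list guarantees a non-empty result if the
// fetch returns nothing.
function unionModels(dynamic: Record<string, ModelInfoLite>): Record<string, ModelInfoLite> {
	return { ...staticModels, ...dynamic }
}
```

Spread order determines the conflict winner: putting `dynamic` last makes it override `staticModels` for duplicate model IDs.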

packages/types/src/providers/chutes.ts

Lines changed: 2 additions & 0 deletions
@@ -417,3 +417,5 @@ export const chutesModels = {
 			"Qwen3‑VL‑235B‑A22B‑Thinking is an open‑weight MoE vision‑language model (235B total, ~22B activated) optimized for deliberate multi‑step reasoning with strong text‑image‑video understanding and long‑context capabilities.",
 	},
 } as const satisfies Record<string, ModelInfo>
+
+export const chutesDefaultModelInfo: ModelInfo = chutesModels[chutesDefaultModelId]

packages/types/src/providers/fireworks.ts

Lines changed: 22 additions & 0 deletions
@@ -3,13 +3,15 @@ import type { ModelInfo } from "../model.js"
 export type FireworksModelId =
 	| "accounts/fireworks/models/kimi-k2-instruct"
 	| "accounts/fireworks/models/kimi-k2-instruct-0905"
+	| "accounts/fireworks/models/minimax-m2"
 	| "accounts/fireworks/models/qwen3-235b-a22b-instruct-2507"
 	| "accounts/fireworks/models/qwen3-coder-480b-a35b-instruct"
 	| "accounts/fireworks/models/deepseek-r1-0528"
 	| "accounts/fireworks/models/deepseek-v3"
 	| "accounts/fireworks/models/deepseek-v3p1"
 	| "accounts/fireworks/models/glm-4p5"
 	| "accounts/fireworks/models/glm-4p5-air"
+	| "accounts/fireworks/models/glm-4p6"
 	| "accounts/fireworks/models/gpt-oss-20b"
 	| "accounts/fireworks/models/gpt-oss-120b"
 
@@ -37,6 +39,16 @@ export const fireworksModels = {
 		description:
 			"Kimi K2 is a state-of-the-art mixture-of-experts (MoE) language model with 32 billion activated parameters and 1 trillion total parameters. Trained with the Muon optimizer, Kimi K2 achieves exceptional performance across frontier knowledge, reasoning, and coding tasks while being meticulously optimized for agentic capabilities.",
 	},
+	"accounts/fireworks/models/minimax-m2": {
+		maxTokens: 4096,
+		contextWindow: 204800,
+		supportsImages: false,
+		supportsPromptCache: false,
+		inputPrice: 0.3,
+		outputPrice: 1.2,
+		description:
+			"MiniMax M2 is a high-performance language model with 204.8K context window, optimized for long-context understanding and generation tasks.",
+	},
 	"accounts/fireworks/models/qwen3-235b-a22b-instruct-2507": {
 		maxTokens: 32768,
 		contextWindow: 256000,
@@ -105,6 +117,16 @@ export const fireworksModels = {
 		description:
 			"Z.ai GLM-4.5-Air with 106B total parameters and 12B active parameters. Features unified reasoning, coding, and intelligent agent capabilities.",
 	},
+	"accounts/fireworks/models/glm-4p6": {
+		maxTokens: 25344,
+		contextWindow: 198000,
+		supportsImages: false,
+		supportsPromptCache: false,
+		inputPrice: 0.55,
+		outputPrice: 2.19,
+		description:
+			"Z.ai GLM-4.6 is an advanced coding model with exceptional performance on complex programming tasks. Features improved reasoning capabilities and enhanced code generation quality, making it ideal for software development workflows.",
+	},
 	"accounts/fireworks/models/gpt-oss-20b": {
 		maxTokens: 16384,
 		contextWindow: 128000,
