diff --git a/docs/cli/byok/openai-anthropic.mdx b/docs/cli/byok/openai-anthropic.mdx index 3bfcec0..d738c77 100644 --- a/docs/cli/byok/openai-anthropic.mdx +++ b/docs/cli/byok/openai-anthropic.mdx @@ -22,8 +22,8 @@ Add to `~/.factory/settings.json`: "maxOutputTokens": 8192 }, { - "model": "gpt-5-codex", - "displayName": "GPT5-Codex [Custom]", + "model": "gpt-5.2-codex", + "displayName": "GPT-5.2-Codex [Custom]", "baseUrl": "https://api.openai.com/v1", "apiKey": "YOUR_OPENAI_KEY", "provider": "openai", diff --git a/docs/cli/byok/overview.mdx b/docs/cli/byok/overview.mdx index f372caf..7ddd5b8 100644 --- a/docs/cli/byok/overview.mdx +++ b/docs/cli/byok/overview.mdx @@ -44,7 +44,7 @@ Add custom models to `~/.factory/settings.json` under the `customModels` array: | Field | Type | Required | Description | |-------|------|----------|-------------| -| `model` | `string` | ✓ | Model identifier sent via API (e.g., `claude-sonnet-4-5-20250929`, `gpt-5-codex`, `qwen3:4b`) | +| `model` | `string` | ✓ | Model identifier sent via API (e.g., `claude-sonnet-4-5-20250929`, `gpt-5.2-codex`, `qwen3:4b`) | | `displayName` | `string` | | Human-friendly name shown in model selector | | `baseUrl` | `string` | ✓ | API endpoint base URL | | `apiKey` | `string` | ✓ | Your API key for the provider. Can't be empty. 
| diff --git a/docs/cli/configuration/byok.mdx b/docs/cli/configuration/byok.mdx index 3a47242..5aacce2 100644 --- a/docs/cli/configuration/byok.mdx +++ b/docs/cli/configuration/byok.mdx @@ -43,7 +43,7 @@ Add custom models to `~/.factory/settings.json` under the `customModels` array: | Field | Required | Description | |-------|----------|-------------| -| `model` | ✓ | Model identifier sent via API (e.g., `claude-sonnet-4-5-20250929`, `gpt-5-codex`, `qwen3:4b`) | +| `model` | ✓ | Model identifier sent via API (e.g., `claude-sonnet-4-5-20250929`, `gpt-5.2-codex`, `qwen3:4b`) | | `displayName` | | Human-friendly name shown in model selector | | `baseUrl` | ✓ | API endpoint base URL | | `apiKey` | ✓ | Your API key for the provider. Can't be empty. | @@ -104,8 +104,8 @@ Use your own API keys for cost control and billing transparency: "provider": "anthropic" }, { - "model": "gpt-5-codex", - "displayName": "GPT5-Codex [Custom]", + "model": "gpt-5.2-codex", + "displayName": "GPT-5.2-Codex [Custom]", "baseUrl": "https://api.openai.com/v1", "apiKey": "YOUR_OPENAI_KEY", "provider": "openai" diff --git a/docs/cli/configuration/custom-droids.mdx b/docs/cli/configuration/custom-droids.mdx index 4a720d5..834facf 100644 --- a/docs/cli/configuration/custom-droids.mdx +++ b/docs/cli/configuration/custom-droids.mdx @@ -220,7 +220,7 @@ Personal (~/.claude/agents/): ``` Custom Droids -> code-reviewer (gpt-5-codex) +> code-reviewer (gpt-5.2-codex) This droid verifies the correct base branch and committed... 
Location: Project • Tools: All tools diff --git a/docs/cli/configuration/settings.mdx b/docs/cli/configuration/settings.mdx index 93e5b97..6ffc73d 100644 --- a/docs/cli/configuration/settings.mdx +++ b/docs/cli/configuration/settings.mdx @@ -27,7 +27,7 @@ If the file doesn't exist, it's created with defaults the first time you run **d | Setting | Options | Default | Description | | ------- | ------- | ------- | ----------- | -| `model` | `sonnet`, `opus`, `GPT-5`, `gpt-5-codex`, `gpt-5-codex-max`, `haiku`, `droid-core`, `custom-model` | `opus` | The default AI model used by droid | +| `model` | `sonnet`, `opus`, `gpt-5.2`, `gpt-5.2-codex`, `gpt-5.1-codex-max`, `haiku`, `gemini-3-pro`, `custom-model` | `opus` | The default AI model used by droid | | `reasoningEffort` | `off`, `none`, `low`, `medium`, `high` (availability depends on the model) | Model-dependent default | Controls how much structured thinking the model performs. | | `autonomyLevel` | `normal`, `spec`, `auto-low`, `auto-medium`, `auto-high` | `normal` | Sets the default autonomy mode when starting droid. | | `cloudSessionSync` | `true`, `false` | `true` | Mirror CLI sessions to Factory web. 
| @@ -56,13 +56,11 @@ Choose the default AI model that powers your droid: - **`opus`** - Claude Opus 4.5 (current default) - **`sonnet`** - Claude Sonnet 4.5, balanced cost and quality -- **`gpt-5.1`** - OpenAI GPT-5.1 -- **`gpt-5.1-codex`** - Advanced coding-focused model -- **`gpt-5.1-codex-max`** - GPT-5.1-Codex-Max, supports Extra High reasoning - **`gpt-5.2`** - OpenAI GPT-5.2 +- **`gpt-5.2-codex`** - Advanced coding-focused model +- **`gpt-5.1-codex-max`** - GPT-5.1-Codex-Max, supports Extra High reasoning - **`haiku`** - Claude Haiku 4.5, fast and cost-effective - **`gemini-3-pro`** - Gemini 3 Pro -- **`droid-core`** - GLM-4.6 open-source model - **`custom-model`** - Your own configured model via BYOK [You can also add custom models and BYOK.](/cli/configuration/byok) @@ -74,7 +72,7 @@ Choose the default AI model that powers your droid: - **`off` / `none`** – disable structured reasoning (fastest). - **`low`**, **`medium`**, **`high`** – progressively increase deliberation time for more complex reasoning. -Anthropic models default to `off`, while GPT-5 starts on `medium`. +Anthropic models default to `off`, while GPT-5.2 starts on `low`. ### Autonomy level diff --git a/docs/cli/droid-exec/overview.mdx b/docs/cli/droid-exec/overview.mdx index 48ff98e..a6de0c8 100644 --- a/docs/cli/droid-exec/overview.mdx +++ b/docs/cli/droid-exec/overview.mdx @@ -73,10 +73,9 @@ Supported models (examples): - claude-opus-4-5-20251101 (default) - claude-sonnet-4-5-20250929 - claude-haiku-4-5-20251001 -- gpt-5.1-codex -- gpt-5.1 +- gpt-5.2-codex +- gpt-5.2 - gemini-3-pro-preview -- glm-4.6 See the [model table](/pricing#pricing-table) for the full list of available models and their costs. 
@@ -362,7 +361,7 @@ List available tools for a model: ```bash droid exec --list-tools -droid exec --model gpt-5-codex --list-tools --output-format json +droid exec --model gpt-5.2-codex --list-tools --output-format json ``` Enable or disable specific tools: @@ -383,7 +382,7 @@ You can configure custom models to use with droid exec by adding them to your `~ { "customModels": [ { - "model": "gpt-5.1-codex-custom", + "model": "gpt-5.2-codex-custom", "displayName": "My Custom Model", "baseUrl": "https://api.openai.com/v1", "apiKey": "your-api-key-here", diff --git a/docs/guides/building/droid-exec-tutorial.mdx b/docs/guides/building/droid-exec-tutorial.mdx index 246dc57..1939fe5 100644 --- a/docs/guides/building/droid-exec-tutorial.mdx +++ b/docs/guides/building/droid-exec-tutorial.mdx @@ -78,8 +78,8 @@ The Factory example uses a simple pattern: spawn `droid exec` with `--output-for function runDroidExec(prompt: string, repoPath: string) { const args = ["exec", "--output-format", "debug"]; - // Optional: configure model (defaults to glm-4.6) - const model = process.env.DROID_MODEL_ID ?? "glm-4.6"; + // Optional: configure model (defaults to claude-opus-4-5-20251101) + const model = process.env.DROID_MODEL_ID ?? 
"claude-opus-4-5-20251101"; args.push("-m", model); // Optional: reasoning level (off|low|medium|high) @@ -105,13 +105,13 @@ function runDroidExec(prompt: string, repoPath: string) { - Alternative: `--output-format json` for final output only **`-m` (model)**: Choose your AI model -- `glm-4.6` - Fast, cheap (default) -- `gpt-5-codex` - Most powerful for complex code +- `claude-opus-4-5-20251101` - Default, strongest reasoning +- `gpt-5.2-codex` - Most powerful for complex code - `claude-sonnet-4-5-20250929` - Best balance of speed and capability **`-r` (reasoning)**: Control thinking depth - `off` - No reasoning, fastest -- `low` - Light reasoning (default) +- `low` - Light reasoning - `medium|high` - Deeper analysis, slower **No `--auto` flag?**: Defaults to read-only (safest) @@ -311,7 +311,7 @@ The example supports environment variables: ```bash # .env -DROID_MODEL_ID=gpt-5-codex # Default: glm-4.6 +DROID_MODEL_ID=gpt-5.2-codex # Default: claude-opus-4-5-20251101 DROID_REASONING=low # Default: low (off|low|medium|high) PORT=4000 # Default: 4000 HOST=localhost # Default: localhost @@ -376,7 +376,7 @@ fs.writeFileSync('./repos/site-content/page.md', markdown); function runWithModel(prompt: string, model: string) { return Bun.spawn([ "droid", "exec", - "-m", model, // glm-4.6, gpt-5-codex, etc. + "-m", model, // claude-opus-4-5-20251101, gpt-5.2-codex, etc. 
"--output-format", "debug", prompt ], { cwd: repoPath }); diff --git a/docs/guides/building/droid-vps-setup.mdx b/docs/guides/building/droid-vps-setup.mdx index d87099c..edab663 100644 --- a/docs/guides/building/droid-vps-setup.mdx +++ b/docs/guides/building/droid-vps-setup.mdx @@ -182,15 +182,15 @@ The real power of running droid on a VPS is `droid exec` - a headless mode that ### Basic droid exec usage ```bash -# Simple query with a fast model (GLM 4.6) -droid exec --model glm-4.6 "Tell me a joke" +# Simple query with a fast model (Claude Haiku 4.5) +droid exec --model claude-haiku-4-5-20251001 "Tell me a joke" ``` ### Advanced: System exploration ```bash # Ask droid to explore your system and find specific information -droid exec --model glm-4.6 "Explore my system and tell me where the file is that I'm serving with Nginx" +droid exec --model claude-haiku-4-5-20251001 "Explore my system and tell me where the file is that I'm serving with Nginx" ``` Droid will: @@ -251,7 +251,7 @@ ssh example droid # Or use droid exec for quick queries -droid exec --model glm-4-flash "Check system resources and uptime" +droid exec --model claude-haiku-4-5-20251001 "Check system resources and uptime" ``` ### Real-world scenarios diff --git a/docs/pricing.mdx b/docs/pricing.mdx index 7aacd58..410c2dc 100644 --- a/docs/pricing.mdx +++ b/docs/pricing.mdx @@ -25,17 +25,14 @@ Different models have different multipliers applied to calculate Standard Token | Model | Model ID | Multiplier | | ------------------------ | ---------------------------- | ---------- | -| Droid Core | `glm-4.6` | 0.25× | | Claude Haiku 4.5 | `claude-haiku-4-5-20251001` | 0.4× | -| GPT-5.1 | `gpt-5.1` | 0.5× | -| GPT-5.1-Codex | `gpt-5.1-codex` | 0.5× | | GPT-5.1-Codex-Max | `gpt-5.1-codex-max` | 0.5× | | GPT-5.2 | `gpt-5.2` | 0.7× | +| GPT-5.2-Codex | `gpt-5.2-codex` | 0.7× | | Gemini 3 Pro | `gemini-3-pro-preview` | 0.8× | -| Gemini 3 Flash | `gemini-3-flash-preview` | 0.2× | | Claude Sonnet 4.5 | 
`claude-sonnet-4-5-20250929` | 1.2× | | Claude Opus 4.5 | `claude-opus-4-5-20251101` | 2× | ## Thinking About Tokens -As a reference point, using GPT-5.1-Codex at its 0.5× multiplier alongside our typical cache ratio of 4–8× means your effective Standard Token usage goes dramatically further than raw on-demand calls. Switching to very expensive models frequently—or rotating models often enough to invalidate the cache—will lower that benefit, but most workloads see materially higher usage ceilings compared with buying capacity directly from individual model providers. Our aim is for you to run your workloads without worrying about token math; the plans are designed so common usage patterns outperform comparable direct offerings. +As a reference point, using GPT-5.2-Codex at its 0.7× multiplier alongside our typical cache ratio of 4–8× means your effective Standard Token usage goes dramatically further than raw on-demand calls. Switching to very expensive models frequently—or rotating models often enough to invalidate the cache—will lower that benefit, but most workloads see materially higher usage ceilings compared with buying capacity directly from individual model providers. Our aim is for you to run your workloads without worrying about token math; the plans are designed so common usage patterns outperform comparable direct offerings. 
diff --git a/docs/reference/cli-reference.mdx b/docs/reference/cli-reference.mdx index 516361c..2e4eaa0 100644 --- a/docs/reference/cli-reference.mdx +++ b/docs/reference/cli-reference.mdx @@ -101,14 +101,11 @@ droid exec --auto high "Run tests, commit, and push changes" | :---------------------------- | :--------------------------- | :-------------------------------- | :---------------- | | `claude-opus-4-5-20251101` | Claude Opus 4.5 (default) | Yes (Off/Low/Medium/High) | off | | `gpt-5.1-codex-max` | GPT-5.1-Codex-Max | Yes (Low/Medium/High/Extra High) | medium | -| `gpt-5.1-codex` | GPT-5.1-Codex | Yes (Low/Medium/High) | medium | -| `gpt-5.1` | GPT-5.1 | Yes (None/Low/Medium/High) | none | -| `gpt-5.2` | GPT-5.2 | Yes (Low/Medium/High) | low | +| `gpt-5.2-codex` | GPT-5.2-Codex | Yes (None/Low/Medium/High/Extra High) | medium | +| `gpt-5.2` | GPT-5.2 | Yes (Off/Low/Medium/High/Extra High) | low | | `claude-sonnet-4-5-20250929` | Claude Sonnet 4.5 | Yes (Off/Low/Medium/High) | off | | `claude-haiku-4-5-20251001` | Claude Haiku 4.5 | Yes (Off/Low/Medium/High) | off | -| `gemini-3-pro-preview` | Gemini 3 Pro | Yes (Low/High) | high | -| `gemini-3-flash-preview` | Gemini 3 Flash | Yes (Minimal/Low/Medium/High) | high | -| `glm-4.6` | Droid Core (GLM-4.6) | None only | none | +| `gemini-3-pro-preview` | Gemini 3 Pro | Yes (None/Low/Medium/High) | high | Custom models configured via [BYOK](/cli/configuration/byok) use the format: `custom:`
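Taken together, the hunks above converge every doc on the GPT-5.2 model IDs. As a sanity check of the renamed fields, a minimal `~/.factory/settings.json` consistent with the updated BYOK snippets would look like the following (assembled from the diff's own examples; the API key is a placeholder, exactly as in the docs):

```json
{
  "customModels": [
    {
      "model": "gpt-5.2-codex",
      "displayName": "GPT-5.2-Codex [Custom]",
      "baseUrl": "https://api.openai.com/v1",
      "apiKey": "YOUR_OPENAI_KEY",
      "provider": "openai",
      "maxOutputTokens": 8192
    }
  ]
}
```

Per the updated `overview.mdx` field table, `model`, `baseUrl`, and `apiKey` are required (and `apiKey` can't be empty); `displayName` is optional and only affects the model selector.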