Commit 5aa46ae

docs: update model references to latest versions
Co-authored-by: factory-droid[bot] <138933559+factory-droid[bot]@users.noreply.github.com>
1 parent 6ded2ba commit 5aa46ae

File tree: 10 files changed (+31, -40 lines changed)


docs/cli/byok/openai-anthropic.mdx

Lines changed: 2 additions & 2 deletions

@@ -22,8 +22,8 @@ Add to `~/.factory/settings.json`:
       "maxOutputTokens": 8192
     },
     {
-      "model": "gpt-5-codex",
-      "displayName": "GPT5-Codex [Custom]",
+      "model": "gpt-5.2-codex",
+      "displayName": "GPT-5.2-Codex [Custom]",
       "baseUrl": "https://api.openai.com/v1",
       "apiKey": "YOUR_OPENAI_KEY",
       "provider": "openai",

docs/cli/byok/overview.mdx

Lines changed: 1 addition & 1 deletion

@@ -44,7 +44,7 @@ Add custom models to `~/.factory/settings.json` under the `customModels` array:

 | Field | Type | Required | Description |
 |-------|------|----------|-------------|
-| `model` | `string` || Model identifier sent via API (e.g., `claude-sonnet-4-5-20250929`, `gpt-5-codex`, `qwen3:4b`) |
+| `model` | `string` || Model identifier sent via API (e.g., `claude-sonnet-4-5-20250929`, `gpt-5.2-codex`, `qwen3:4b`) |
 | `displayName` | `string` | | Human-friendly name shown in model selector |
 | `baseUrl` | `string` || API endpoint base URL |
 | `apiKey` | `string` || Your API key for the provider. Can't be empty. |

docs/cli/configuration/byok.mdx

Lines changed: 3 additions & 3 deletions

@@ -43,7 +43,7 @@ Add custom models to `~/.factory/settings.json` under the `customModels` array:

 | Field | Required | Description |
 |-------|----------|-------------|
-| `model` || Model identifier sent via API (e.g., `claude-sonnet-4-5-20250929`, `gpt-5-codex`, `qwen3:4b`) |
+| `model` || Model identifier sent via API (e.g., `claude-sonnet-4-5-20250929`, `gpt-5.2-codex`, `qwen3:4b`) |
 | `displayName` | | Human-friendly name shown in model selector |
 | `baseUrl` || API endpoint base URL |
 | `apiKey` || Your API key for the provider. Can't be empty. |

@@ -104,8 +104,8 @@ Use your own API keys for cost control and billing transparency:
       "provider": "anthropic"
     },
     {
-      "model": "gpt-5-codex",
-      "displayName": "GPT5-Codex [Custom]",
+      "model": "gpt-5.2-codex",
+      "displayName": "GPT-5.2-Codex [Custom]",
       "baseUrl": "https://api.openai.com/v1",
       "apiKey": "YOUR_OPENAI_KEY",
       "provider": "openai"

docs/cli/configuration/custom-droids.mdx

Lines changed: 1 addition & 1 deletion

@@ -220,7 +220,7 @@ Personal (~/.claude/agents/):
 ```
 Custom Droids

-> code-reviewer (gpt-5-codex)
+> code-reviewer (gpt-5.2-codex)
 This droid verifies the correct base branch and committed...
 Location: Project • Tools: All tools

docs/cli/configuration/settings.mdx

Lines changed: 4 additions & 6 deletions

@@ -27,7 +27,7 @@ If the file doesn't exist, it's created with defaults the first time you run **d

 | Setting | Options | Default | Description |
 | ------- | ------- | ------- | ----------- |
-| `model` | `sonnet`, `opus`, `GPT-5`, `gpt-5-codex`, `gpt-5-codex-max`, `haiku`, `droid-core`, `custom-model` | `opus` | The default AI model used by droid |
+| `model` | `sonnet`, `opus`, `gpt-5.2`, `gpt-5.2-codex`, `gpt-5.1-codex-max`, `haiku`, `gemini-3-pro`, `custom-model` | `opus` | The default AI model used by droid |
 | `reasoningEffort` | `off`, `none`, `low`, `medium`, `high` (availability depends on the model) | Model-dependent default | Controls how much structured thinking the model performs. |
 | `autonomyLevel` | `normal`, `spec`, `auto-low`, `auto-medium`, `auto-high` | `normal` | Sets the default autonomy mode when starting droid. |
 | `cloudSessionSync` | `true`, `false` | `true` | Mirror CLI sessions to Factory web. |

@@ -56,13 +56,11 @@ Choose the default AI model that powers your droid:

 - **`opus`** - Claude Opus 4.5 (current default)
 - **`sonnet`** - Claude Sonnet 4.5, balanced cost and quality
-- **`gpt-5.1`** - OpenAI GPT-5.1
-- **`gpt-5.1-codex`** - Advanced coding-focused model
-- **`gpt-5.1-codex-max`** - GPT-5.1-Codex-Max, supports Extra High reasoning
 - **`gpt-5.2`** - OpenAI GPT-5.2
+- **`gpt-5.2-codex`** - Advanced coding-focused model
+- **`gpt-5.1-codex-max`** - GPT-5.1-Codex-Max, supports Extra High reasoning
 - **`haiku`** - Claude Haiku 4.5, fast and cost-effective
 - **`gemini-3-pro`** - Gemini 3 Pro
-- **`droid-core`** - GLM-4.6 open-source model
 - **`custom-model`** - Your own configured model via BYOK

 [You can also add custom models and BYOK.](/cli/configuration/byok)

@@ -74,7 +72,7 @@ Choose the default AI model that powers your droid:

 - **`off` / `none`** – disable structured reasoning (fastest).
 - **`low`**, **`medium`**, **`high`** – progressively increase deliberation time for more complex reasoning.

-Anthropic models default to `off`, while GPT-5 starts on `medium`.
+Anthropic models default to `off`, while GPT-5.2 starts on `low`.

 ### Autonomy level
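
Read together, the updated table and model list suggest a `~/.factory/settings.json` along these lines. This is a minimal sketch using only keys and option values listed in the hunks above; the particular combination shown is illustrative, not a recommended default.

```json
{
  "model": "gpt-5.2-codex",
  "reasoningEffort": "medium",
  "autonomyLevel": "normal",
  "cloudSessionSync": true
}
```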

docs/cli/droid-exec/overview.mdx

Lines changed: 4 additions & 5 deletions

@@ -73,10 +73,9 @@ Supported models (examples):
 - claude-opus-4-5-20251101 (default)
 - claude-sonnet-4-5-20250929
 - claude-haiku-4-5-20251001
-- gpt-5.1-codex
-- gpt-5.1
+- gpt-5.2-codex
+- gpt-5.2
 - gemini-3-pro-preview
-- glm-4.6

 <Note>
 See the [model table](/pricing#pricing-table) for the full list of available models and their costs.

@@ -362,7 +361,7 @@ List available tools for a model:

 ```bash
 droid exec --list-tools
-droid exec --model gpt-5-codex --list-tools --output-format json
+droid exec --model gpt-5.2-codex --list-tools --output-format json
 ```

 Enable or disable specific tools:

@@ -383,7 +382,7 @@ You can configure custom models to use with droid exec by adding them to your `~
 {
   "customModels": [
     {
-      "model": "gpt-5.1-codex-custom",
+      "model": "gpt-5.2-codex-custom",
       "displayName": "My Custom Model",
       "baseUrl": "https://api.openai.com/v1",
       "apiKey": "your-api-key-here",

docs/guides/building/droid-exec-tutorial.mdx

Lines changed: 7 additions & 7 deletions

@@ -78,8 +78,8 @@ The Factory example uses a simple pattern: spawn `droid exec` with `--output-for
 function runDroidExec(prompt: string, repoPath: string) {
   const args = ["exec", "--output-format", "debug"];

-  // Optional: configure model (defaults to glm-4.6)
-  const model = process.env.DROID_MODEL_ID ?? "glm-4.6";
+  // Optional: configure model (defaults to claude-opus-4-5-20251101)
+  const model = process.env.DROID_MODEL_ID ?? "claude-opus-4-5-20251101";
   args.push("-m", model);

   // Optional: reasoning level (off|low|medium|high)

@@ -105,13 +105,13 @@ function runDroidExec(prompt: string, repoPath: string) {
 - Alternative: `--output-format json` for final output only

 **`-m` (model)**: Choose your AI model
-- `glm-4.6` - Fast, cheap (default)
-- `gpt-5-codex` - Most powerful for complex code
+- `claude-opus-4-5-20251101` - Default, strongest reasoning
+- `gpt-5.2-codex` - Most powerful for complex code
 - `claude-sonnet-4-5-20250929` - Best balance of speed and capability

 **`-r` (reasoning)**: Control thinking depth
 - `off` - No reasoning, fastest
-- `low` - Light reasoning (default)
+- `low` - Light reasoning
 - `medium|high` - Deeper analysis, slower

 **No `--auto` flag?**: Defaults to read-only (safest)

@@ -311,7 +311,7 @@ The example supports environment variables:

 ```bash
 # .env
-DROID_MODEL_ID=gpt-5-codex    # Default: glm-4.6
+DROID_MODEL_ID=gpt-5.2-codex  # Default: claude-opus-4-5-20251101
 DROID_REASONING=low           # Default: low (off|low|medium|high)
 PORT=4000                     # Default: 4000
 HOST=localhost                # Default: localhost

@@ -376,7 +376,7 @@ fs.writeFileSync('./repos/site-content/page.md', markdown);
 function runWithModel(prompt: string, model: string) {
   return Bun.spawn([
     "droid", "exec",
-    "-m", model, // glm-4.6, gpt-5-codex, etc.
+    "-m", model, // claude-opus-4-5-20251101, gpt-5.2-codex, etc.
     "--output-format", "debug",
     prompt
   ], { cwd: repoPath });

docs/guides/building/droid-vps-setup.mdx

Lines changed: 4 additions & 4 deletions

@@ -182,15 +182,15 @@ The real power of running droid on a VPS is `droid exec` - a headless mode that
 ### Basic droid exec usage

 ```bash
-# Simple query with a fast model (GLM 4.6)
-droid exec --model glm-4.6 "Tell me a joke"
+# Simple query with a fast model (Claude Haiku 4.5)
+droid exec --model claude-haiku-4-5-20251001 "Tell me a joke"
 ```

 ### Advanced: System exploration

 ```bash
 # Ask droid to explore your system and find specific information
-droid exec --model glm-4.6 "Explore my system and tell me where the file is that I'm serving with Nginx"
+droid exec --model claude-haiku-4-5-20251001 "Explore my system and tell me where the file is that I'm serving with Nginx"
 ```

 Droid will:

@@ -251,7 +251,7 @@ ssh example
 droid

 # Or use droid exec for quick queries
-droid exec --model glm-4-flash "Check system resources and uptime"
+droid exec --model claude-haiku-4-5-20251001 "Check system resources and uptime"
 ```

 ### Real-world scenarios

docs/pricing.mdx

Lines changed: 2 additions & 5 deletions

@@ -25,17 +25,14 @@ Different models have different multipliers applied to calculate Standard Token

 | Model | Model ID | Multiplier |
 | ------------------------ | ---------------------------- | ---------- |
-| Droid Core | `glm-4.6` | 0.25× |
 | Claude Haiku 4.5 | `claude-haiku-4-5-20251001` | 0.4× |
-| GPT-5.1 | `gpt-5.1` | 0.5× |
-| GPT-5.1-Codex | `gpt-5.1-codex` | 0.5× |
 | GPT-5.1-Codex-Max | `gpt-5.1-codex-max` | 0.5× |
 | GPT-5.2 | `gpt-5.2` | 0.7× |
+| GPT-5.2-Codex | `gpt-5.2-codex` | 0.7× |
 | Gemini 3 Pro | `gemini-3-pro-preview` | 0.8× |
-| Gemini 3 Flash | `gemini-3-flash-preview` | 0.2× |
 | Claude Sonnet 4.5 | `claude-sonnet-4-5-20250929` | 1.2× |
 | Claude Opus 4.5 | `claude-opus-4-5-20251101` ||

 ## Thinking About Tokens

-As a reference point, using GPT-5.1-Codex at its 0.5× multiplier alongside our typical cache ratio of 4–8× means your effective Standard Token usage goes dramatically further than raw on-demand calls. Switching to very expensive models frequently—or rotating models often enough to invalidate the cache—will lower that benefit, but most workloads see materially higher usage ceilings compared with buying capacity directly from individual model providers. Our aim is for you to run your workloads without worrying about token math; the plans are designed so common usage patterns outperform comparable direct offerings.
+As a reference point, using GPT-5.2-Codex at its 0.7× multiplier alongside our typical cache ratio of 4–8× means your effective Standard Token usage goes dramatically further than raw on-demand calls. Switching to very expensive models frequently—or rotating models often enough to invalidate the cache—will lower that benefit, but most workloads see materially higher usage ceilings compared with buying capacity directly from individual model providers. Our aim is for you to run your workloads without worrying about token math; the plans are designed so common usage patterns outperform comparable direct offerings.
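
As a rough illustration of the multiplier arithmetic referenced above (assuming the multiplier is applied linearly to raw token counts, which the table implies but this diff does not spell out, and ignoring cache effects):

$$
1{,}000{,}000 \ \text{raw tokens} \times 0.7 = 700{,}000 \ \text{Standard Tokens for GPT-5.2-Codex}
$$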

docs/reference/cli-reference.mdx

Lines changed: 3 additions & 6 deletions

@@ -101,14 +101,11 @@ droid exec --auto high "Run tests, commit, and push changes"
 | :---------------------------- | :--------------------------- | :-------------------------------- | :---------------- |
 | `claude-opus-4-5-20251101` | Claude Opus 4.5 (default) | Yes (Off/Low/Medium/High) | off |
 | `gpt-5.1-codex-max` | GPT-5.1-Codex-Max | Yes (Low/Medium/High/Extra High) | medium |
-| `gpt-5.1-codex` | GPT-5.1-Codex | Yes (Low/Medium/High) | medium |
-| `gpt-5.1` | GPT-5.1 | Yes (None/Low/Medium/High) | none |
-| `gpt-5.2` | GPT-5.2 | Yes (Low/Medium/High) | low |
+| `gpt-5.2-codex` | GPT-5.2-Codex | Yes (None/Low/Medium/High/Extra High) | medium |
+| `gpt-5.2` | GPT-5.2 | Yes (Off/Low/Medium/High/Extra High) | low |
 | `claude-sonnet-4-5-20250929` | Claude Sonnet 4.5 | Yes (Off/Low/Medium/High) | off |
 | `claude-haiku-4-5-20251001` | Claude Haiku 4.5 | Yes (Off/Low/Medium/High) | off |
-| `gemini-3-pro-preview` | Gemini 3 Pro | Yes (Low/High) | high |
-| `gemini-3-flash-preview` | Gemini 3 Flash | Yes (Minimal/Low/Medium/High) | high |
-| `glm-4.6` | Droid Core (GLM-4.6) | None only | none |
+| `gemini-3-pro-preview` | Gemini 3 Pro | Yes (None/Low/Medium/High) | high |

 Custom models configured via [BYOK](/cli/configuration/byok) use the format: `custom:<alias>`
