docs/cli/configuration/settings.mdx (+4 −6: 4 additions, 6 deletions)
@@ -27,7 +27,7 @@ If the file doesn't exist, it's created with defaults the first time you run **d

 | Setting | Options | Default | Description |
 | ------- | ------- | ------- | ----------- |
-|`model`|`sonnet`, `opus`, `GPT-5`, `gpt-5-codex`, `gpt-5-codex-max`, `haiku`, `droid-core`, `custom-model`|`opus`| The default AI model used by droid |
+|`model`|`sonnet`, `opus`, `gpt-5.2`, `gpt-5.2-codex`, `gpt-5.1-codex-max`, `haiku`, `gemini-3-pro`, `custom-model`|`opus`| The default AI model used by droid |
 |`reasoningEffort`|`off`, `none`, `low`, `medium`, `high` (availability depends on the model) | Model-dependent default | Controls how much structured thinking the model performs. |
 |`autonomyLevel`|`normal`, `spec`, `auto-low`, `auto-medium`, `auto-high`|`normal`| Sets the default autonomy mode when starting droid. |
 |`cloudSessionSync`|`true`, `false`|`true`| Mirror CLI sessions to Factory web. |
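For orientation, a configuration that sets each of these options explicitly might look like the following sketch. Only the setting names and values are taken from the table above; the flat JSON layout and the particular values chosen are illustrative assumptions, not something this diff confirms.

```json
{
  "model": "gpt-5.2-codex",
  "reasoningEffort": "medium",
  "autonomyLevel": "auto-medium",
  "cloudSessionSync": true
}
```

Any option left out would fall back to the defaults listed in the table (for example, `model` defaults to `opus`).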
@@ -56,13 +56,11 @@ Choose the default AI model that powers your droid:

 - **`opus`** - Claude Opus 4.5 (current default)
 - **`sonnet`** - Claude Sonnet 4.5, balanced cost and quality
-- **`gpt-5.1`** - OpenAI GPT-5.1
-- **`gpt-5.1-codex`** - Advanced coding-focused model
-- **`gpt-5.1-codex-max`** - GPT-5.1-Codex-Max, supports Extra High reasoning
 - **`gpt-5.2`** - OpenAI GPT-5.2
+- **`gpt-5.2-codex`** - Advanced coding-focused model
+- **`gpt-5.1-codex-max`** - GPT-5.1-Codex-Max, supports Extra High reasoning
 - **`haiku`** - Claude Haiku 4.5, fast and cost-effective
 - **`gemini-3-pro`** - Gemini 3 Pro
-- **`droid-core`** - GLM-4.6 open-source model
 - **`custom-model`** - Your own configured model via BYOK

 [You can also add custom models and BYOK.](/cli/configuration/byok)
@@ -74,7 +72,7 @@ Choose the default AI model that powers your droid:
 | Claude Sonnet 4.5 |`claude-sonnet-4-5-20250929`| 1.2× |
 | Claude Opus 4.5 |`claude-opus-4-5-20251101`| 2× |

 ## Thinking About Tokens

-As a reference point, using GPT-5.1-Codex at its 0.5× multiplier alongside our typical cache ratio of 4–8× means your effective Standard Token usage goes dramatically further than raw on-demand calls. Switching to very expensive models frequently—or rotating models often enough to invalidate the cache—will lower that benefit, but most workloads see materially higher usage ceilings compared with buying capacity directly from individual model providers. Our aim is for you to run your workloads without worrying about token math; the plans are designed so common usage patterns outperform comparable direct offerings.
+As a reference point, using GPT-5.2-Codex at its 0.7× multiplier alongside our typical cache ratio of 4–8× means your effective Standard Token usage goes dramatically further than raw on-demand calls. Switching to very expensive models frequently—or rotating models often enough to invalidate the cache—will lower that benefit, but most workloads see materially higher usage ceilings compared with buying capacity directly from individual model providers. Our aim is for you to run your workloads without worrying about token math; the plans are designed so common usage patterns outperform comparable direct offerings.
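As a rough worked example (reading each multiplier as a per-token weight, which is an interpretation for illustration rather than a statement made on this page): 1,000,000 provider tokens on Claude Opus 4.5 at 2× would count as about 2,000,000 Standard Tokens, while the same 1,000,000 tokens on GPT-5.2-Codex at 0.7× would count as about 700,000, before any additional savings from the 4–8× cache ratio.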