4 changes: 2 additions & 2 deletions docs/cli/byok/openai-anthropic.mdx
@@ -22,8 +22,8 @@ Add to `~/.factory/settings.json`:
  "maxOutputTokens": 8192
 },
 {
- "model": "gpt-5-codex",
- "displayName": "GPT5-Codex [Custom]",
+ "model": "gpt-5.2-codex",
+ "displayName": "GPT-5.2-Codex [Custom]",
  "baseUrl": "https://api.openai.com/v1",
  "apiKey": "YOUR_OPENAI_KEY",
  "provider": "openai",
2 changes: 1 addition & 1 deletion docs/cli/byok/overview.mdx
@@ -44,7 +44,7 @@ Add custom models to `~/.factory/settings.json` under the `customModels` array:

| Field | Type | Required | Description |
|-------|------|----------|-------------|
- | `model` | `string` | ✓ | Model identifier sent via API (e.g., `claude-sonnet-4-5-20250929`, `gpt-5-codex`, `qwen3:4b`) |
+ | `model` | `string` | ✓ | Model identifier sent via API (e.g., `claude-sonnet-4-5-20250929`, `gpt-5.2-codex`, `qwen3:4b`) |
  | `displayName` | `string` | | Human-friendly name shown in model selector |
  | `baseUrl` | `string` | ✓ | API endpoint base URL |
  | `apiKey` | `string` | ✓ | Your API key for the provider. Can't be empty. |
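For orientation, the fields in this table combine into a single `customModels` entry like the sketch below (the endpoint, key, and display name are placeholder values; the `provider` field follows the examples shown elsewhere in these docs):

```json
{
  "customModels": [
    {
      "model": "gpt-5.2-codex",
      "displayName": "GPT-5.2-Codex [Custom]",
      "baseUrl": "https://api.openai.com/v1",
      "apiKey": "YOUR_OPENAI_KEY",
      "provider": "openai"
    }
  ]
}
```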
6 changes: 3 additions & 3 deletions docs/cli/configuration/byok.mdx
@@ -43,7 +43,7 @@ Add custom models to `~/.factory/settings.json` under the `customModels` array:

| Field | Required | Description |
|-------|----------|-------------|
- | `model` | ✓ | Model identifier sent via API (e.g., `claude-sonnet-4-5-20250929`, `gpt-5-codex`, `qwen3:4b`) |
+ | `model` | ✓ | Model identifier sent via API (e.g., `claude-sonnet-4-5-20250929`, `gpt-5.2-codex`, `qwen3:4b`) |
  | `displayName` | | Human-friendly name shown in model selector |
  | `baseUrl` | ✓ | API endpoint base URL |
  | `apiKey` | ✓ | Your API key for the provider. Can't be empty. |
@@ -104,8 +104,8 @@ Use your own API keys for cost control and billing transparency:
  "provider": "anthropic"
 },
 {
- "model": "gpt-5-codex",
- "displayName": "GPT5-Codex [Custom]",
+ "model": "gpt-5.2-codex",
+ "displayName": "GPT-5.2-Codex [Custom]",
  "baseUrl": "https://api.openai.com/v1",
  "apiKey": "YOUR_OPENAI_KEY",
  "provider": "openai"
2 changes: 1 addition & 1 deletion docs/cli/configuration/custom-droids.mdx
@@ -220,7 +220,7 @@ Personal (~/.claude/agents/):
```
Custom Droids

- > code-reviewer (gpt-5-codex)
+ > code-reviewer (gpt-5.2-codex)
This droid verifies the correct base branch and committed...
Location: Project • Tools: All tools

10 changes: 4 additions & 6 deletions docs/cli/configuration/settings.mdx
@@ -27,7 +27,7 @@ If the file doesn't exist, it's created with defaults the first time you run **d

| Setting | Options | Default | Description |
| ------- | ------- | ------- | ----------- |
- | `model` | `sonnet`, `opus`, `GPT-5`, `gpt-5-codex`, `gpt-5-codex-max`, `haiku`, `droid-core`, `custom-model` | `opus` | The default AI model used by droid |
+ | `model` | `sonnet`, `opus`, `gpt-5.2`, `gpt-5.2-codex`, `gpt-5.1-codex-max`, `haiku`, `gemini-3-pro`, `custom-model` | `opus` | The default AI model used by droid |
| `reasoningEffort` | `off`, `none`, `low`, `medium`, `high` (availability depends on the model) | Model-dependent default | Controls how much structured thinking the model performs. |
| `autonomyLevel` | `normal`, `spec`, `auto-low`, `auto-medium`, `auto-high` | `normal` | Sets the default autonomy mode when starting droid. |
| `cloudSessionSync` | `true`, `false` | `true` | Mirror CLI sessions to Factory web. |
@@ -56,13 +56,11 @@ Choose the default AI model that powers your droid:

  - **`opus`** - Claude Opus 4.5 (current default)
  - **`sonnet`** - Claude Sonnet 4.5, balanced cost and quality
- - **`gpt-5.1`** - OpenAI GPT-5.1
- - **`gpt-5.1-codex`** - Advanced coding-focused model
- - **`gpt-5.1-codex-max`** - GPT-5.1-Codex-Max, supports Extra High reasoning
+ - **`gpt-5.2`** - OpenAI GPT-5.2
+ - **`gpt-5.2-codex`** - Advanced coding-focused model
  - **`gpt-5.1-codex-max`** - GPT-5.1-Codex-Max, supports Extra High reasoning
  - **`haiku`** - Claude Haiku 4.5, fast and cost-effective
  - **`gemini-3-pro`** - Gemini 3 Pro
- - **`droid-core`** - GLM-4.6 open-source model
  - **`custom-model`** - Your own configured model via BYOK

[You can also add custom models and BYOK.](/cli/configuration/byok)
@@ -74,7 +72,7 @@ Choose the default AI model that powers your droid:
- **`off` / `none`** – disable structured reasoning (fastest).
- **`low`**, **`medium`**, **`high`** – progressively increase deliberation time for more complex reasoning.
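As a sketch, this setting lives in `~/.factory/settings.json` under the `reasoningEffort` key shown in the settings table above; `medium` here is an arbitrary illustrative value:

```json
{
  "reasoningEffort": "medium"
}
```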

- Anthropic models default to `off`, while GPT-5 starts on `medium`.
+ Anthropic models default to `off`, while GPT-5.2 starts on `low`.

[P1] Don’t claim GPT-5.2 defaults to low reasoning without verifying

This changes the stated default from medium to low; if the actual CLI/model default is still medium (or model-dependent), this is user-facing incorrect behavior guidance that will lead to confusing mismatches when people follow the docs.


### Autonomy level

9 changes: 4 additions & 5 deletions docs/cli/droid-exec/overview.mdx
@@ -73,10 +73,9 @@ Supported models (examples):
  - claude-opus-4-5-20251101 (default)
  - claude-sonnet-4-5-20250929
  - claude-haiku-4-5-20251001
- - gpt-5.1-codex
- - gpt-5.1
+ - gpt-5.2-codex
+ - gpt-5.2
  - gemini-3-pro-preview
- - glm-4.6

<Note>
See the [model table](/pricing#pricing-table) for the full list of available models and their costs.
@@ -362,7 +361,7 @@ List available tools for a model:

```bash
droid exec --list-tools
- droid exec --model gpt-5-codex --list-tools --output-format json
+ droid exec --model gpt-5.2-codex --list-tools --output-format json
```

Enable or disable specific tools:
@@ -383,7 +382,7 @@ You can configure custom models to use with droid exec by adding them to your `~
{
"customModels": [
{
- "model": "gpt-5.1-codex-custom",
+ "model": "gpt-5.2-codex-custom",
"displayName": "My Custom Model",
"baseUrl": "https://api.openai.com/v1",
"apiKey": "your-api-key-here",
14 changes: 7 additions & 7 deletions docs/guides/building/droid-exec-tutorial.mdx
@@ -78,8 +78,8 @@ The Factory example uses a simple pattern: spawn `droid exec` with `--output-for
function runDroidExec(prompt: string, repoPath: string) {
const args = ["exec", "--output-format", "debug"];

- // Optional: configure model (defaults to glm-4.6)
- const model = process.env.DROID_MODEL_ID ?? "glm-4.6";
+ // Optional: configure model (defaults to claude-opus-4-5-20251101)
+ const model = process.env.DROID_MODEL_ID ?? "claude-opus-4-5-20251101";
args.push("-m", model);

// Optional: reasoning level (off|low|medium|high)
@@ -105,13 +105,13 @@ function runDroidExec(prompt: string, repoPath: string) {
- Alternative: `--output-format json` for final output only

**`-m` (model)**: Choose your AI model
- - `glm-4.6` - Fast, cheap (default)
- - `gpt-5-codex` - Most powerful for complex code
+ - `claude-opus-4-5-20251101` - Default, strongest reasoning
+ - `gpt-5.2-codex` - Most powerful for complex code
  - `claude-sonnet-4-5-20250929` - Best balance of speed and capability

**`-r` (reasoning)**: Control thinking depth
- `off` - No reasoning, fastest
- - `low` - Light reasoning (default)
+ - `low` - Light reasoning
- `medium|high` - Deeper analysis, slower

**No `--auto` flag?**: Defaults to read-only (safest)
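The flag choices above can be folded into a small helper that assembles the argument vector for `droid exec` (a sketch mirroring the tutorial's `runDroidExec`; the helper name `buildExecArgs` is hypothetical):

```typescript
// Assemble argv for a droid exec invocation from the flags described above:
// -m selects the model, -r the reasoning level, --output-format the stream style.
function buildExecArgs(prompt: string, model: string, reasoning: string): string[] {
  return ["exec", "--output-format", "debug", "-m", model, "-r", reasoning, prompt];
}

const args = buildExecArgs(
  "Summarize the repo",
  "claude-sonnet-4-5-20250929",
  "low",
);
console.log(args.join(" "));
```

The resulting array can be passed straight to a process spawner such as the `Bun.spawn` call shown later in this tutorial.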
@@ -311,7 +311,7 @@ The example supports environment variables:

```bash
# .env
- DROID_MODEL_ID=gpt-5-codex # Default: glm-4.6
+ DROID_MODEL_ID=gpt-5.2-codex # Default: claude-opus-4-5-20251101
DROID_REASONING=low # Default: low (off|low|medium|high)
PORT=4000 # Default: 4000
HOST=localhost # Default: localhost
@@ -376,7 +376,7 @@ fs.writeFileSync('./repos/site-content/page.md', markdown);
function runWithModel(prompt: string, model: string) {
return Bun.spawn([
"droid", "exec",
- "-m", model, // glm-4.6, gpt-5-codex, etc.
+ "-m", model, // claude-opus-4-5-20251101, gpt-5.2-codex, etc.
"--output-format", "debug",
prompt
], { cwd: repoPath });
8 changes: 4 additions & 4 deletions docs/guides/building/droid-vps-setup.mdx
@@ -182,15 +182,15 @@ The real power of running droid on a VPS is `droid exec` - a headless mode that
### Basic droid exec usage

```bash
- # Simple query with a fast model (GLM 4.6)
- droid exec --model glm-4.6 "Tell me a joke"
+ # Simple query with a fast model (Claude Haiku 4.5)
+ droid exec --model claude-haiku-4-5-20251001 "Tell me a joke"
```

### Advanced: System exploration

```bash
# Ask droid to explore your system and find specific information
- droid exec --model glm-4.6 "Explore my system and tell me where the file is that I'm serving with Nginx"
+ droid exec --model claude-haiku-4-5-20251001 "Explore my system and tell me where the file is that I'm serving with Nginx"
```

Droid will:
@@ -251,7 +251,7 @@ ssh example
droid

# Or use droid exec for quick queries
- droid exec --model glm-4-flash "Check system resources and uptime"
+ droid exec --model claude-haiku-4-5-20251001 "Check system resources and uptime"
```

### Real-world scenarios
7 changes: 2 additions & 5 deletions docs/pricing.mdx
@@ -25,17 +25,14 @@ Different models have different multipliers applied to calculate Standard Token

| Model | Model ID | Multiplier |
| ------------------------ | ---------------------------- | ---------- |
- | Droid Core | `glm-4.6` | 0.25× |
  | Claude Haiku 4.5 | `claude-haiku-4-5-20251001` | 0.4× |
- | GPT-5.1 | `gpt-5.1` | 0.5× |
- | GPT-5.1-Codex | `gpt-5.1-codex` | 0.5× |
- | GPT-5.1-Codex-Max | `gpt-5.1-codex-max` | 0.5× |
+ | GPT-5.2 | `gpt-5.2` | 0.7× |
+ | GPT-5.2-Codex | `gpt-5.2-codex` | 0.7× |
  | Gemini 3 Pro | `gemini-3-pro-preview` | 0.8× |
  | Gemini 3 Flash | `gemini-3-flash-preview` | 0.2× |
  | Claude Sonnet 4.5 | `claude-sonnet-4-5-20250929` | 1.2× |
  | Claude Opus 4.5 | `claude-opus-4-5-20251101` | 2× |

## Thinking About Tokens

- As a reference point, using GPT-5.1-Codex at its 0.5× multiplier alongside our typical cache ratio of 4–8× means your effective Standard Token usage goes dramatically further than raw on-demand calls. Switching to very expensive models frequently—or rotating models often enough to invalidate the cache—will lower that benefit, but most workloads see materially higher usage ceilings compared with buying capacity directly from individual model providers. Our aim is for you to run your workloads without worrying about token math; the plans are designed so common usage patterns outperform comparable direct offerings.
+ As a reference point, using GPT-5.2-Codex at its 0.7× multiplier alongside our typical cache ratio of 4–8× means your effective Standard Token usage goes dramatically further than raw on-demand calls. Switching to very expensive models frequently—or rotating models often enough to invalidate the cache—will lower that benefit, but most workloads see materially higher usage ceilings compared with buying capacity directly from individual model providers. Our aim is for you to run your workloads without worrying about token math; the plans are designed so common usage patterns outperform comparable direct offerings.
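To make the multiplier arithmetic concrete, here is a small sketch (the 0.7× and 2× figures come from the table above; the raw token count is invented for illustration, and the cache effect is not modeled):

```typescript
// Standard Token usage = raw tokens consumed × the model's multiplier.
function standardTokens(rawTokens: number, multiplier: number): number {
  return rawTokens * multiplier;
}

// 1,000,000 raw tokens on GPT-5.2-Codex at 0.7× costs 700,000 Standard Tokens,
// versus 2,000,000 for the same raw usage on Claude Opus 4.5 at 2×.
console.log(standardTokens(1_000_000, 0.7)); // 700000
console.log(standardTokens(1_000_000, 2)); // 2000000
```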
9 changes: 3 additions & 6 deletions docs/reference/cli-reference.mdx
@@ -101,14 +101,11 @@ droid exec --auto high "Run tests, commit, and push changes"
| :---------------------------- | :--------------------------- | :-------------------------------- | :---------------- |
| `claude-opus-4-5-20251101` | Claude Opus 4.5 (default) | Yes (Off/Low/Medium/High) | off |
| `gpt-5.1-codex-max` | GPT-5.1-Codex-Max | Yes (Low/Medium/High/Extra High) | medium |
[P1] Don’t list unsupported reasoning level None for gpt-5.2-codex

This table says gpt-5.2-codex supports None, but elsewhere in these docs you describe reasoning levels as off|low|medium|high; if None isn’t actually accepted by droid exec, this will cause immediate CLI errors for users copying the command/setting.

- | `gpt-5.1-codex` | GPT-5.1-Codex | Yes (Low/Medium/High) | medium |
- | `gpt-5.1` | GPT-5.1 | Yes (None/Low/Medium/High) | none |
- | `gpt-5.2` | GPT-5.2 | Yes (Low/Medium/High) | low |
+ | `gpt-5.2-codex` | GPT-5.2-Codex | Yes (None/Low/Medium/High/Extra High) | medium |
+ | `gpt-5.2` | GPT-5.2 | Yes (Off/Low/Medium/High/Extra High) | low |
  | `claude-sonnet-4-5-20250929` | Claude Sonnet 4.5 | Yes (Off/Low/Medium/High) | off |
  | `claude-haiku-4-5-20251001` | Claude Haiku 4.5 | Yes (Off/Low/Medium/High) | off |
- | `gemini-3-pro-preview` | Gemini 3 Pro | Yes (Low/High) | high |
- | `gemini-3-flash-preview` | Gemini 3 Flash | Yes (Minimal/Low/Medium/High) | high |
- | `glm-4.6` | Droid Core (GLM-4.6) | None only | none |
+ | `gemini-3-pro-preview` | Gemini 3 Pro | Yes (None/Low/Medium/High) | high |

Custom models configured via [BYOK](/cli/configuration/byok) use the format: `custom:<alias>`
