Commit 03bcdaa
docs: add GPT-5.3-Codex and MiniMax M2.5 to model documentation (#657)
Add two new models across all documentation tiers:

- GPT-5.3-Codex (gpt-5.3-codex): 0.7x multiplier, Extra High reasoning, verbosity support
- Droid Core (MiniMax M2.5) (minimax-m2.5): 0.12x multiplier, Low/Medium/High reasoning

Updated files: pricing.mdx, cli-reference.mdx, choosing-your-model.mdx, settings.mdx, droid-exec/overview.mdx, token-efficiency.mdx

Co-authored-by: factory-droid[bot] <138933559+factory-droid[bot]@users.noreply.github.com>
1 parent 60c3b10 commit 03bcdaa

File tree: 6 files changed (+32 additions, -18 deletions)

docs/cli/configuration/settings.mdx
Lines changed: 4 additions & 2 deletions

@@ -27,7 +27,7 @@ If the file doesn't exist, it's created with defaults the first time you run **d
  | Setting | Options | Default | Description |
  | ------- | ------- | ------- | ----------- |
- | `model` | `opus`, `opus-4-6`, `opus-4-6-fast`, `sonnet`, `gpt-5.1`, `gpt-5.1-codex`, `gpt-5.1-codex-max`, `gpt-5.2`, `gpt-5.2-codex`, `haiku`, `gemini-3-pro`, `droid-core`, `kimi-k2.5`, `custom-model` | `opus` | The default AI model used by droid |
+ | `model` | `opus`, `opus-4-6`, `opus-4-6-fast`, `sonnet`, `gpt-5.1`, `gpt-5.1-codex`, `gpt-5.1-codex-max`, `gpt-5.2`, `gpt-5.2-codex`, `gpt-5.3-codex`, `haiku`, `gemini-3-pro`, `droid-core`, `kimi-k2.5`, `minimax-m2.5`, `custom-model` | `opus` | The default AI model used by droid |
  | `reasoningEffort` | `off`, `none`, `low`, `medium`, `high` (availability depends on the model) | Model-dependent default | Controls how much structured thinking the model performs. |
  | `autonomyLevel` | `normal`, `spec`, `auto-low`, `auto-medium`, `auto-high` | `normal` | Sets the default autonomy mode when starting droid. |
  | `cloudSessionSync` | `true`, `false` | `true` | Mirror CLI sessions to Factory web. |

@@ -62,11 +62,13 @@ Choose the default AI model that powers your droid:
  - **`gpt-5.1-codex`** - Advanced coding-focused model
  - **`gpt-5.1-codex-max`** - GPT-5.1-Codex-Max, supports Extra High reasoning
  - **`gpt-5.2`** - OpenAI GPT-5.2
- - **`gpt-5.2-codex`** - GPT-5.2-Codex, latest OpenAI coding model with Extra High reasoning
+ - **`gpt-5.2-codex`** - GPT-5.2-Codex, OpenAI coding model with Extra High reasoning
+ - **`gpt-5.3-codex`** - GPT-5.3-Codex, latest OpenAI coding model with Extra High reasoning and verbosity support
  - **`haiku`** - Claude Haiku 4.5, fast and cost-effective
  - **`gemini-3-pro`** - Gemini 3 Pro
  - **`droid-core`** - GLM-4.7 open-source model
  - **`kimi-k2.5`** - Kimi K2.5 open-source model with image support
+ - **`minimax-m2.5`** - MiniMax M2.5 open-source model with reasoning support (0.12× multiplier)
  - **`custom-model`** - Your own configured model via BYOK

  [You can also add custom models and BYOK.](/cli/configuration/byok)
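As an illustration of the settings above, a config that defaults to one of the newly added models might look like this. This is a hypothetical sketch: `model`, `reasoningEffort`, `autonomyLevel`, and `cloudSessionSync` are keys from the settings table, but the JSON shape and file location are assumptions to check against settings.mdx.

```json
{
  "model": "minimax-m2.5",
  "reasoningEffort": "high",
  "autonomyLevel": "normal",
  "cloudSessionSync": true
}
```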

docs/cli/droid-exec/overview.mdx
Lines changed: 2 additions & 0 deletions

@@ -80,10 +80,12 @@ Supported models (examples):
  - gpt-5.1
  - gpt-5.2
  - gpt-5.2-codex
+ - gpt-5.3-codex
  - gemini-3-pro-preview
  - gemini-3-flash-preview
  - glm-4.7
  - kimi-k2.5
+ - minimax-m2.5

  <Note>
  See the [model table](/pricing#pricing-table) for the full list of available models and their costs.

docs/cli/user-guides/choosing-your-model.mdx
Lines changed: 20 additions & 16 deletions

@@ -4,7 +4,7 @@ description: Balance accuracy, speed, and cost by picking the right model and re
  keywords: ['model', 'models', 'llm', 'claude', 'sonnet', 'opus', 'haiku', 'gpt', 'openai', 'anthropic', 'choose model', 'switch model']
  ---

- Model quality evolves quickly, and we tune the CLI defaults as the ecosystem shifts. Use this guide as a snapshot of how the major options compare today, and expect to revisit it as we publish updates. This guide was last updated on Thursday, February 12th 2026.
+ Model quality evolves quickly, and we tune the CLI defaults as the ecosystem shifts. Use this guide as a snapshot of how the major options compare today, and expect to revisit it as we publish updates. This guide was last updated on Friday, February 14th 2026.

  ---

@@ -17,15 +17,17 @@ Model quality evolves quickly, and we tune the CLI defaults as the ecosystem shi
  | 3 | **Claude Opus 4.5** | Proven quality-and-safety balance; strong default for TUI and exec. |
  | 4 | **GPT-5.1-Codex-Max** | Fast coding loops with support up to **Extra High** reasoning; great for heavy implementation and debugging. |
  | 5 | **Claude Sonnet 4.5** | Strong daily driver with balanced cost/quality; great general-purpose choice when you don’t need Opus-level depth. |
- | 6 | **GPT-5.2-Codex** | Latest OpenAI coding model with **Extra High** reasoning; strong for implementation-heavy tasks. |
- | 7 | **GPT-5.1-Codex** | Quick iteration with solid code quality at lower cost; bump reasoning when you need more depth. |
- | 8 | **GPT-5.1** | Good generalist, especially when you want OpenAI ergonomics with flexible reasoning effort. |
- | 9 | **GPT-5.2** | Advanced OpenAI model with verbosity support and reasoning up to **Extra High**. |
- | 10 | **Claude Haiku 4.5** | Fast, cost-efficient for routine tasks and high-volume automation. |
- | 11 | **Gemini 3 Pro** | Strong at mixed reasoning with Low/High settings; helpful for researchy flows with structured outputs. |
- | 12 | **Gemini 3 Flash** | Fast, cheap (0.2× multiplier) with full reasoning support; great for high-volume tasks where speed matters. |
- | 13 | **Droid Core (GLM-4.7)** | Open-source, 0.25× multiplier, great for bulk automation or air-gapped environments; note: no image support. |
- | 14 | **Droid Core (Kimi K2.5)** | Open-source, 0.25× multiplier with image support; good for cost-sensitive work. |
+ | 6 | **GPT-5.3-Codex** | Newest OpenAI coding model with **Extra High** reasoning and verbosity support; strong for implementation-heavy tasks. |
+ | 7 | **GPT-5.2-Codex** | Proven OpenAI coding model with **Extra High** reasoning; solid for implementation-heavy tasks. |
+ | 8 | **GPT-5.1-Codex** | Quick iteration with solid code quality at lower cost; bump reasoning when you need more depth. |
+ | 9 | **GPT-5.1** | Good generalist, especially when you want OpenAI ergonomics with flexible reasoning effort. |
+ | 10 | **GPT-5.2** | Advanced OpenAI model with verbosity support and reasoning up to **Extra High**. |
+ | 11 | **Claude Haiku 4.5** | Fast, cost-efficient for routine tasks and high-volume automation. |
+ | 12 | **Gemini 3 Pro** | Strong at mixed reasoning with Low/High settings; helpful for researchy flows with structured outputs. |
+ | 13 | **Gemini 3 Flash** | Fast, cheap (0.2× multiplier) with full reasoning support; great for high-volume tasks where speed matters. |
+ | 14 | **Droid Core (MiniMax M2.5)** | Open-source, 0.12× multiplier with reasoning support (Low/Medium/High); cheapest model available. No image support. |
+ | 15 | **Droid Core (GLM-4.7)** | Open-source, 0.25× multiplier, great for bulk automation or air-gapped environments; note: no image support. |
+ | 16 | **Droid Core (Kimi K2.5)** | Open-source, 0.25× multiplier with image support; good for cost-sensitive work. |

  <Note>
  We ship model updates regularly. When a new release overtakes the list above,

@@ -39,10 +41,10 @@ Model quality evolves quickly, and we tune the CLI defaults as the ecosystem shi
  | Scenario | Recommended model |
  | ---------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------ |
  | **Deep planning, architecture reviews, ambiguous product specs** | Start with **Opus 4.6** for best depth and safety, or **Opus 4.6 Fast** for faster turnaround. Use **Sonnet 4.5** when you want balanced cost/quality, or **Codex/Codex-Max** for faster iteration with reasoning. |
- | **Full-feature development, large refactors** | **Opus 4.6** or **Opus 4.5** for depth and safety. **GPT-5.2-Codex** or **GPT-5.1-Codex-Max** when you need speed plus **Extra High** reasoning; **Sonnet 4.5** for balanced loops. |
- | **Repeatable edits, summarization, boilerplate generation** | **Haiku 4.5** or **Droid Core** for speed and cost. **GPT-5.1 / GPT-5.1-Codex** when you need higher quality or structured outputs. |
+ | **Full-feature development, large refactors** | **Opus 4.6** or **Opus 4.5** for depth and safety. **GPT-5.3-Codex**, **GPT-5.2-Codex**, or **GPT-5.1-Codex-Max** when you need speed plus **Extra High** reasoning; **Sonnet 4.5** for balanced loops. |
+ | **Repeatable edits, summarization, boilerplate generation** | **Haiku 4.5** or **Droid Core** (including **MiniMax M2.5** at 0.12×) for speed and cost. **GPT-5.1 / GPT-5.1-Codex** when you need higher quality or structured outputs. |
  | **CI/CD or automation loops** | Favor **Haiku 4.5** or **Droid Core** for predictable, low-cost throughput. Use **Codex** or **Codex-Max** when automation needs stronger reasoning. |
- | **High-volume automation, frequent quick turns** | **Haiku 4.5** for speedy feedback. **Droid Core** when cost is critical or you need air-gapped deployment. |
+ | **High-volume automation, frequent quick turns** | **Haiku 4.5** for speedy feedback. **Droid Core** (especially **MiniMax M2.5** at 0.12× with reasoning) when cost is critical or you need air-gapped deployment. |

  <Tip>
  **Claude Opus 4.6** is the top-tier option for extremely complex architecture decisions or critical work where you need maximum reasoning capability. **Opus 4.6 Fast** is tuned for faster responses at a higher cost. Most tasks don't require Opus-level power—start with Sonnet 4.5 and escalate only if needed.

@@ -70,12 +72,14 @@ Tip: you can swap models mid-session with `/model` or by toggling in the setting
  - **GPT-5.1-Codex-Max**: Low / Medium / High / **Extra High** (default: Medium)
  - **GPT-5.2**: Off / Low / Medium / High / **Extra High** (default: Low)
  - **GPT-5.2-Codex**: None / Low / Medium / High / **Extra High** (default: Medium)
+ - **GPT-5.3-Codex**: None / Low / Medium / High / **Extra High** (default: Medium)
  - **Gemini 3 Pro**: None / Low / Medium / High (default: High)
  - **Gemini 3 Flash**: Minimal / Low / Medium / High (default: High)
  - **Droid Core (GLM-4.7)**: None only (default: None; no image support)
  - **Droid Core (Kimi K2.5)**: None only (default: None)
+ - **Droid Core (MiniMax M2.5)**: Low / Medium / High (default: High)

- Reasoning effort increases latency and cost—start low for simple work and escalate as needed. **Max** is available on Claude Opus 4.6. **Extra High** is available on GPT-5.1-Codex-Max, GPT-5.2, and GPT-5.2-Codex.
+ Reasoning effort increases latency and cost—start low for simple work and escalate as needed. **Max** is available on Claude Opus 4.6. **Extra High** is available on GPT-5.1-Codex-Max, GPT-5.2, GPT-5.2-Codex, and GPT-5.3-Codex.

  <Tip>
  Change reasoning effort from `/model` → **Reasoning effort**, or via the

@@ -90,14 +94,14 @@ Factory ships with managed Anthropic and OpenAI access. If you prefer to run aga
  ### Open-source models

- **Droid Core (GLM-4.7)** and **Droid Core (Kimi K2.5)** are open-source alternatives available in the CLI. They're useful for:
+ **Droid Core (GLM-4.7)**, **Droid Core (Kimi K2.5)**, and **Droid Core (MiniMax M2.5)** are open-source alternatives available in the CLI. They're useful for:

  - **Air-gapped environments** where external API calls aren't allowed
  - **Cost-sensitive projects** needing unlimited local inference
  - **Privacy requirements** where code cannot leave your infrastructure
  - **Experimentation** with open-source model capabilities

- **Note:** GLM-4.7 does not support image attachments. Kimi K2.5 does support images. For image-based workflows, use Claude, GPT, or Kimi models.
+ **Note:** GLM-4.7 and MiniMax M2.5 do not support image attachments. Kimi K2.5 does support images. MiniMax M2.5 is the cheapest model available (0.12× multiplier) and uniquely supports reasoning (Low/Medium/High) among Droid Core models. For image-based workflows, use Claude, GPT, or Kimi models.

  To use open-source models, you'll need to configure them via BYOK with a local inference server (like Ollama) or a hosted provider. See [BYOK documentation](/cli/configuration/byok) for setup instructions.

docs/guides/power-user/token-efficiency.mdx
Lines changed: 2 additions & 0 deletions

@@ -134,11 +134,13 @@ Different models have different cost multipliers and capabilities. Match the mod
  | Model | Multiplier | Best For |
  |-------|------------|----------|
+ | Droid Core (MiniMax M2.5) | 0.12× | Cheapest option with reasoning support |
  | Gemini 3 Flash | 0.2× | Fast, cheap for high-volume tasks |
  | Droid Core (GLM-4.7) | 0.25× | Bulk automation, simple tasks |
  | Droid Core (Kimi K2.5) | 0.25× | Cost-sensitive work, supports images |
  | Claude Haiku 4.5 | 0.4× | Quick edits, routine work |
  | GPT-5.1 / GPT-5.1-Codex | 0.5× | Implementation, debugging |
+ | GPT-5.2-Codex / GPT-5.3-Codex | 0.7× | Advanced coding with Extra High reasoning |
  | Gemini 3 Pro | 0.8× | Research, analysis |
  | Claude Sonnet 4.5 | 1.2× | Balanced quality/cost |
  | Claude Opus 4.5 || Complex reasoning, architecture |

docs/pricing.mdx
Lines changed: 2 additions & 0 deletions

@@ -24,6 +24,7 @@ Different models have different multipliers applied to calculate Standard Token
  | Model | Model ID | Multiplier |
  | ------------------------ | ---------------------------- | ---------- |
+ | Droid Core (MiniMax M2.5)| `minimax-m2.5` | 0.12× |
  | Gemini 3 Flash | `gemini-3-flash-preview` | 0.2× |
  | Droid Core (GLM-4.7) | `glm-4.7` | 0.25× |
  | Droid Core (Kimi K2.5) | `kimi-k2.5` | 0.25× |

@@ -33,6 +34,7 @@ Different models have different multipliers applied to calculate Standard Token
  | GPT-5.1-Codex-Max | `gpt-5.1-codex-max` | 0.5× |
  | GPT-5.2 | `gpt-5.2` | 0.7× |
  | GPT-5.2-Codex | `gpt-5.2-codex` | 0.7× |
+ | GPT-5.3-Codex | `gpt-5.3-codex` | 0.7× |
  | Gemini 3 Pro | `gemini-3-pro-preview` | 0.8× |
  | Claude Sonnet 4.5 | `claude-sonnet-4-5-20250929` | 1.2× |
  | Claude Opus 4.5 | `claude-opus-4-5-20251101` ||
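The multiplier column implies a simple billing model: raw token usage is scaled by the model's multiplier to get Standard Tokens. A minimal sketch of that arithmetic, where the `MULTIPLIERS` dict copies a subset of the rows above and `standard_tokens` is an illustrative helper, not a Factory API:

```python
# Multipliers copied from a subset of the pricing table above.
# standard_tokens() is a hypothetical helper for illustration only.
MULTIPLIERS = {
    "minimax-m2.5": 0.12,
    "gemini-3-flash-preview": 0.2,
    "glm-4.7": 0.25,
    "kimi-k2.5": 0.25,
    "gpt-5.2": 0.7,
    "gpt-5.2-codex": 0.7,
    "gpt-5.3-codex": 0.7,
    "gemini-3-pro-preview": 0.8,
    "claude-sonnet-4-5-20250929": 1.2,
}

def standard_tokens(model_id: str, raw_tokens: int) -> int:
    """Scale raw token usage by the model's pricing multiplier."""
    return round(raw_tokens * MULTIPLIERS[model_id])

# The same 1M-token workload bills very differently by model:
print(standard_tokens("minimax-m2.5", 1_000_000))   # 120000
print(standard_tokens("gpt-5.3-codex", 1_000_000))  # 700000
```

This makes the roughly 5.8× gap between the 0.12× MiniMax M2.5 tier and the 0.7× Codex tier concrete.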

docs/reference/cli-reference.mdx
Lines changed: 2 additions & 0 deletions

@@ -108,12 +108,14 @@ droid exec --auto high "Run tests, commit, and push changes"
  | `gpt-5.1` | GPT-5.1 | Yes (None/Low/Medium/High) | none |
  | `gpt-5.2` | GPT-5.2 | Yes (Off/Low/Medium/High/Extra High) | low |
  | `gpt-5.2-codex` | GPT-5.2-Codex | Yes (None/Low/Medium/High/Extra High) | medium |
+ | `gpt-5.3-codex` | GPT-5.3-Codex | Yes (None/Low/Medium/High/Extra High) | medium |
  | `claude-sonnet-4-5-20250929` | Claude Sonnet 4.5 | Yes (Off/Low/Medium/High) | off |
  | `claude-haiku-4-5-20251001` | Claude Haiku 4.5 | Yes (Off/Low/Medium/High) | off |
  | `gemini-3-pro-preview` | Gemini 3 Pro | Yes (None/Low/Medium/High) | high |
  | `gemini-3-flash-preview` | Gemini 3 Flash | Yes (Minimal/Low/Medium/High) | high |
  | `glm-4.7` | Droid Core (GLM-4.7) | None only | none |
  | `kimi-k2.5` | Droid Core (Kimi K2.5) | None only | none |
+ | `minimax-m2.5` | Droid Core (MiniMax M2.5) | Yes (Low/Medium/High) | high |

  Custom models configured via [BYOK](/cli/configuration/byok) use the format: `custom:<alias>`
119121
