Commit fe08522

wwwillchen and claude authored

Add add-models skill (#2786)

## Summary

- Add new skill definition for `dyad:add-models` under `.claude/skills/add-models/`.
- Document required workflow for adding model configurations from official docs.

## Test plan

- npm run fmt
- npm run lint:fix
- npm run ts

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>

1 parent 49c4c5c commit fe08522

File tree

4 files changed: +145 −11 lines changed

.claude/skills/add-models/SKILL.md

Lines changed: 112 additions & 0 deletions
---
name: dyad:add-models
description: Add one or more AI models to the language model constants file, researching specs from official docs.
---

# Add Models

Add one or more AI models to `src/ipc/shared/language_model_constants.ts`, researching correct specifications from official documentation.

## Arguments

- `$ARGUMENTS`: Comma-separated list of model names to add (e.g., "gemini 3.1 pro, glm 5, sonnet 4.6").
## Instructions

1. **Parse the model list:**

   Split `$ARGUMENTS` by commas to get individual model names. Trim whitespace from each.
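The parsing rule above can be sketched in TypeScript (the helper name is hypothetical; the skill only describes the behavior):

```typescript
// Hypothetical helper illustrating the rule above: split on commas,
// trim whitespace, and drop any empty entries left by stray commas.
function parseModelList(args: string): string[] {
  return args
    .split(",")
    .map((name) => name.trim())
    .filter((name) => name.length > 0);
}

// Using the example arguments string from above:
const models = parseModelList("gemini 3.1 pro, glm 5, sonnet 4.6");
// models = ["gemini 3.1 pro", "glm 5", "sonnet 4.6"]
```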
2. **Read the current constants file:**

   Read `src/ipc/shared/language_model_constants.ts` to understand:

   - Which providers exist and their current model entries
   - The naming conventions for each provider (e.g., `claude-sonnet-4-20250514` for Anthropic, `gemini-2.5-pro` for Google)
   - The structure of `ModelOption` entries (name, displayName, description, maxOutputTokens, contextWindow, temperature, dollarSigns)
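Inferred from the fields listed above, the `ModelOption` shape likely looks roughly like this (the exact types in the real file may differ; the example values are illustrative, not copied from the file):

```typescript
// Sketch of the ModelOption shape, inferred from the field names above.
interface ModelOption {
  name: string; // exact API model name
  displayName: string; // human-readable name
  description: string;
  maxOutputTokens: number | undefined; // undefined where the provider rejects max_tokens
  contextWindow: number;
  temperature: number;
  dollarSigns: number; // cost tier, 0-6
}

// An entry shaped like the ones in the constants file (illustrative values).
const example: ModelOption = {
  name: "gemini-2.5-pro",
  displayName: "Gemini 2.5 Pro",
  description: "Example entry for illustration only",
  maxOutputTokens: 65_536 - 1,
  contextWindow: 1_000_000,
  temperature: 0,
  dollarSigns: 3,
};
```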
3. **Identify the provider for each model:**

   Map each model to its provider based on the model name:

   - **Anthropic** (`anthropic`): Claude models (Opus, Sonnet, Haiku)
   - **OpenAI** (`openai`): GPT models
   - **Google** (`google`): Gemini models
   - **xAI** (`xai`): Grok models
   - **OpenRouter** (`openrouter`): Models from other providers (DeepSeek, Qwen, Moonshot/Kimi, Z-AI/GLM, etc.)
   - **Azure** (`azure`): Azure-hosted OpenAI models
   - **Bedrock** (`bedrock`): AWS Bedrock-hosted Anthropic models
   - **Vertex** (`vertex`): Google Vertex AI-hosted models

   If a model could belong to multiple providers (e.g., a new Anthropic model should go in `anthropic` AND potentially `bedrock`), add it to the primary provider. Ask the user if they also want it added to secondary providers.
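The mapping table above could be sketched as a simple name-based lookup (a hypothetical helper, not part of the codebase; real usage would still confirm ambiguous cases and secondary providers with the user):

```typescript
// Hypothetical sketch of the provider mapping above: infer the primary
// provider from keywords in the model name; return undefined when
// ambiguous so the caller can ask the user.
function inferProvider(model: string): string | undefined {
  const m = model.toLowerCase();
  if (/claude|opus|sonnet|haiku/.test(m)) return "anthropic";
  if (/gpt/.test(m)) return "openai";
  if (/gemini/.test(m)) return "google";
  if (/grok/.test(m)) return "xai";
  // Models from other providers route through OpenRouter.
  if (/deepseek|qwen|kimi|glm|minimax/.test(m)) return "openrouter";
  return undefined; // ambiguous: ask the user
}
```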
4. **Research each model's specifications:**

   For each model, use WebSearch and WebFetch to look up the official documentation:

   - **Anthropic models**: Search `docs.anthropic.com` for model specs
   - **OpenAI models**: Search `platform.openai.com/docs/models` for model specs
   - **Google Gemini models**: Search `ai.google.dev/gemini-api/docs/models` for model specs
   - **xAI models**: Search `docs.x.ai/docs/models` for model specs
   - **OpenRouter models**: Search `openrouter.ai/<provider>/<model-name>` for model specs and pricing

   For each model, determine:

   - **API model name**: The exact string used in API calls (e.g., `claude-sonnet-4-5-20250929`, `gemini-2.5-pro`)
   - **Display name**: Human-readable name (e.g., "Claude Sonnet 4.5", "Gemini 2.5 Pro")
   - **Description**: Short description following the style of existing entries
   - **Max output tokens**: The model's maximum output token limit
   - **Context window**: The model's total context window size
   - **Temperature**: Default temperature (0 for most models, 1 for OpenAI, 1.0 for Gemini 3.x models)
   - **Dollar signs**: Cost tier from 0-6 based on pricing relative to other models in the same provider

   **Dollar signs guide** (approximate, based on per-million-token input pricing):

   - 0: Free
   - 1: Very cheap (<$0.50/M input tokens)
   - 2: Cheap ($0.50-$2/M)
   - 3: Moderate ($2-$8/M)
   - 4: Expensive ($8-$15/M)
   - 5: Very expensive ($15-$30/M)
   - 6: Premium ($30+/M)
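The tier table above maps directly to a small function (a sketch for illustration; the skill expects the tier to be chosen by judgment relative to other models in the same provider, not computed mechanically):

```typescript
// Sketch of the dollar-signs guide above as a function.
// Input: price in dollars per million input tokens.
function dollarSignsFor(inputPricePerMTok: number): number {
  if (inputPricePerMTok <= 0) return 0; // free
  if (inputPricePerMTok < 0.5) return 1; // very cheap
  if (inputPricePerMTok < 2) return 2; // cheap
  if (inputPricePerMTok < 8) return 3; // moderate
  if (inputPricePerMTok < 15) return 4; // expensive
  if (inputPricePerMTok < 30) return 5; // very expensive
  return 6; // premium
}

// Example: a model priced at $3/M input tokens lands in the moderate tier.
const tier = dollarSignsFor(3);
// tier = 3
```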
5. **Follow provider-specific conventions:**

   Match the patterns used by existing entries:

   - **OpenAI**: `maxOutputTokens: undefined` (OpenAI errors with `max_tokens`), `temperature: 1`
   - **Anthropic**: `maxOutputTokens: 32_000`, `temperature: 0`
   - **Google**: `maxOutputTokens: 65_536 - 1` (exclusive upper bound for Vertex), `temperature` varies
   - **OpenRouter**: `maxOutputTokens: 32_000`, prefix model name with provider (e.g., `deepseek/deepseek-chat-v3.1`)
   - **Azure**: `maxOutputTokens` commented out, `temperature: 1`
   - **Bedrock**: Model names use ARN format (e.g., `us.anthropic.claude-sonnet-4-5-20250929-v1:0`)
   - **xAI**: `maxOutputTokens: 32_000`, `temperature: 0`
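As a concrete instance of the Anthropic conventions above, the Claude Sonnet 4.6 entry added by this very commit (visible in the `language_model_constants.ts` diff on this page) follows the `maxOutputTokens: 32_000`, `temperature: 0` pattern:

```typescript
// The Claude Sonnet 4.6 entry added in this commit, matching the
// Anthropic conventions above.
const sonnet46 = {
  name: "claude-sonnet-4-6",
  displayName: "Claude Sonnet 4.6",
  description:
    "Anthropic's fast and intelligent model (note: >200k tokens is very expensive!)",
  // Set to 32k since context window is 1M tokens
  maxOutputTokens: 32_000,
  contextWindow: 1_000_000,
  temperature: 0,
  dollarSigns: 5,
};
```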
6. **Add the models to the constants file:**

   Insert each new model entry into the appropriate provider's array in `MODEL_OPTIONS`. Place new models:

   - At the **top** of the provider's array if it's the newest/most capable model
   - After existing models of the same family but before older generations
   - Add a comment with a link to the model's documentation page

   Also check if related arrays need updating:

   - `TURBO_MODELS`: If the model has a turbo variant
   - `PROVIDERS_THAT_SUPPORT_THINKING`: If adding a new provider that supports thinking

7. **Check for named constant exports:**

   If the new model is likely to be referenced elsewhere (like `SONNET_4_5` or `GPT_5_2_MODEL_NAME`), create a named constant export for it. Search the codebase for references to similar constants to determine if one is needed:

   ```
   grep -r "SONNET_4_5\|GPT_5_2_MODEL_NAME\|GEMINI_3_FLASH" src/
   ```

8. **Verify the changes compile:**

   ```
   npm run ts
   ```

   Fix any type errors if they occur.

9. **Summarize what was added:**

   Report to the user:

   - Which models were added and to which providers
   - The key specs for each (context window, max output, pricing tier)
   - Any models that couldn't be found or had ambiguous specifications
   - Any decisions that were made (e.g., choosing between model versions)

package-lock.json

Lines changed: 2 additions & 2 deletions
Some generated files are not rendered by default.

src/ipc/shared/language_model_constants.ts

Lines changed: 28 additions & 6 deletions
```diff
@@ -18,7 +18,7 @@ export interface ModelOption {
 }
 
 export const GPT_5_2_MODEL_NAME = "gpt-5.2";
-export const SONNET_4_5 = "claude-sonnet-4-5-20250929";
+export const SONNET_4_6 = "claude-sonnet-4-6";
 export const GEMINI_3_FLASH = "gemini-3-flash-preview";
 
 export const MODEL_OPTIONS: Record<string, ModelOption[]> = {
@@ -123,6 +123,18 @@ export const MODEL_OPTIONS: Record<string, ModelOption[]> = {
       temperature: 0,
       dollarSigns: 6,
     },
+    // https://docs.anthropic.com/en/docs/about-claude/models/overview
+    {
+      name: SONNET_4_6,
+      displayName: "Claude Sonnet 4.6",
+      description:
+        "Anthropic's fast and intelligent model (note: >200k tokens is very expensive!)",
+      // Set to 32k since context window is 1M tokens
+      maxOutputTokens: 32_000,
+      contextWindow: 1_000_000,
+      temperature: 0,
+      dollarSigns: 5,
+    },
     {
       name: "claude-opus-4-5",
       displayName: "Claude Opus 4.5",
@@ -135,7 +147,7 @@ export const MODEL_OPTIONS: Record<string, ModelOption[]> = {
       dollarSigns: 5,
     },
     {
-      name: SONNET_4_5,
+      name: "claude-sonnet-4-5-20250929",
       displayName: "Claude Sonnet 4.5",
       description:
         "Anthropic's best model for coding (note: >200k tokens is very expensive!)",
@@ -157,11 +169,11 @@ export const MODEL_OPTIONS: Record<string, ModelOption[]> = {
     },
   ],
   google: [
-    // https://ai.google.dev/gemini-api/docs/models#gemini-3-pro
+    // https://ai.google.dev/gemini-api/docs/models/gemini-3.1-pro-preview
     {
-      name: "gemini-3-pro-preview",
-      displayName: "Gemini 3 Pro (Preview)",
-      description: "Google's latest Gemini model",
+      name: "gemini-3.1-pro-preview",
+      displayName: "Gemini 3.1 Pro (Preview)",
+      description: "Google's most capable Gemini model",
       // See Flash 2.5 comment below (go 1 below just to be safe, even though it seems OK now).
       maxOutputTokens: 65_536 - 1,
       // Gemini context window = input token + output token
@@ -249,6 +261,16 @@ export const MODEL_OPTIONS: Record<string, ModelOption[]> = {
       temperature: 1.0,
       dollarSigns: 2,
     },
+    // https://openrouter.ai/minimax/minimax-m2.5
+    {
+      name: "minimax/minimax-m2.5",
+      displayName: "MiniMax M2.5",
+      description: "Strong cost-effective model for real-world productivity",
+      maxOutputTokens: 32_000,
+      contextWindow: 196_608,
+      temperature: 0,
+      dollarSigns: 1,
+    },
     {
       name: "z-ai/glm-5",
       displayName: "GLM 5",
```

src/ipc/utils/get_model_client.ts

Lines changed: 3 additions & 3 deletions
```diff
@@ -19,7 +19,7 @@ import {
   FREE_OPENROUTER_MODEL_NAMES,
   GEMINI_3_FLASH,
   GPT_5_2_MODEL_NAME,
-  SONNET_4_5,
+  SONNET_4_6,
 } from "../shared/language_model_constants";
 import { getLanguageModelProviders } from "../shared/language_model_helpers";
 import { LanguageModelProvider } from "@/ipc/types";
@@ -42,7 +42,7 @@ const AUTO_MODELS = [
   },
   {
     provider: "anthropic",
-    name: SONNET_4_5,
+    name: SONNET_4_6,
   },
   {
     provider: "google",
@@ -214,7 +214,7 @@ function getProModelClient({
     models: [
       // openai requires no prefix.
       provider.responses(`${GPT_5_2_MODEL_NAME}`, { providerId: "openai" }),
-      provider(`anthropic/${SONNET_4_5}`, { providerId: "anthropic" }),
+      provider(`anthropic/${SONNET_4_6}`, { providerId: "anthropic" }),
       provider(`gemini/${GEMINI_3_FLASH}`, { providerId: "google" }),
     ],
   }),
```
