Commit fab299d

feat: Update AI model support and configuration options across providers

1 parent f36ac48 · 17 files changed · +140 −152 lines

CLAUDE.md — 19 additions & 16 deletions

@@ -6,12 +6,13 @@ This file provides guidance to Claude Code (claude.ai/code) when working with co

 **Universal Commit Assistant** is a VS Code extension that generates AI-powered commit messages using multiple AI providers. The extension supports 8 languages and follows modern TypeScript development practices with automated release workflows.

-## Latest Updates (November 2025)
+## Latest Updates (March 2026)

-- **Qwen Provider Added**: Integration with Alibaba Cloud DashScope API supporting Qwen Max, Qwen Plus, and Qwen Turbo models
-- **GPT-5.1 Models**: Updated OpenAI provider with GPT-5.1, GPT-5.1 Codex, and GPT-5.1 Codex Mini
-- **Gemini 3 Models**: Updated Google Gemini provider with Gemini 3 Pro (flagship) and preview models
-- **Enhanced Model Support**: Updated to support latest AI models from all major providers
+- **Gemini 3.1 Pro & 3 Flash**: Added latest Google Gemini models with 1M context window
+- **Claude Opus 4.6 & Sonnet 4.6**: Updated Anthropic provider with latest Claude models
+- **GPT-5.4**: Added OpenAI GPT-5.4 as new default model
+- **Deep Analysis Default**: Increased maxDiffLength to 100k chars for comprehensive code analysis
+- **Code Cleanup**: Removed dead code, fixed progress bug, added cloud provider timeouts, added maxTokens to Gemini/Ollama

 ## Development Commands

@@ -93,13 +94,13 @@ src/
 ├── providers/                 # AI provider implementations
 │   ├── aiProviderFactory.ts   # Factory for creating provider instances
 │   ├── baseProvider.ts        # Abstract base class
-│   ├── anthropicProvider.ts   # Claude 4.5 Haiku/Sonnet/Opus
+│   ├── anthropicProvider.ts   # Claude Haiku 4.5/Sonnet 4.6/Opus 4.6
 │   ├── deepseekProvider.ts    # DeepSeek V3.1 models
-│   ├── geminiProvider.ts      # Google Gemini 3 models
+│   ├── geminiProvider.ts      # Google Gemini 3.1/3/2.5 models
 │   ├── lmstudioProvider.ts    # Local LM Studio integration
 │   ├── mistralProvider.ts     # Mistral AI models
 │   ├── ollamaProvider.ts      # Local Ollama integration
-│   ├── openaiProvider.ts      # OpenAI GPT-5.1 models
+│   ├── openaiProvider.ts      # OpenAI GPT-5.4/5.1 models
 │   ├── openrouterProvider.ts  # OpenRouter proxy service
 │   └── qwenProvider.ts        # Alibaba Qwen models
 ├── services/

@@ -152,26 +153,28 @@ Provider-specific settings use nested structure:
 - Cloud providers: `universal-commit-assistant.openai.model`, `universal-commit-assistant.anthropic.model`, `universal-commit-assistant.deepseek.model`, `universal-commit-assistant.qwen.model`
 - Local providers: `universal-commit-assistant.ollama.baseUrl`, `universal-commit-assistant.lmstudio.model`

-### Latest AI Model Updates (November 2025)
+### Latest AI Model Updates (March 2026)

 **OpenAI Latest Models:**

-- **GPT-5.1** (gpt-5.1) - Latest model balancing intelligence and speed, released November 2025
+- **GPT-5.4** (gpt-5.4) - Latest model, best intelligence and speed (default)
+- **GPT-5.1** (gpt-5.1) - Previous generation, excellent balance
 - **GPT-5.1 Codex** (gpt-5.1-codex) - Optimized for coding tasks
 - **GPT-5.1 Codex Mini** (gpt-5.1-codex-mini) - Fast coding model
 - **GPT-5 Mini** (gpt-5-mini) - Fast and cost-effective, 400K context window
-- **GPT-5** (gpt-5) - Full capability flagship model
+- **GPT-5** (gpt-5) - Previous flagship model

 **Anthropic Latest Models:**

-- **Claude Haiku 4.5** (claude-haiku-4-5-20251001) - Fast and cost-effective, released October 2025, 200K context window
-- **Claude Sonnet 4.5** (claude-sonnet-4-5-20250929) - Enhanced reasoning and coding capabilities, released September 2025
-- **Claude Opus 4.1** (claude-opus-4-1-20250805) - Most capable model with extended thinking, released August 2025
+- **Claude Haiku 4.5** (claude-haiku-4-5-20251001) - Fast and cost-effective (default), 200K context window
+- **Claude Sonnet 4.6** (claude-sonnet-4-6) - Enhanced reasoning and coding capabilities
+- **Claude Opus 4.6** (claude-opus-4-6) - Most capable model with extended thinking

 **Google Gemini Latest Models:**

-- **Gemini 3 Pro** (gemini-3-pro) - Latest flagship model with 1M context window, released November 2025
-- **Gemini 3 Pro Preview** (gemini-3-pro-preview-11-2025) - Preview version of Gemini 3
+- **Gemini 3.1 Pro** (gemini-3.1-pro-preview) - Reasoning-first model with 1M context (default)
+- **Gemini 3 Flash** (gemini-3-flash-preview) - Fast with strong coding and reasoning
+- **Gemini 3 Pro** (gemini-3-pro) - Previous flagship (now aliases to 3.1 Pro)
 - **Gemini 2.5 Flash** (gemini-2.5-flash) - Fast and cost-effective
 - **Gemini 2.5 Pro** (gemini-2.5-pro) - Advanced reasoning with thinking mode

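The provider tree above centers on `aiProviderFactory.ts`, which maps the configured provider id to a concrete implementation. As an illustration of the registry-based factory pattern this implies — all names here are hypothetical, not the extension's actual API — such a factory might look like:

```typescript
// Hypothetical sketch of a provider factory: a registry maps provider ids
// (as they would appear in settings) to constructors. Illustrative only.
interface AIProvider {
  generateCommitMessage(diff: string): Promise<string>;
}

// Stand-in provider used purely for the sketch.
class EchoProvider implements AIProvider {
  async generateCommitMessage(diff: string): Promise<string> {
    return `chore: update (${diff.length} chars changed)`;
  }
}

const registry: Record<string, () => AIProvider> = {
  echo: () => new EchoProvider(),
};

function createProvider(id: string): AIProvider {
  const make = registry[id];
  if (!make) {
    throw new Error(`Unknown provider: ${id}`);
  }
  return make();
}
```

With this shape, supporting a new provider means registering one more entry rather than growing a switch statement.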
README.md — 8 additions & 8 deletions

@@ -45,9 +45,9 @@ Choose the provider that fits your needs and budget:
 | Provider          | Best For            | Latest Models                          |
 | ----------------- | ------------------- | -------------------------------------- |
-| **OpenAI**        | General purpose     | GPT-5.1, GPT-5.1 Codex, GPT-5          |
-| **Anthropic**     | Code understanding  | Claude Haiku 4.5, Sonnet 4.5, Opus 4.1 |
-| **Google Gemini** | Fast responses      | Gemini 3 Pro, Gemini 2.5 Flash         |
+| **OpenAI**        | General purpose     | GPT-5.4, GPT-5.1, GPT-5.1 Codex, GPT-5 |
+| **Anthropic**     | Code understanding  | Claude Haiku 4.5, Sonnet 4.6, Opus 4.6 |
+| **Google Gemini** | Fast responses      | Gemini 3.1 Pro, Gemini 3 Flash         |
 | **Mistral**       | European compliance | Mistral Small, Mistral Large           |
 | **DeepSeek**      | Cost-effective      | DeepSeek-V3.1 Chat & Reasoner          |
 | **Qwen**          | Alibaba Cloud       | Qwen Max, Qwen Plus, Qwen Turbo        |

@@ -81,7 +81,7 @@ Access via `Settings` > `Extensions` > `Universal Commit Assistant`:
 - **Temperature**: Control creativity (0 = consistent, 2 = creative)
 - **Max Tokens**: Message length limit (100-500)
-- **Max Diff Length**: Maximum characters of git diff to send to AI (1000-10000, default: 3000)
+- **Max Diff Length**: Maximum characters of git diff to send to AI (1000-100000, default: 100000)
 - **Detect First Commit**: Automatically detect and generate appropriate initial commit messages (default: enabled)
 - **Custom Prompt**: Override default instructions

@@ -164,18 +164,18 @@ Based on enhanced analysis (~2,500 input tokens + 150 output tokens average):
 | Provider             | Model             | Cost per Commit | Cost per 100 Commits |
 | -------------------- | ----------------- | --------------- | -------------------- |
-| **OpenAI**           | GPT-5.1           | ~$0.0008        | ~$0.08               |
+| **OpenAI**           | GPT-5.4           | ~$0.0008        | ~$0.08               |
 | **OpenAI**           | GPT-5.1 Codex     | ~$0.005         | ~$0.50               |
 | **Anthropic**        | Claude Haiku 4.5  | ~$0.003         | ~$0.30               |
-| **Anthropic**        | Claude Sonnet 4.5 | ~$0.010         | ~$1.00               |
+| **Anthropic**        | Claude Sonnet 4.6 | ~$0.010         | ~$1.00               |
 | **DeepSeek**         | deepseek-chat     | ~$0.0009        | ~$0.09               |
-| **Gemini**           | Gemini 3 Pro      | ~$0.0003        | ~$0.03               |
+| **Gemini**           | Gemini 3.1 Pro    | ~$0.0003        | ~$0.03               |
 | **Qwen**             | Qwen Plus         | ~$0.0006        | ~$0.06               |
 | **Ollama/LM Studio** | Local Models      | **FREE**        | **FREE**             |

 ### Cost Optimization Tips

-1. **Use Cost-Effective Providers**: Gemini 3 Pro, DeepSeek, Qwen, and GPT-5.1 offer excellent quality at low cost
+1. **Use Cost-Effective Providers**: Gemini 3 Pro, DeepSeek, Qwen, and GPT-5.4 offer excellent quality at low cost
 2. **Adjust Max Diff Length**: Reduce `maxDiffLength` to 2000 for simpler commits
 3. **Local Models**: Use Ollama or LM Studio for completely free operation
 4. **Smart Defaults**: The extension uses intelligent truncation to minimize tokens while preserving context
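
The cost table in the README hunk above follows from simple per-token arithmetic. A sketch of that calculation using the README's ~2,500 input / 150 output token averages — the per-million-token prices passed in are illustrative assumptions, not published pricing:

```typescript
// Cost per commit = input tokens * input price + output tokens * output price,
// with prices quoted per million tokens. Token counts default to the
// README's averages; prices are hypothetical example values.
function costPerCommit(
  inputPricePerMTok: number,
  outputPricePerMTok: number,
  inputTokens = 2500,
  outputTokens = 150
): number {
  return (
    (inputTokens / 1_000_000) * inputPricePerMTok +
    (outputTokens / 1_000_000) * outputPricePerMTok
  );
}

// For a model priced at $0.30/M input and $0.60/M output:
// 2500 * 0.30/1e6 + 150 * 0.60/1e6 ≈ $0.00084 per commit
console.log(costPerCommit(0.3, 0.6).toFixed(5));
```

This is also why input-side settings like `maxDiffLength` dominate the bill: the ~2,500 input tokens outweigh the ~150 output tokens at most price ratios.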

SETTINGS.md — 20 additions & 16 deletions

@@ -109,29 +109,33 @@ Complete configuration guide for customizing Universal Commit Assistant to your
 ### OpenAI
 ```json
-"universal-commit-assistant.openai.model": "gpt-5-mini"
+"universal-commit-assistant.openai.model": "gpt-5.4"
 ```
 **Available Models**:
-- `gpt-5-mini`: Fast and cost-effective (recommended)
-- `gpt-5`: Flagship model with state-of-the-art capabilities
+- `gpt-5.4`: Latest model, best intelligence and speed (recommended)
+- `gpt-5.1`: Previous generation, excellent balance
+- `gpt-5.1-codex`: Optimized for coding tasks
+- `gpt-5-mini`: Fast and cost-effective
+- `gpt-5`: Previous flagship model

 ### Anthropic
 ```json
 "universal-commit-assistant.anthropic.model": "claude-haiku-4-5-20251001"
 ```
 **Popular Models**:
 - `claude-haiku-4-5-20251001`: Fast and cost-effective (recommended)
-- `claude-sonnet-4-5-20250929`: Enhanced reasoning and coding capabilities
-- `claude-opus-4-1-20250805`: Most capable with extended thinking
+- `claude-sonnet-4-6`: Enhanced reasoning and coding capabilities
+- `claude-opus-4-6`: Most capable with extended thinking

 ### Google Gemini
 ```json
-"universal-commit-assistant.gemini.model": "gemini-1.5-flash"
+"universal-commit-assistant.gemini.model": "gemini-3.1-pro-preview"
 ```
 **Popular Models**:
-- `gemini-1.5-flash`: Fast and efficient
-- `gemini-2.0-flash-exp`: Latest experimental model
-- `gemini-1.5-pro`: More capable version
+- `gemini-3.1-pro-preview`: Reasoning-first model with 1M context (recommended)
+- `gemini-3-flash-preview`: Fast with strong coding and reasoning
+- `gemini-3-pro`: Previous flagship (aliases to 3.1 Pro)
+- `gemini-2.5-flash`: Fast and cost-effective

 ### Mistral AI
 ```json

@@ -170,10 +174,10 @@ Complete configuration guide for customizing Universal Commit Assistant to your
 ### OpenRouter
 ```json
-"universal-commit-assistant.openrouter.model": "openai/gpt-5-mini"
+"universal-commit-assistant.openrouter.model": "openai/gpt-5.4"
 ```
 **Popular Models**:
-- `openai/gpt-5-mini`: OpenAI via OpenRouter
+- `openai/gpt-5.4`: OpenAI via OpenRouter
 - `anthropic/claude-haiku-4-5`: Anthropic via OpenRouter
 - `google/gemini-2.0-flash-exp`: Google via OpenRouter
 - `qwen/qwen-2.5-coder-32b-instruct`: Specialized coding model

@@ -207,15 +211,15 @@ Access via Command Palette (`Ctrl+Shift+P`):
 }
 ```

-### Detailed Commits with GPT-5
+### Detailed Commits with GPT-5.4
 ```json
 {
   "universal-commit-assistant.provider": "openai",
   "universal-commit-assistant.messageStyle": "detailed",
   "universal-commit-assistant.temperature": 0.4,
   "universal-commit-assistant.maxTokens": 200,
   "universal-commit-assistant.language": "english",
-  "universal-commit-assistant.openai.model": "gpt-5"
+  "universal-commit-assistant.openai.model": "gpt-5.4"
 }
 ```

@@ -237,7 +241,7 @@ Access via Command Palette (`Ctrl+Shift+P`):
   "universal-commit-assistant.messageStyle": "conventional",
   "universal-commit-assistant.language": "chinese",
   "universal-commit-assistant.temperature": 0.3,
-  "universal-commit-assistant.gemini.model": "gemini-1.5-flash"
+  "universal-commit-assistant.gemini.model": "gemini-3.1-pro-preview"
 }
 ```

@@ -287,12 +291,12 @@ Access via Command Palette (`Ctrl+Shift+P`):
 #### Faster Responses
 - Use local providers (Ollama/LM Studio)
-- Choose faster models (gpt-5-mini, claude-haiku-4-5)
+- Choose faster models (gpt-5.4, claude-haiku-4-5)
 - Reduce maxTokens for shorter messages
 - Set lower temperature for more deterministic output

 #### Better Quality
-- Use premium models (gpt-5, claude-sonnet-4-5, claude-opus-4-1)
+- Use premium models (gpt-5.4, claude-sonnet-4-6, claude-opus-4-6)
 - Include unstaged changes for more context
 - Customize system prompt for your coding style
 - Use detailed message style for complex changes

package.json — 19 additions & 15 deletions

@@ -181,10 +181,10 @@
     },
     "universal-commit-assistant.maxDiffLength": {
       "type": "number",
-      "default": 3000,
+      "default": 100000,
       "minimum": 1000,
-      "maximum": 10000,
-      "description": "Maximum characters of git diff to send to AI (uses smart truncation for larger diffs)",
+      "maximum": 100000,
+      "description": "Maximum characters of git diff to send to AI for deep analysis (uses smart truncation for larger diffs)",
       "order": 8
     },
     "universal-commit-assistant.detectFirstCommit": {

@@ -206,20 +206,22 @@
   "properties": {
     "universal-commit-assistant.openai.model": {
       "type": "string",
-      "default": "gpt-5.1",
+      "default": "gpt-5.4",
       "enum": [
+        "gpt-5.4",
         "gpt-5.1",
         "gpt-5.1-codex",
         "gpt-5.1-codex-mini",
         "gpt-5-mini",
         "gpt-5"
       ],
       "enumDescriptions": [
-        "GPT-5.1 - Latest model balancing intelligence and speed (recommended)",
+        "GPT-5.4 - Latest model, best intelligence and speed (recommended)",
+        "GPT-5.1 - Previous generation, excellent balance",
         "GPT-5.1 Codex - Optimized for coding tasks",
         "GPT-5.1 Codex Mini - Fast coding model",
         "GPT-5 Mini - Fast and cost-effective",
-        "GPT-5 - Full capability flagship model"
+        "GPT-5 - Previous flagship model"
       ],
       "description": "OpenAI model to use"
     },

@@ -228,28 +230,30 @@
       "default": "claude-haiku-4-5-20251001",
       "enum": [
         "claude-haiku-4-5-20251001",
-        "claude-sonnet-4-5-20250929",
-        "claude-opus-4-1-20250805"
+        "claude-sonnet-4-6",
+        "claude-opus-4-6"
       ],
       "enumDescriptions": [
         "Claude Haiku 4.5 - Fast and cost-effective (recommended)",
-        "Claude Sonnet 4.5 - Enhanced reasoning and coding capabilities",
-        "Claude Opus 4.1 - Most capable model with extended thinking"
+        "Claude Sonnet 4.6 - Enhanced reasoning and coding capabilities",
+        "Claude Opus 4.6 - Most capable model with extended thinking"
       ],
       "description": "Anthropic model to use"
     },
     "universal-commit-assistant.gemini.model": {
       "type": "string",
-      "default": "gemini-3-pro",
+      "default": "gemini-3.1-pro-preview",
      "enum": [
+        "gemini-3.1-pro-preview",
+        "gemini-3-flash-preview",
         "gemini-3-pro",
-        "gemini-3-pro-preview-11-2025",
         "gemini-2.5-flash",
         "gemini-2.5-pro"
       ],
       "enumDescriptions": [
-        "Gemini 3 Pro - Latest flagship model with 1M context (recommended)",
-        "Gemini 3 Pro Preview - Preview version of Gemini 3",
+        "Gemini 3.1 Pro - Reasoning-first model with 1M context (recommended)",
+        "Gemini 3 Flash - Fast with strong coding and reasoning",
+        "Gemini 3 Pro - Previous flagship (now aliases to 3.1 Pro)",
         "Gemini 2.5 Flash - Fast and cost-effective",
         "Gemini 2.5 Pro - Advanced reasoning with thinking mode"
       ],

@@ -262,7 +266,7 @@
     },
     "universal-commit-assistant.openrouter.model": {
       "type": "string",
-      "default": "openai/gpt-5-mini",
+      "default": "openai/gpt-5.4",
       "description": "OpenRouter model to use"
     },
     "universal-commit-assistant.deepseek.model": {

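The `maxDiffLength` description above promises "smart truncation" for diffs over the limit. A hypothetical sketch of what such truncation could look like — not the extension's actual implementation — is to cut at file boundaries rather than mid-hunk, so every chunk the model sees is a complete per-file diff:

```typescript
// Hypothetical "smart truncation" sketch: keep whole per-file chunks of a
// git diff until the character budget runs out, instead of slicing mid-hunk.
function truncateDiff(diff: string, maxLength: number): string {
  if (diff.length <= maxLength) return diff;
  // Split at file boundaries ("diff --git ..." headers), keeping each header
  // attached to its own chunk via a lookahead.
  const chunks = diff.split(/(?=^diff --git )/m);
  let out = "";
  for (const chunk of chunks) {
    if (out.length + chunk.length > maxLength) break;
    out += chunk;
  }
  // Fall back to a hard cut if even the first file exceeds the budget.
  return out.length > 0 ? out : diff.slice(0, maxLength);
}
```

With the new 100k default, most diffs fit untruncated; the boundary-aware cut only matters for very large changesets.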
src/providers/anthropicProvider.ts — 2 additions & 1 deletion

@@ -45,8 +45,9 @@ export class AnthropicProvider extends BaseProvider {
         headers: {
           "x-api-key": apiKey,
           "Content-Type": "application/json",
-          "anthropic-version": "2023-06-01",
+          "anthropic-version": "2024-10-22",
         },
+        timeout: 30000,
       }
     );

src/providers/deepseekProvider.ts — 1 addition & 0 deletions

@@ -49,6 +49,7 @@ export class DeepSeekProvider extends BaseProvider {
           Authorization: `Bearer ${apiKey}`,
           "Content-Type": "application/json",
         },
+        timeout: 30000,
       }
     );

src/providers/geminiProvider.ts — 3 additions & 0 deletions

@@ -22,6 +22,7 @@ export class GeminiProvider extends BaseProvider {
     const style = options?.style || this.configManager.getMessageStyle();
     const language = this.configManager.getLanguage();

+    const maxTokens = options?.maxTokens || (style === "detailed" ? 300 : this.configManager.getMaxTokens());
     const userPrompt = this.buildPrompt(changes, style, options?.customPrompt, language, options?.isFirstCommit);
     const fullPrompt = `${systemPrompt}\n\nUser request: ${userPrompt}\n\nPlease respond with ONLY the commit message, no explanations or additional text.`;

@@ -40,13 +41,15 @@
         ],
         generationConfig: {
           temperature: temperature,
+          maxOutputTokens: maxTokens,
         },
       },
       {
         headers: {
           "x-goog-api-key": apiKey,
           "Content-Type": "application/json",
         },
+        timeout: 30000,
       }
     );

src/providers/mistralProvider.ts — 1 addition & 0 deletions

@@ -49,6 +49,7 @@ export class MistralProvider extends BaseProvider {
           Authorization: `Bearer ${apiKey}`,
           "Content-Type": "application/json",
         },
+        timeout: 30000,
       }
     );

src/providers/ollamaProvider.ts — 2 additions & 0 deletions

@@ -15,6 +15,7 @@ export class OllamaProvider extends BaseProvider {
     const systemPrompt = this.configManager.getSystemPrompt();
     const style = options?.style || this.configManager.getMessageStyle();
     const language = this.configManager.getLanguage();
+    const maxTokens = options?.maxTokens || (style === "detailed" ? 300 : this.configManager.getMaxTokens());

     const userPrompt = this.buildPrompt(changes, style, options?.customPrompt, language, options?.isFirstCommit);
     const fullPrompt = `${systemPrompt}\n\nUser request: ${userPrompt}\n\nPlease respond with ONLY the commit message, no explanations or additional text.`;

@@ -28,6 +29,7 @@
         stream: false,
         options: {
           temperature: temperature,
+          num_predict: maxTokens,
         },
       },
       {
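
The Gemini and Ollama diffs add the same `maxTokens` fallback chain. Extracted as a standalone function (a hypothetical helper, purely to make the precedence visible), the order is: an explicit per-call option wins, then the "detailed" style bumps the cap to 300, otherwise the configured default applies:

```typescript
// Same fallback chain as `options?.maxTokens || (style === "detailed" ? 300 : ...)`
// in the diffs above, with the config lookup replaced by a plain parameter.
function resolveMaxTokens(
  optionMaxTokens: number | undefined,
  style: string,
  configuredMax: number
): number {
  return optionMaxTokens || (style === "detailed" ? 300 : configuredMax);
}

console.log(resolveMaxTokens(undefined, "detailed", 150)); // 300
console.log(resolveMaxTokens(500, "detailed", 150));       // 500
console.log(resolveMaxTokens(undefined, "concise", 150));  // 150
```

Note that `||` (rather than `??`) also treats an explicit `0` as "unset", which is the desired behavior here since a zero-token cap is never useful.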

src/providers/openaiProvider.ts — 1 addition & 0 deletions

@@ -49,6 +49,7 @@ export class OpenAIProvider extends BaseProvider {
           Authorization: `Bearer ${apiKey}`,
           "Content-Type": "application/json",
         },
+        timeout: 30000,
       }
     );
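
The `timeout: 30000` lines added across the cloud providers bound each HTTP request to 30 seconds, so a stalled API can no longer hang commit-message generation indefinitely. A library-agnostic sketch of the same idea — race the request promise against a timer — though the diffs above simply pass `timeout` to the HTTP client's request config:

```typescript
// Illustrative timeout wrapper: resolves with the request's result, or
// rejects once `ms` milliseconds elapse, whichever happens first.
function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  return Promise.race([
    promise,
    new Promise<T>((_, reject) =>
      setTimeout(() => reject(new Error(`Timed out after ${ms} ms`)), ms)
    ),
  ]);
}
```

One caveat of this sketch: losing the race rejects the caller's promise but does not abort the underlying request, which is why passing the client's native `timeout` option (as the commit does) is the cleaner route.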
