Commit 58edfba

feat: add support for gemini-3-pro-preview model (qodo-ai#2202)
* feat: add support for gemini-3-pro-preview model

  - Add gemini/gemini-3-pro-preview with 1,048,576 max tokens
  - Add vertex_ai/gemini-3-pro-preview with 1,048,576 max tokens
  - Add test coverage for both model variants
  - Update documentation with usage examples for both variants

  This enables users to utilize Google's Gemini 3 Pro Preview model through both Google AI Studio and Vertex AI providers with full 1M+ token context window support.

* refactor: streamline test for gemini-3-pro-preview model

  - Consolidate test cases for gemini-3-pro-preview into a parameterized test
  - Remove redundant assertions and simplify the test structure
  - Ensure both Google AI Studio and Vertex AI variants are covered in a single test

  This enhances maintainability and readability of the test suite for the gemini-3-pro-preview model.
1 parent 0c681c2 commit 58edfba

File tree

3 files changed: +24 additions, -0 deletions


docs/docs/usage-guide/qodo_merge_models.md

Lines changed: 8 additions & 0 deletions

````diff
@@ -10,6 +10,7 @@ The models supported by Qodo Merge are:
 - `anthropic/claude-sonnet-4-5-20250929`
 - `vertex_ai/gemini-2.5-pro`
 - `vertex_ai/gemini-3-pro-preview`
+- `gemini/gemini-3-pro-preview`
 - `gpt-5-2025-08-07`
 - `gpt-5.2-2025-12-11`
@@ -41,6 +42,13 @@ To restrict Qodo Merge to using `vertex_ai/gemini-3-pro-preview`:
 model="vertex_ai/gemini-3-pro-preview"
 ```
 
+To restrict Qodo Merge to using `gemini/gemini-3-pro-preview`:
+
+```toml
+[config]
+model="gemini/gemini-3-pro-preview"
+```
+
 To restrict Qodo Merge to using `gpt-5-2025-08-07`:
 
 ```toml
````

pr_agent/algo/__init__.py

Lines changed: 2 additions & 0 deletions

```diff
@@ -88,6 +88,7 @@
     'vertex_ai/gemini-2.5-flash-preview-04-17': 1048576,
     'vertex_ai/gemini-2.5-flash-preview-05-20': 1048576,
     'vertex_ai/gemini-2.5-flash': 1048576,
+    'vertex_ai/gemini-3-pro-preview': 1048576,
     'vertex_ai/gemma2': 8200,
     'gemini/gemini-1.5-pro': 1048576,
     'gemini/gemini-1.5-flash': 1048576,
@@ -99,6 +100,7 @@
     'gemini/gemini-2.5-pro-preview-05-06': 1048576,
     'gemini/gemini-2.5-pro-preview-06-05': 1048576,
     'gemini/gemini-2.5-pro': 1048576,
+    'gemini/gemini-3-pro-preview': 1048576,
     'codechat-bison': 6144,
     'codechat-bison-32k': 32000,
     'anthropic.claude-instant-v1': 100000,
```
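The dictionary entries added here back a lookup along the lines of `get_max_tokens`. As a rough sketch of how such a registry resolves a model's context window (simplified hypothetical code with an assumed fallback value, not pr_agent's actual implementation):

```python
# Illustrative model -> max-token registry; entries mirror the ones
# added in this commit, plus one small model for contrast.
MAX_TOKENS = {
    'vertex_ai/gemini-3-pro-preview': 1048576,
    'gemini/gemini-3-pro-preview': 1048576,
    'codechat-bison': 6144,
}

def get_max_tokens(model: str, default: int = 4096) -> int:
    """Return the context-window size for a model, or a default if unknown."""
    return MAX_TOKENS.get(model, default)

print(get_max_tokens('gemini/gemini-3-pro-preview'))  # 1048576
```

A plain dict lookup keeps provider-prefixed names (`vertex_ai/...` vs `gemini/...`) as distinct keys, which is why the commit adds both variants separately.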

tests/unittest/test_get_max_tokens.py

Lines changed: 14 additions & 0 deletions

```diff
@@ -66,6 +66,20 @@ def test_model_max_tokens_with__limit(self, monkeypatch):
 
         assert get_max_tokens(model) == expected
 
+    @pytest.mark.parametrize("model", [
+        "gemini/gemini-3-pro-preview",
+        "vertex_ai/gemini-3-pro-preview",
+    ])
+    def test_gemini_3_pro_preview(self, monkeypatch, model):
+        fake_settings = type("", (), {
+            "config": type("", (), {
+                "custom_model_max_tokens": 0,
+                "max_model_tokens": 0,
+            })()
+        })()
+        monkeypatch.setattr(utils, "get_settings", lambda: fake_settings)
+        assert get_max_tokens(model) == 1048576
+
     @pytest.mark.parametrize(
         "model",
         [
```
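The `type("", (), {...})()` calls in the new test build anonymous objects with preset attributes on the fly, avoiding a mocking library. A self-contained sketch of the same idiom, outside any test framework:

```python
# Create a throwaway "settings" object with nested attributes using
# three-argument type(), the same idiom the parameterized test uses.
fake_settings = type("", (), {
    "config": type("", (), {
        "custom_model_max_tokens": 0,
        "max_model_tokens": 0,
    })()  # instantiate the inner anonymous class
})()      # instantiate the outer anonymous class

# Attribute access works like on any ordinary object.
print(fake_settings.config.custom_model_max_tokens)  # 0
```

Three-argument `type(name, bases, namespace)` creates a class, and the trailing `()` instantiates it immediately; with both token-limit settings forced to 0, the test exercises the registry's default path.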
