
Commit 4fdc677

Add support for Gemini 2.0-flash and 2.5-pro models
Signed-off-by: JR <jrchang11015043@gmail.com>
1 parent b560f08 · commit 4fdc677

File tree

- README.md
- backend/src/api/routers/chains.py
- backend/src/api/routers/graphs.py

3 files changed, +27 -20 lines changed

README.md

Lines changed: 5 additions & 6 deletions
````diff
@@ -49,12 +49,11 @@ For more information, please refer to this [link](https://docs.docker.com/refere
 You can specify the Gemini model version using the environment variable `GOOGLE_GEMINI` in your `.env` file.
 The following models are supported:
 
-| Environment Value | Model Name | Description |
-|------------------|------------------|-------------|
-| `1_pro` | `gemini-pro` | Legacy Gemini 1 Pro model (Vertex AI / Generative AI Studio). |
-| `1.5_flash` | `gemini-1.5-flash` | Lightweight, faster model suitable for low-latency tasks. |
-| `1.5_pro` | `gemini-1.5-pro` | More capable model for complex reasoning and higher-quality outputs. |
-| `2.5_flash` | `gemini-2.5-flash` | Latest generation, faster and more accurate than 1.5_flash. |
+| Environment Value | Model Name | Description |
+| :--- | :--- | :--- |
+| `2.0_flash` | `gemini-2.0-flash` | Next-generation lightweight model with improved speed and efficiency. |
+| `2.5_flash` | `gemini-2.5-flash` | Latest generation, faster and more accurate than 1.5_flash. |
+| `2.5_pro` | `gemini-2.5-pro` | Latest generation, state-of-the-art model for the most demanding tasks. |
 
 Set the model by updating your `.env` file:
 ```bash
````
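As a quick sanity check after editing `.env`, the chosen value can be validated against the table above. This is a minimal sketch, not code from this repository, and loading the file with `python-dotenv` is an assumption about the backend's setup:

```python
import os

from dotenv import load_dotenv  # assumption: python-dotenv is available; the backend may load .env differently

# Example .env content matching the table above:
#   GOOGLE_GEMINI=2.5_pro

load_dotenv()  # read variables from .env in the working directory

SUPPORTED = {"2.0_flash", "2.5_flash", "2.5_pro"}
value = os.getenv("GOOGLE_GEMINI")
if value not in SUPPORTED:
    raise SystemExit(f"GOOGLE_GEMINI={value!r}; expected one of {sorted(SUPPORTED)}")
print(f"GOOGLE_GEMINI={value}")
```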

backend/src/api/routers/chains.py

Lines changed: 11 additions & 7 deletions
```diff
@@ -54,14 +54,18 @@
         llm = ChatOllama(model=model_name, temperature=llm_temp)
 
     elif os.getenv("LLM_MODEL") == "gemini":
-        if os.getenv("GOOGLE_GEMINI") == "1_pro":
-            llm = ChatGoogleGenerativeAI(model="gemini-pro", temperature=llm_temp)
-        elif os.getenv("GOOGLE_GEMINI") == "1.5_flash":
-            llm = ChatVertexAI(model_name="gemini-1.5-flash", temperature=llm_temp)
-        elif os.getenv("GOOGLE_GEMINI") == "1.5_pro":
-            llm = ChatVertexAI(model_name="gemini-1.5-pro", temperature=llm_temp)
-        elif os.getenv("GOOGLE_GEMINI") == "2.5_flash":
+        gemini_model = os.getenv("GOOGLE_GEMINI")
+        if gemini_model in {"1_pro", "1.5_flash", "1.5_pro"}:
+            raise ValueError(
+                f"The selected Gemini model '{gemini_model}' (version 1.0–1.5) is disabled. "
+                "Please upgrade to version 2.0 or higher (e.g., 2.0_flash, 2.5_pro)."
+            )
+        elif gemini_model == "2.0_flash":
+            llm = ChatVertexAI(model_name="gemini-2.0-flash", temperature=llm_temp)
+        elif gemini_model == "2.5_flash":
             llm = ChatVertexAI(model_name="gemini-2.5-flash", temperature=llm_temp)
+        elif gemini_model == "2.5_pro":
+            llm = ChatVertexAI(model_name="gemini-2.5-pro", temperature=llm_temp)
         else:
             raise ValueError("GOOGLE_GEMINI environment variable not set to a valid value.")
 
```
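The same `elif` chain is duplicated verbatim in `graphs.py` below. As an editorial sketch only, not part of this commit, the selection could live in one shared helper; the `langchain_google_vertexai` import path and the helper name are assumptions, since the diff does not show the file's imports:

```python
import os

from langchain_google_vertexai import ChatVertexAI  # assumed import path; not shown in the diff

# Environment values accepted after this commit, mapped to Vertex AI model names.
_GEMINI_MODELS = {
    "2.0_flash": "gemini-2.0-flash",
    "2.5_flash": "gemini-2.5-flash",
    "2.5_pro": "gemini-2.5-pro",
}
_LEGACY_VALUES = {"1_pro", "1.5_flash", "1.5_pro"}


def build_gemini_llm(llm_temp: float) -> ChatVertexAI:
    """Create the Gemini chat model selected via GOOGLE_GEMINI (illustrative helper, not from the repo)."""
    gemini_model = os.getenv("GOOGLE_GEMINI")
    if gemini_model in _LEGACY_VALUES:
        raise ValueError(
            f"The selected Gemini model '{gemini_model}' (version 1.0-1.5) is disabled. "
            "Please upgrade to version 2.0 or higher (e.g., 2.0_flash, 2.5_pro)."
        )
    if gemini_model not in _GEMINI_MODELS:
        raise ValueError("GOOGLE_GEMINI environment variable not set to a valid value.")
    return ChatVertexAI(model_name=_GEMINI_MODELS[gemini_model], temperature=llm_temp)
```

Both routers could then call `build_gemini_llm(llm_temp)` in their `gemini` branch instead of repeating the chain.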

backend/src/api/routers/graphs.py

Lines changed: 11 additions & 7 deletions
```diff
@@ -75,14 +75,18 @@
         llm = ChatOllama(model=model_name, temperature=llm_temp)
 
     elif os.getenv("LLM_MODEL") == "gemini":
-        if os.getenv("GOOGLE_GEMINI") == "1_pro":
-            llm = ChatGoogleGenerativeAI(model="gemini-pro", temperature=llm_temp)
-        elif os.getenv("GOOGLE_GEMINI") == "1.5_flash":
-            llm = ChatVertexAI(model_name="gemini-1.5-flash", temperature=llm_temp)
-        elif os.getenv("GOOGLE_GEMINI") == "1.5_pro":
-            llm = ChatVertexAI(model_name="gemini-1.5-pro", temperature=llm_temp)
-        elif os.getenv("GOOGLE_GEMINI") == "2.5_flash":
+        gemini_model = os.getenv("GOOGLE_GEMINI")
+        if gemini_model in {"1_pro", "1.5_flash", "1.5_pro"}:
+            raise ValueError(
+                f"The selected Gemini model '{gemini_model}' (version 1.0–1.5) is disabled. "
+                "Please upgrade to version 2.0 or higher (e.g., 2.0_flash, 2.5_pro)."
+            )
+        elif gemini_model == "2.0_flash":
+            llm = ChatVertexAI(model_name="gemini-2.0-flash", temperature=llm_temp)
+        elif gemini_model == "2.5_flash":
             llm = ChatVertexAI(model_name="gemini-2.5-flash", temperature=llm_temp)
+        elif gemini_model == "2.5_pro":
+            llm = ChatVertexAI(model_name="gemini-2.5-pro", temperature=llm_temp)
         else:
             raise ValueError("GOOGLE_GEMINI environment variable not set to a valid value.")
 
```
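Since `graphs.py` receives the identical change, the new behavior can be exercised once. Below is a hedged pytest-style sketch that mirrors the guard added in both routers; it re-implements the check locally rather than importing from the application, and the function name is illustrative:

```python
import pytest

LEGACY_GEMINI_VALUES = {"1_pro", "1.5_flash", "1.5_pro"}


def ensure_supported_gemini(value: str) -> str:
    """Mirror of the guard added in chains.py/graphs.py (illustrative, not imported from the app)."""
    if value in LEGACY_GEMINI_VALUES:
        raise ValueError(
            f"The selected Gemini model '{value}' (version 1.0-1.5) is disabled. "
            "Please upgrade to version 2.0 or higher (e.g., 2.0_flash, 2.5_pro)."
        )
    return value


def test_legacy_values_are_rejected():
    # Every pre-2.0 environment value should now raise instead of building an LLM.
    for value in LEGACY_GEMINI_VALUES:
        with pytest.raises(ValueError, match="is disabled"):
            ensure_supported_gemini(value)


def test_current_values_pass_through():
    # Supported values are returned unchanged and would go on to select a Vertex AI model.
    assert ensure_supported_gemini("2.5_pro") == "2.5_pro"
```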
