docs: document Gemini and VertexAI provider setup and usage
- Add documentation for using both Gemini API and VertexAI Gemini as providers, including configuration options and example commands
- Introduce a configuration options table detailing required parameters for Gemini and VertexAI backends
- Provide separate example configuration commands for Gemini API and VertexAI Gemini usage
- Replace the sequence diagram with a flowchart to better illustrate the interaction between user, CodeGPT, Gemini API, and VertexAI
- Update all three README files (English, Simplified Chinese, Traditional Chinese) to reflect these improvements and additions
Signed-off-by: Bo-Yi Wu <[email protected]>
- [Support for Anthropic API Service](#support-for-anthropic-api-service)
- [How to Change to Groq API Service](#how-to-change-to-groq-api-service)
- [How to Change to Ollama API Service](#how-to-change-to-ollama-api-service)
### Support for [Gemini][60] API Service
You can use the Gemini API or VertexAI Gemini service. See the [Gemini API documentation][61] and the [VertexAI documentation][63].

Update the following parameters in your config file.

- Please create an API key from the [Gemini API][62] page (for `BackendGeminiAPI`) or from [VertexAI API Key][64] (for `BackendVertexAI`).

#### Configuration Options

| Option | Description | Example Value | Required | Default |
| --- | --- | --- | --- | --- |
| **openai.provider** | Set to `gemini` to use the Gemini provider | `gemini` | Yes | |
| **gemini.api_key** | API key for Gemini or VertexAI | `xxxxxxx` | Yes | |
| **gemini.model** | Model name (see [Gemini models][61]) | `gemini-2.0-flash` | Yes | |
| **gemini.backend** | Gemini backend: `BackendGeminiAPI` (default, for the Gemini API) or `BackendVertexAI` (for VertexAI) | `BackendGeminiAPI` | No | `BackendGeminiAPI` |
| **gemini.project** | VertexAI project ID (required when using `BackendVertexAI`) | `my-gcp-project` | Cond. | |
| **gemini.location** | VertexAI location (required when using `BackendVertexAI`) | `us-central1` | Cond. | |
#### Example: Gemini API (default backend)
```sh
codegpt config set openai.provider gemini
codegpt config set openai.model gemini-2.0-flash
codegpt config set gemini.api_key xxxxxxx
# gemini.backend defaults to BackendGeminiAPI, so you can omit it
```
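For the `BackendVertexAI` backend, here is a minimal sketch based on the options documented in the table above; the project ID and location values are placeholders for your own GCP settings:

```sh
codegpt config set openai.provider gemini
codegpt config set openai.model gemini-2.0-flash
codegpt config set gemini.api_key xxxxxxx
# switch the backend to VertexAI and point it at your GCP project and region
codegpt config set gemini.backend BackendVertexAI
codegpt config set gemini.project my-gcp-project
codegpt config set gemini.location us-central1
```

Both backends share the same `openai.provider`, `openai.model`, and `gemini.api_key` keys; only the backend selection and the VertexAI-specific `gemini.project` and `gemini.location` fields differ.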