288 changes: 81 additions & 207 deletions content/providers/03-community-providers/100-gemini-cli.mdx
---
title: Gemini CLI
description: Learn how to use the Gemini CLI provider to access Google's Gemini models.
---

# Gemini CLI Provider

The [ai-sdk-provider-gemini-cli](https://github.com/ben-vargas/ai-sdk-provider-gemini-cli) community provider enables using Google's Gemini models through the [@google/gemini-cli-core](https://www.npmjs.com/package/@google/gemini-cli-core) library. It's useful for developers who want to use their existing Gemini Code Assist subscription or API key authentication.

## Version Compatibility

The Gemini CLI provider supports AI SDK v4, v5, and v6:

| Provider Version | AI SDK Version | NPM Tag     | Status      |
| ---------------- | -------------- | ----------- | ----------- |
| 2.x              | v6             | `latest`    | Stable      |
| 1.x              | v5             | `ai-sdk-v5` | Maintenance |
| 0.x              | v4             | `ai-sdk-v4` | Legacy      |

```bash
# AI SDK v6 (default)
npm install ai-sdk-provider-gemini-cli ai

# AI SDK v5
npm install ai-sdk-provider-gemini-cli@ai-sdk-v5 ai@^5.0.0

# AI SDK v4
npm install ai-sdk-provider-gemini-cli@ai-sdk-v4 ai@^4.0.0
```

## Setup

The Gemini CLI provider is available in the `ai-sdk-provider-gemini-cli` module:

<Tabs items={['pnpm', 'npm', 'yarn', 'bun']}>
  <Tab>
    <Snippet text="pnpm add ai-sdk-provider-gemini-cli" dark />
  </Tab>
  <Tab>
    <Snippet text="npm install ai-sdk-provider-gemini-cli" dark />
  </Tab>
  <Tab>
    <Snippet text="yarn add ai-sdk-provider-gemini-cli" dark />
  </Tab>
  <Tab>
    <Snippet text="bun add ai-sdk-provider-gemini-cli" dark />
  </Tab>
</Tabs>

## Provider Instance

Import `createGeminiProvider` and create a provider instance with your authentication settings:

```ts
import { createGeminiProvider } from 'ai-sdk-provider-gemini-cli';

// OAuth authentication (default if authType omitted)
const gemini = createGeminiProvider({ authType: 'oauth-personal' });

// API key authentication
const gemini = createGeminiProvider({
  authType: 'api-key', // or 'gemini-api-key'
  apiKey: process.env.GEMINI_API_KEY,
});

// Vertex AI authentication
const gemini = createGeminiProvider({
  authType: 'vertex-ai',
  vertexAI: {
    projectId: 'my-project',
    location: 'us-central1',
  },
});

// Google Auth Library
const gemini = createGeminiProvider({
  authType: 'google-auth-library',
  googleAuth: myGoogleAuthInstance,
});
```

Authentication options:

- **authType** _'oauth' | 'oauth-personal' | 'api-key' | 'gemini-api-key' | 'vertex-ai' | 'google-auth-library'_ - Optional. Defaults to `'oauth-personal'`. Note that `'api-key'` and `'gemini-api-key'` are functionally identical.
- **apiKey** _string_ - Required for `'api-key'` / `'gemini-api-key'`. Get a key from [Google AI Studio](https://aistudio.google.com/apikey).
- **vertexAI** _{ projectId, location }_ - Required for `'vertex-ai'`.
- **googleAuth** _GoogleAuth_ - Required for `'google-auth-library'`.
- **cacheDir** _string_ - Optional directory for the OAuth credentials cache.
- **proxy** _string_ - Optional HTTP/HTTPS proxy URL.
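As an illustration, the documented options compose naturally with a small helper that picks the authentication method from the environment. `resolveAuthOptions` is hypothetical, not part of the provider; only the `authType` and `apiKey` option names come from the docs above:

```typescript
// Hypothetical helper (not provider API): choose provider options from the
// environment, falling back to the default OAuth flow when no key is set.
type GeminiAuthOptions =
  | { authType: 'api-key'; apiKey: string }
  | { authType: 'oauth-personal' };

function resolveAuthOptions(
  env: Record<string, string | undefined>,
): GeminiAuthOptions {
  if (env.GEMINI_API_KEY) {
    return { authType: 'api-key', apiKey: env.GEMINI_API_KEY };
  }
  return { authType: 'oauth-personal' };
}

// The result can be passed straight to createGeminiProvider(...).
console.log(resolveAuthOptions({}).authType); // → 'oauth-personal'
```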

## Language Models

Create models that call Gemini through the CLI using the provider instance:

```ts
const model = gemini('gemini-2.5-pro');
```

Supported models:

- **gemini-3-pro-preview**: Latest model with enhanced reasoning (supports `thinkingLevel`)
- **gemini-3-flash-preview**: Fast Gemini 3 model (supports `thinkingLevel`)
- **gemini-2.5-pro**: Production-ready model with 64K output tokens (supports `thinkingBudget`)
- **gemini-2.5-flash**: Fast, efficient model with 64K output tokens (supports `thinkingBudget`)

### Example

```ts
import { createGeminiProvider } from 'ai-sdk-provider-gemini-cli';
import { generateText } from 'ai';

const gemini = createGeminiProvider({
authType: 'oauth-personal',
});

const result = await generateText({
  model: gemini('gemini-2.5-pro'),
  prompt: 'Write a vegetarian lasagna recipe for 4 people.',
});
console.log(result.content[0].text);
```

Gemini CLI language models can also be used with the `streamText`, `generateObject`, and `streamObject` functions (see [AI SDK Core](/docs/ai-sdk-core) for more information).

### Model Settings

Pass optional settings as the second argument when creating a model:
```ts
const model = gemini('gemini-3-pro-preview', {
  temperature: 0.7,
  topP: 0.95,
  topK: 40,
  maxOutputTokens: 8192,
  thinkingConfig: {
    thinkingLevel: 'medium', // 'low' | 'medium' | 'high' | 'minimal'
  },
  verbose: true, // Enable debug logging
  logger: customLogger, // Custom logger (or false to disable)
});
```
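The thinking options differ by model family: Gemini 3 models take `thinkingLevel`, while Gemini 2.5 models take `thinkingBudget` (see the model list above). A hypothetical helper can pick the right shape from a model ID; the function name and the budget value are illustrative, not provider API:

```typescript
// Illustrative only: select the thinking option matching the model family.
type ThinkingConfig =
  | { thinkingLevel: 'low' | 'medium' | 'high' | 'minimal' }
  | { thinkingBudget: number };

function thinkingConfigFor(modelId: string): ThinkingConfig {
  if (modelId.startsWith('gemini-3-')) {
    return { thinkingLevel: 'medium' };
  }
  // 8192 is an arbitrary example budget, not a provider default.
  return { thinkingBudget: 8192 };
}

console.log(thinkingConfigFor('gemini-3-pro-preview')); // → { thinkingLevel: 'medium' }
```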

### Model Capabilities

| Model                    | Image Input         | Object Generation   | Tool Usage          | Tool Streaming      |
| ------------------------ | ------------------- | ------------------- | ------------------- | ------------------- |
| `gemini-3-pro-preview`   | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| `gemini-3-flash-preview` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| `gemini-2.5-pro`         | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| `gemini-2.5-flash`       | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |

<Note>
  Images must be provided as base64-encoded data. Image URLs are not supported.
</Note>
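Since the provider only accepts base64 image data, raw bytes must be encoded before being placed into a message. A minimal sketch (the helper name is illustrative; how the encoded string is attached to a message follows the AI SDK message format, so check the provider README for the exact shape):

```typescript
// Encode raw image bytes into the base64 string form the provider expects.
function toBase64Image(bytes: Uint8Array): string {
  return Buffer.from(bytes).toString('base64');
}

// Example: the first four bytes of a PNG file.
const encoded = toBase64Image(new Uint8Array([0x89, 0x50, 0x4e, 0x47]));
console.log(encoded); // → 'iVBORw=='
```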

## Authentication

The Gemini CLI provider supports four authentication methods: OAuth, API key, Vertex AI, and the Google Auth Library.

### OAuth Authentication (Recommended)

Install and authenticate the Gemini CLI globally:

```bash
npm install -g @google/gemini-cli
gemini # Follow the interactive authentication setup
```

Then use OAuth authentication in your code with `authType: 'oauth-personal'`. This uses your existing Gemini CLI credentials from `~/.gemini/oauth_creds.json`.

### API Key Authentication

1. Generate an API key from [Google AI Studio](https://aistudio.google.com/apikey).

2. Set it as an environment variable: `export GEMINI_API_KEY="YOUR_API_KEY"`
3. Use `authType: 'api-key'` with your key.

## Requirements

- Node.js 20 or higher
- Gemini CLI installed globally for OAuth authentication (`npm install -g @google/gemini-cli`)
- Valid Google account or Gemini API key

For more details, see the [provider documentation](https://github.com/ben-vargas/ai-sdk-provider-gemini-cli).