This project is a modification of the official Gemini CLI that lets it run against a local Ollama installation.
- Added new authentication option "Use Local Ollama"
- Implemented an Ollama ContentGenerator for communication with local Ollama models (sketched after this list)
- Configured default models for Ollama
- Added automatic model detection and switching
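To give a flavor of what the ContentGenerator and streaming items above involve, here is a minimal TypeScript sketch of streaming a chat completion from Ollama's `/api/chat` endpoint. It is illustrative only; the function name and structure are not taken from the project source.

```typescript
// A minimal sketch of streaming a chat completion from a local Ollama
// server. Illustrative only -- names are not taken from the project source.
async function* streamChat(
  baseUrl: string,
  model: string,
  prompt: string,
): AsyncGenerator<string> {
  const res = await fetch(`${baseUrl}/api/chat`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model,
      stream: true,
      messages: [{ role: 'user', content: prompt }],
    }),
  });
  // Ollama streams newline-delimited JSON objects.
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = '';
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split('\n');
    buffer = lines.pop()!; // keep any incomplete trailing line
    for (const line of lines) {
      if (line.trim()) yield JSON.parse(line).message?.content ?? '';
    }
  }
}
```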
- **Install and start Ollama:**

  ```bash
  # Download and install Ollama (see https://ollama.com)
  # After installation, start the Ollama server:
  ollama serve
  # The server runs by default on http://localhost:11434
  ```
- **Download a model:**

  ```bash
  ollama pull llama3.1:8b   # or any other model of your choice
  ```
The CLI can be configured using environment variables for different Ollama setups:
- **Configure the Ollama base URL** (see the resolution sketch after this list):

  ```bash
  # Default: http://localhost:11434
  export OLLAMA_BASE_URL=http://localhost:11434

  # For a remote Ollama server:
  export OLLAMA_BASE_URL=http://192.168.1.100:11434

  # For a different port:
  export OLLAMA_BASE_URL=http://localhost:8080
  ```
- **Set a default model (optional):**

  ```bash
  export OLLAMA_MODEL=llama3.1:8b   # Default model
  ```
- **Custom system prompts:**

  ```bash
  # Use a custom system prompt from .gemini/system.md
  export GEMINI_SYSTEM_MD=true

  # Use a system prompt from a custom file
  export GEMINI_SYSTEM_MD=/path/to/your/system.md

  # Create the default system prompt file (one-time setup)
  GEMINI_WRITE_SYSTEM_MD=true npm start
  ```
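For reference, resolving these variables might look like this minimal TypeScript sketch, with the documented defaults as fallbacks (not the project's actual source):

```typescript
// Sketch: resolving the variables above with the documented defaults.
// Not the project's actual source.
const baseUrl = process.env['OLLAMA_BASE_URL'] ?? 'http://localhost:11434';
const model = process.env['OLLAMA_MODEL'] ?? 'llama3.1:8b';
```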
- **Start Gemini CLI:**

  ```bash
  cd /path/to/gemini-cli
  npm start
  ```

  Or with a custom configuration:

  ```bash
  # With a custom Ollama server and system prompt
  OLLAMA_BASE_URL=http://your-server:11434 GEMINI_SYSTEM_MD=true npm start
  ```
- **First-time usage:**
- Select "Use Local Ollama" as authentication method
- The CLI will automatically connect to your Ollama installation
- The CLI automatically detects available models and selects the first one
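A sketch of what this model auto-detection could look like: Ollama's `GET /api/tags` endpoint lists locally installed models. Function and type names here are illustrative, not the project's actual source.

```typescript
// Sketch of automatic model detection via Ollama's GET /api/tags
// endpoint, which lists locally installed models. Names are illustrative.
interface OllamaTagsResponse {
  models: { name: string }[];
}

async function detectDefaultModel(baseUrl: string): Promise<string | undefined> {
  const res = await fetch(`${baseUrl}/api/tags`);
  if (!res.ok) return undefined;
  const data = (await res.json()) as OllamaTagsResponse;
  // Select the first available model, as described above.
  return data.models[0]?.name;
}
```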
The CLI supports custom system prompts for better control over AI behavior:
- **Create a custom system prompt:**

  ```bash
  # Generate the default system prompt file
  GEMINI_WRITE_SYSTEM_MD=true npm start
  # This creates .gemini/system.md with the default prompt
  ```
- **Edit the system prompt:**

  ```bash
  # Edit .gemini/system.md to customize the AI's behavior
  nano .gemini/system.md
  ```
- **Use the custom system prompt:**

  ```bash
  # Enable custom system prompt loading
  export GEMINI_SYSTEM_MD=true
  npm start
  ```
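Conceptually, the loading behavior described in these steps might look like the following TypeScript sketch (`resolveSystemPrompt` is a hypothetical helper, not the project's actual source):

```typescript
import { existsSync, readFileSync } from 'node:fs';

// Hypothetical helper mirroring the behavior described above:
// GEMINI_SYSTEM_MD=true loads .gemini/system.md, a path loads that file,
// and an unset variable falls back to the built-in default prompt.
function resolveSystemPrompt(defaultPrompt: string): string {
  const flag = process.env['GEMINI_SYSTEM_MD'];
  if (!flag || flag.toLowerCase() === 'false') return defaultPrompt;
  const path = flag.toLowerCase() === 'true' ? '.gemini/system.md' : flag;
  return existsSync(path) ? readFileSync(path, 'utf-8') : defaultPrompt;
}
```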
Important Notes:
- The system prompt is loaded once at startup and used for all interactions
- You must restart the CLI after modifying the system prompt file
- The environment variable `GEMINI_SYSTEM_MD=true` is required to load `.gemini/system.md`
- Without this variable, the CLI uses the built-in default system prompt
- ✅ Chat functionality with local models
- ✅ Streaming responses
- ✅ Full tool integration (file system, shell, etc.) via function calling (see the sketch after this list)
- ✅ Token estimation (approximate)
- ✅ Embedding support (with nomic-embed-text or similar models)
- ✅ Dynamic model switching with the `/model` command
- ✅ Automatic model detection at startup (selects the first available Ollama model)
- ✅ Custom system prompts support
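To illustrate the function-calling integration noted above: recent Ollama versions accept OpenAI-style tool declarations on `/api/chat` and return `tool_calls` when the model requests a tool. A minimal sketch follows; the tool definition here is a hypothetical example, not the project's actual tool schema.

```typescript
// Sketch of a non-streaming /api/chat request with a tool declaration.
const response = await fetch('http://localhost:11434/api/chat', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    model: 'llama3.1:8b',
    stream: false,
    messages: [{ role: 'user', content: 'List the files in the current directory.' }],
    tools: [
      {
        type: 'function',
        function: {
          name: 'run_shell_command', // hypothetical example tool
          description: 'Execute a shell command and return its output',
          parameters: {
            type: 'object',
            properties: { command: { type: 'string' } },
            required: ['command'],
          },
        },
      },
    ],
  }),
});
const data = await response.json();
// If the model decides to call a tool, the reply carries tool_calls:
console.log(data.message?.tool_calls);
```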
- `/model` - Shows all available models and the current model
- `/model <model-name>` - Switches to the specified model
- `/models` or `/m` - Short forms of `/model`
Examples:

```bash
/model              # List all available models
/model llama3.1:8b  # Switch to llama3.1:8b
/model llama2:7b    # Switch to llama2:7b
```
The command also supports tab completion for model names!
- Base URL: http://localhost:11434
- Default model: llama3.1:8b
- Embedding model: nomic-embed-text (see the sketch below)
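As an example, an embedding could be requested from the default embedding model via Ollama's `/api/embeddings` endpoint (a sketch, not the project's actual code):

```typescript
// Sketch: request an embedding from the default embedding model.
const res = await fetch('http://localhost:11434/api/embeddings', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ model: 'nomic-embed-text', prompt: 'Hello, world!' }),
});
const { embedding } = (await res.json()) as { embedding: number[] };
console.log(embedding.length); // vector dimension, e.g. 768 for nomic-embed-text
```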
The CLI works with all models available in your Ollama installation. Popular options:
- llama3.1:8b, llama3.1:70b
- llama2:7b, llama2:13b
- codellama:7b, codellama:13b
- mistral:7b
- and many others...
- Make sure Ollama is running: `ollama list` should display your models
- Check the URL: by default, Ollama runs on http://localhost:11434
- Check available models: `ollama list`
- Download a model: `ollama pull <model-name>`
- Larger models require more RAM and CPU
- For better performance, use smaller models like llama3.1:8b
The main changes are located in:
- `packages/core/src/core/ollamaContentGenerator.ts` - Ollama API integration
- `packages/core/src/core/contentGenerator.ts` - AuthType and factory updates
- `packages/core/src/utils/ollamaUtils.ts` - Ollama utility functions
- `packages/cli/src/ui/components/AuthDialog.tsx` - UI for the Ollama option
- `packages/cli/src/config/auth.ts` - Authentication validation
- `packages/cli/src/ui/commands/modelCommand.ts` - Model management command
This modification is based on the official Google Gemini CLI: https://github.com/google-gemini/gemini-cli