Gemini CLI with Ollama

This project is a modification of the official Gemini CLI that makes it work with a local Ollama installation.

Changes

  • Added new authentication option "Use Local Ollama"
  • Implemented an Ollama ContentGenerator for communication with local Ollama models (see the example request after this list)
  • Configured default models for Ollama
  • Added automatic model detection and switching
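
To illustrate what this integration talks to, the sketch below sends a single chat request directly to the Ollama HTTP API with curl; the model name and prompt are placeholders, use whatever you have pulled:

    # Minimal chat request against Ollama's /api/chat endpoint (example values)
    curl http://localhost:11434/api/chat -d '{
      "model": "llama3.1:8b",
      "messages": [{ "role": "user", "content": "Say hello" }],
      "stream": false
    }'

The CLI issues this kind of request (with streaming and tool/function-call payloads on top), so if this call succeeds, the integration has what it needs.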

Installation and Usage

Prerequisites

  1. Install and start Ollama:

    # Download and install Ollama (see https://ollama.com)
    # After installation:
    ollama serve  # Start the Ollama server (listens on http://localhost:11434 by default)
  2. Download a model:

    ollama pull llama3.1:8b  # Or any other model of your choice (see the connectivity check below)
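
Before starting the CLI, you can confirm that the server is reachable and that your model was pulled; the check below queries Ollama's model list endpoint at the default URL:

    # Returns the locally available models as JSON
    curl http://localhost:11434/api/tags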

Configuration

Environment Variables (Optional)

The CLI can be configured using environment variables for different Ollama setups:

  1. Configure Ollama Base URL:

    # Default: http://localhost:11434
    export OLLAMA_BASE_URL=http://localhost:11434
    
    # For remote Ollama server:
    export OLLAMA_BASE_URL=http://192.168.1.100:11434
    
    # For different port:
    export OLLAMA_BASE_URL=http://localhost:8080
  2. Set default model (optional):

    export OLLAMA_MODEL=llama3.1:8b  # Default model
  3. Custom System Prompts:

    # Use custom system prompt from .gemini/system.md
    export GEMINI_SYSTEM_MD=true
    
    # Use system prompt from custom file
    export GEMINI_SYSTEM_MD=/path/to/your/system.md
    
    # Create default system prompt file (one-time setup)
    GEMINI_WRITE_SYSTEM_MD=true npm start
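
If you always use the same setup, you can persist these variables instead of exporting them in every session. The snippet below appends them to ~/.bashrc as an example; use whichever startup file your shell actually reads, and adjust the values:

    # Persist example Ollama settings across shell sessions
    echo 'export OLLAMA_BASE_URL=http://localhost:11434' >> ~/.bashrc
    echo 'export OLLAMA_MODEL=llama3.1:8b' >> ~/.bashrc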

Starting the CLI

  1. Start Gemini CLI:

    cd /path/to/gemini-cli
    npm start

    Or with custom configuration:

    # With custom Ollama server and system prompt
    OLLAMA_BASE_URL=http://your-server:11434 GEMINI_SYSTEM_MD=true npm start
  2. First-time usage:

    • Select "Use Local Ollama" as authentication method
    • The CLI will automatically connect to your Ollama installation
    • The CLI automatically detects available models and selects the first one (see below for how to override this)
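
If the automatically selected model is not the one you want, you can pin a model for the session with the OLLAMA_MODEL variable described above, for example:

    # Start with a specific model instead of the first detected one
    OLLAMA_MODEL=codellama:7b npm start

You can also switch models at runtime with the /model command described below.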

System Prompt Customization

The CLI supports custom system prompts for better control over AI behavior:

  1. Create a custom system prompt:

    # Generate the default system prompt file
    GEMINI_WRITE_SYSTEM_MD=true npm start
    # This creates .gemini/system.md with the default prompt
  2. Edit the system prompt (a minimal example follows these steps):

    # Edit .gemini/system.md to customize the AI's behavior
    nano .gemini/system.md
  3. Use the custom system prompt:

    # Enable custom system prompt loading
    export GEMINI_SYSTEM_MD=true
    npm start
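
As a starting point, here is one way to write a small custom prompt from the shell; the wording is only an illustration, the file can contain any instructions you like:

    # Write a minimal custom system prompt (example content)
    mkdir -p .gemini
    printf '%s\n' \
      'You are a concise coding assistant.' \
      'Prefer short answers and show shell commands in code blocks.' \
      'Ask before running anything destructive.' > .gemini/system.md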

Important Notes:

  • The system prompt is loaded once at startup and used for all interactions
  • You must restart the CLI after modifying the system prompt file
  • The environment variable GEMINI_SYSTEM_MD=true is required to load .gemini/system.md
  • Without this variable, the CLI uses the built-in default system prompt

Supported Features

  • ✅ Chat functionality with local models
  • ✅ Streaming responses
  • ✅ Full tool integration (file system, shell, etc.) via Function Calling
  • ✅ Token estimation (approximate)
  • ✅ Embedding support (with nomic-embed-text or a similar embedding model; see the note after this list)
  • ✅ Dynamic model switching with /model command
  • ✅ Automatic model detection at startup (selects first available Ollama model)
  • ✅ Custom system prompts support
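
For the embedding feature, an embedding model has to be available in your Ollama installation, for example:

    # Pull an embedding model (nomic-embed-text is one common choice)
    ollama pull nomic-embed-text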

Available Commands

/model - Model Management

  • /model - Shows all available models and the current model
  • /model <model-name> - Switches to the specified model
  • /models or /m - Aliases for /model

Examples:

/model                    # List all available models
/model llama3.1:8b       # Switch to llama3.1:8b
/model llama2:7b         # Switch to llama2:7b

The command also supports tab completion for model names!

Default Configuration

Available Models

The CLI works with all models available in your Ollama installation. Popular options:

  • llama3.1:8b, llama3.1:70b, llama3.1:405b
  • llama2:7b, llama2:13b
  • codellama:7b, codellama:13b
  • mistral:7b
  • and many others...

Troubleshooting

Connection Issues

  • Make sure Ollama is running: ollama list should display your models
  • Check the URL: Ollama listens on http://localhost:11434 by default (see the check below)
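
A quick way to test the connection is to query the exact URL the CLI will use (falling back to the default when OLLAMA_BASE_URL is unset):

    # Should return a JSON list of models if Ollama is reachable
    curl "${OLLAMA_BASE_URL:-http://localhost:11434}/api/tags"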

Model Not Found

  • Check available models: ollama list
  • Download a model: ollama pull <model-name>

Performance Issues

  • Larger models require more RAM and CPU
  • For better performance, use smaller models like llama3.1:8b

Development

The main changes are located in:

  • packages/core/src/core/ollamaContentGenerator.ts - Ollama API integration
  • packages/core/src/core/contentGenerator.ts - AuthType and factory updates
  • packages/core/src/utils/ollamaUtils.ts - Ollama utility functions
  • packages/cli/src/ui/components/AuthDialog.tsx - UI for Ollama option
  • packages/cli/src/config/auth.ts - Authentication validation
  • packages/cli/src/ui/commands/modelCommand.ts - Model management command

Original Project

This modification is based on the official Google Gemini CLI: https://github.com/google-gemini/gemini-cli
