DoMaLi94/docker-compose-openwebui-ollama

Open WebUI with Ollama

This repository contains a Docker Compose setup for running Open WebUI with Ollama, providing a complete local AI chat interface with GPU acceleration support.

Overview

This setup includes:

  • Open WebUI: A user-friendly web interface for interacting with AI models
  • Ollama: A local AI model server for running large language models
  • GPU Support: NVIDIA GPU acceleration for faster inference

Prerequisites

  • Docker and Docker Compose installed
  • NVIDIA GPU (optional, for GPU acceleration)
  • NVIDIA Container Toolkit (if using GPU)

Quick Start

  1. Clone or download this repository
  2. Navigate to the project directory
  3. Copy the environment file and modify as needed:
    cp .env.example .env
  4. Start the services:
    docker compose up -d
  5. Access Open WebUI at: http://localhost:3000

Configuration

Environment Variables

Copy .env.example to .env and modify the following variables as needed:

  • OPEN_WEBUI_VERSION: Open WebUI version (default: v0.6.26)
  • OLLAMA_VERSION: Ollama version (default: 0.11.8)
  • OPEN_WEBUI_PORT: Host port for Open WebUI (default: 3000)
  • OLLAMA_PORT: Host port for Ollama API (default: 11434)
  • OLLAMA_BASE_URL: Override for external Ollama instance (optional)
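A minimal .env might look like this (the values mirror the documented defaults above):

```shell
# Example .env -- values mirror the documented defaults
OPEN_WEBUI_VERSION=v0.6.26
OLLAMA_VERSION=0.11.8
OPEN_WEBUI_PORT=3000
OLLAMA_PORT=11434
# Uncomment to use an external Ollama instance instead of the bundled container:
# OLLAMA_BASE_URL=http://host.docker.internal:11434
```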

Default Settings

  • Open WebUI runs on port 3000
  • Ollama API runs on port 11434
  • GPU acceleration is enabled by default (NVIDIA)

Alternative Ollama Setup

If you're running Ollama on your host machine instead of in a container:

  1. Set OLLAMA_BASE_URL=http://host.docker.internal:11434 in your .env file
  2. Comment out or remove the ollama service section in docker-compose.yaml
  3. Remove the ollama dependency from the open-webui service
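With those changes, the open-webui service might look roughly like this. This is a sketch only; the image tag, internal port 8080, and the extra_hosts entry (needed on Linux for host.docker.internal to resolve) are assumptions, so check the actual docker-compose.yaml in this repo:

```yaml
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:${OPEN_WEBUI_VERSION}
    ports:
      - "${OPEN_WEBUI_PORT}:8080"
    environment:
      - OLLAMA_BASE_URL=${OLLAMA_BASE_URL}
    # On Linux, lets host.docker.internal resolve to the Docker host:
    extra_hosts:
      - "host.docker.internal:host-gateway"
    volumes:
      - open-webui:/app/backend/data

volumes:
  open-webui:
```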

Usage

Starting the Services

```shell
# Start in detached mode
docker compose up -d

# Start with logs visible
docker compose up
```

Stopping the Services

```shell
docker compose down
```

Viewing Logs

```shell
# View all logs
docker compose logs

# View logs for a specific service
docker compose logs open-webui
docker compose logs ollama
```

Installing Models

Once the services are running, you can install models through the Open WebUI interface or directly via Ollama:

```shell
# Install a model via the Ollama container
docker exec -it ollama ollama pull llama2

# List installed models
docker exec -it ollama ollama list
```
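To preload several models at once, a small loop can help. This is a hypothetical convenience script, not part of this repo; the model list is illustrative, and the container name ollama matches this compose setup. It only prints the commands so you can review them before running:

```shell
#!/bin/sh
# Hypothetical helper: print the pull commands for a list of models.
# Remove the leading "echo" to execute them for real.
print_pull_cmds() {
  for model in "$@"; do
    echo docker exec ollama ollama pull "$model"
  done
}

print_pull_cmds llama2 codellama mistral
```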

Volumes and Data Persistence

The setup uses named volumes to persist all important data across container restarts and updates:

Volume Details

  • ollama: Stores downloaded AI models, model configurations, and Ollama settings

    • Location: /root/.ollama inside the container
    • Contains: All pulled models (e.g., llama2, codellama), model metadata, and Ollama configuration
  • open-webui: Stores all Open WebUI application data

    • Location: /app/backend/data inside the container
    • Contains: User accounts, user settings, chat histories, custom prompts, knowledge bases, and application configuration

What Gets Persisted

  • User accounts and authentication data
  • All chat conversations and history
  • User preferences and interface settings
  • Downloaded Ollama models (can be large, 4GB+ per model)
  • Custom prompts and templates
  • Knowledge bases and uploaded documents
  • Application configuration and settings

Important Notes

  • Data persists even when containers are stopped, updated, or recreated
  • Volumes are only deleted when explicitly removed with docker compose down -v
  • To back up your data, archive these Docker volumes
  • Models downloaded through Open WebUI interface are automatically stored in the ollama volume
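As a concrete backup example, the sketch below archives a named volume into the current directory via a throwaway Alpine container. This is a hypothetical helper, not part of this repo; it only prints the command so you can review it first. Note that Compose usually prefixes volume names with the project name, so check docker volume ls for the exact name:

```shell
#!/bin/sh
# Hypothetical backup helper: build the docker command that archives a
# named volume into the current directory using a throwaway Alpine container.
# It only prints the command; remove the "echo" to actually run it.
backup_volume() {
  volume="$1"
  archive="${2:-${volume}-backup.tar.gz}"
  echo docker run --rm \
    -v "${volume}:/data:ro" \
    -v "$(pwd):/backup" \
    alpine tar czf "/backup/${archive}" -C /data .
}

# Print the backup command for the ollama volume:
backup_volume ollama
```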

GPU Support

The configuration includes NVIDIA GPU support with the following settings:

  • Driver: nvidia
  • Device IDs: all available GPUs
  • Capabilities: gpu

If you don't have an NVIDIA GPU, you can remove the deploy section from the ollama service.
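The settings listed above correspond to a Compose deploy block along these lines (a sketch based on the bullets above; check docker-compose.yaml for the exact definition):

```yaml
# GPU reservation for the ollama service; remove for CPU-only use.
deploy:
  resources:
    reservations:
      devices:
        - driver: nvidia
          count: all
          capabilities: [gpu]
```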

Troubleshooting

Common Issues

  1. Port conflicts: If the configured ports are already in use, modify OPEN_WEBUI_PORT or OLLAMA_PORT in your .env file

  2. GPU not working: Ensure you have the NVIDIA Container Toolkit installed:

    # Ubuntu/Debian (requires NVIDIA's apt repository to be configured
    # first; see the NVIDIA Container Toolkit installation guide)
    sudo apt-get update
    sudo apt-get install -y nvidia-container-toolkit
    # Configure Docker to use the NVIDIA runtime, then restart it
    sudo nvidia-ctk runtime configure --runtime=docker
    sudo systemctl restart docker
  3. Open WebUI can't connect to Ollama: Check that both services are running and the OLLAMA_BASE_URL is correct

  4. Environment file not found: Make sure you've copied .env.example to .env

Health Checks

The Ollama service includes a health check. You can verify the status:

```shell
docker compose ps
```
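For reference, an Ollama healthcheck definition typically looks something like this (illustrative only; see docker-compose.yaml for the actual definition used here):

```yaml
healthcheck:
  test: ["CMD", "ollama", "list"]
  interval: 30s
  timeout: 10s
  retries: 3
```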

Accessing the Services

  • Open WebUI: http://localhost:3000 (or your configured OPEN_WEBUI_PORT)
  • Ollama API: http://localhost:11434 (or your configured OLLAMA_PORT)

Security Notes

This setup is intended for local, private use only and is not production-ready; do not expose these ports directly to the internet.

About

Docker Compose stack for OpenWebUI + Ollama. Self-hosted AI chat interface with GPU support for private local AI.
