agentifyanchor/ollama-cpu-docker
Ollama Docker Compose Setup 🚀

This Docker Compose configuration sets up two Ollama services using the ollama/ollama image.

Services 🧑‍💻

1. Main Ollama API Service (ollama-cpu) 🔌:

  • Purpose: Runs the Ollama API server exposed on port 11434.
  • Health Check: Includes a health check that verifies the API server is responding.
  • Resources: Allocates 8.5 GiB of memory.
  • Restart Policy: Restarts unless stopped.

2. Model Initialization Service (ollama-pull-llama-cpu) 🏗️:

  • Purpose: Pulls models (llama3.2, llama2, and the nomic-embed-text embedding model) when the container starts.
  • Dependencies: Depends on the main Ollama service (ollama-cpu), so the API server is up before the models are pulled.
  • Restart Policy: Does not restart after execution.

Configuration Details ⚙️:

  • Volumes: Both services use a shared volume (ollama_storage) to store data.
  • Networks: The services are connected via a custom network (demo).
  • Profiles: Both services are configured for CPU usage.

This configuration is designed for setting up Ollama in a CPU environment, initializing models on startup, and serving them via the API.
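The compose file itself is not reproduced here; a minimal sketch consistent with the description above might look like the following. The service names, image, port, volume, network, profile, and restart policies come from the text; the healthcheck command, the storage mount path, and the pull entrypoint are assumptions.

```yaml
services:
  # Main Ollama API server, exposed on port 11434
  ollama-cpu:
    image: ollama/ollama
    profiles: ["cpu"]
    restart: unless-stopped
    ports:
      - "11434:11434"
    volumes:
      - ollama_storage:/root/.ollama   # assumed mount path (Ollama's default data dir)
    networks:
      - demo
    deploy:
      resources:
        limits:
          memory: 8704M                # 8.5 GiB
    healthcheck:
      test: ["CMD", "ollama", "list"]  # assumed check: succeeds once the API answers
      interval: 30s
      timeout: 10s
      retries: 5

  # One-shot job that pulls the models against the main service, then exits
  ollama-pull-llama-cpu:
    image: ollama/ollama
    profiles: ["cpu"]
    restart: "no"
    depends_on:
      - ollama-cpu
    volumes:
      - ollama_storage:/root/.ollama
    networks:
      - demo
    # Assumed entrypoint: point the CLI at the main service and pull each model
    entrypoint: >
      sh -c "OLLAMA_HOST=ollama-cpu:11434 ollama pull llama3.2 &&
             OLLAMA_HOST=ollama-cpu:11434 ollama pull llama2 &&
             OLLAMA_HOST=ollama-cpu:11434 ollama pull nomic-embed-text"

volumes:
  ollama_storage:

networks:
  demo: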

Run the Services 💻

To start the services with the CPU profile, use the following command:

docker-compose --profile cpu up
