This guide provides comprehensive instructions for installing and setting up EverMemOS.
- System Requirements
- Installation Methods
- Docker Installation (Recommended)
- Environment Configuration
- Starting the Server
- Verification
- Troubleshooting
- Next Steps
## System Requirements

Minimum:

- Python: 3.10 or higher
- uv: Package manager (will be installed during setup)
- Docker: 20.10+
- Docker Compose: 2.0+
- RAM: At least 4GB available (for Elasticsearch and Milvus)
- Disk Space: At least 10GB free

Recommended:

- RAM: 8GB or more
- CPU: 4 cores or more
- Disk Space: 20GB or more (especially for large datasets)
EverMemOS has been tested on:
- macOS (Intel and Apple Silicon)
- Linux (Ubuntu 20.04+, Debian, etc.)
- Windows (via WSL2)
## Installation Methods

EverMemOS can be installed in two ways:
- Docker Installation (Recommended) - Use Docker Compose for all dependency services
- Manual Installation - Install and configure each service manually
This guide covers the Docker installation method. For manual installation, see Advanced Installation.
## Docker Installation (Recommended)

Clone the repository:

```bash
git clone https://github.com/EverMind-AI/EverMemOS.git
cd EverMemOS
```

Start all dependency services (MongoDB, Elasticsearch, Milvus, Redis) with one command:

```bash
docker-compose up -d
```

This will start:
- MongoDB on port 27017
- Elasticsearch on port 19200
- Milvus on port 19530
- Redis on port 6379
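To confirm the four host ports are actually reachable, you can run a quick connectivity check. This is a minimal sketch using only the Python standard library; the host and port values are the defaults listed above, so adjust them if you changed the compose file:

```python
import socket

# Host-side ports published by docker-compose (defaults from this guide).
SERVICES = {
    "MongoDB": ("localhost", 27017),
    "Elasticsearch": ("localhost", 19200),
    "Milvus": ("localhost", 19530),
    "Redis": ("localhost", 6379),
}

def check_port(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, or timed out
        return False

if __name__ == "__main__":
    for name, (host, port) in SERVICES.items():
        state = "open" if check_port(host, port) else "closed"
        print(f"{name:14} {host}:{port} -> {state}")
```

A closed port here usually means the corresponding container is still starting or failed; check `docker-compose logs` for that service.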
See Docker Setup Guide for detailed service configuration.
Check that all services are running:
```bash
docker-compose ps
```

You should see all services in the "Up" state.
If you don't have uv installed:
```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
```

After installation, restart your terminal or run:

```bash
source $HOME/.cargo/env
```

Verify the installation:

```bash
uv --version
```

Install the project dependencies:

```bash
uv sync
```

This will:
- Create a virtual environment
- Install all required Python packages
- Set up the project for development
## Environment Configuration

Copy the environment template:

```bash
cp env.template .env
```

Edit the .env file and fill in the required configurations:
```bash
# Open .env in your preferred editor
nano .env
# or
vim .env
# or
code .env
```

Choose one of the following:
```bash
# Option 1: OpenAI
LLM_API_KEY=sk-your-openai-key-here
LLM_API_BASE=https://api.openai.com/v1

# Option 2: OpenRouter
OPENROUTER_API_KEY=sk-or-v1-your-openrouter-key
OPENROUTER_API_BASE=https://openrouter.ai/api/v1

# Option 3: Other OpenAI-compatible API
LLM_API_KEY=your-api-key
LLM_API_BASE=https://your-api-endpoint.com/v1
```

Next, configure the embedding and rerank services:

```bash
# DeepInfra (recommended)
VECTORIZE_API_KEY=your-deepinfra-key
VECTORIZE_API_BASE=https://api.deepinfra.com/v1/openai

# Or configure embedding and rerank separately
EMBEDDING_API_KEY=your-embedding-key
EMBEDDING_API_BASE=https://your-embedding-endpoint.com
RERANK_API_KEY=your-rerank-key
RERANK_API_BASE=https://your-rerank-endpoint.com
```

Then select the models to use:

```bash
# Model selection
LLM_MODEL=gpt-4  # or gpt-3.5-turbo, etc.
EMBEDDING_MODEL=BAAI/bge-large-en-v1.5
RERANK_MODEL=BAAI/bge-reranker-large
```

Finally, set the service endpoints:

```bash
# Service endpoints (default values shown)
MONGODB_URI=mongodb://admin:memsys123@localhost:27017
ELASTICSEARCH_URL=http://localhost:19200
MILVUS_HOST=localhost
MILVUS_PORT=19530
REDIS_URL=redis://localhost:6379
```

For complete configuration options, see the Configuration Guide.
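Before starting the server, you can sanity-check that the required keys are present in .env. This is a minimal sketch using only the Python standard library; the key names follow the template above, and the REQUIRED list is an assumption you should adjust to match your provider choice:

```python
from pathlib import Path

# Keys from the template above; adjust to match your provider choice.
REQUIRED = ["LLM_API_KEY", "LLM_API_BASE", "MONGODB_URI", "REDIS_URL"]

def parse_env(text: str) -> dict:
    """Parse simple KEY=value lines, ignoring comments and blanks."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        # Strip trailing inline comments like: LLM_MODEL=gpt-4  # or ...
        env[key.strip()] = value.split("#")[0].strip()
    return env

def missing_keys(env: dict) -> list:
    """Return required keys that are absent or empty."""
    return [k for k in REQUIRED if not env.get(k)]

if __name__ == "__main__":
    path = Path(".env")
    if path.exists():
        for key in missing_keys(parse_env(path.read_text())):
            print(f"Missing or empty: {key}")
    else:
        print(".env not found - run: cp env.template .env")
```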
## Starting the Server

```bash
uv run python src/run.py --port 1995
```

The server will start on http://localhost:1995 by default.
You should see output similar to:

```
INFO: Started server process [12345]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:1995 (Press CTRL+C to quit)
```
The default port is 1995. To use a different port:
```bash
uv run python src/run.py --port 9000
```

## Verification

Open a new terminal and test the API:

```bash
curl http://localhost:1995/health
```

You should receive a response indicating the service is healthy.
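If curl isn't available, the same check can be scripted. This is a minimal sketch using only the Python standard library; the /health endpoint and port are the ones from this guide, but the exact response body may vary:

```python
import urllib.error
import urllib.request

def is_healthy(url: str, timeout: float = 3.0) -> bool:
    """Return True if the endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):  # refused, DNS failure, timeout
        return False

if __name__ == "__main__":
    url = "http://localhost:1995/health"
    status = "healthy" if is_healthy(url) else "not reachable"
    print(f"{url}: {status}")
```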
Test the complete workflow with the simple demo:
```bash
# In a new terminal (keep the server running)
uv run python src/bootstrap.py demo/simple_demo.py
```

This will:
- Store sample conversation messages
- Wait for indexing
- Search for relevant memories
- Display results
If this works, your installation is successful!
## Troubleshooting

Problem: `docker-compose up -d` fails or services don't start
Solutions:
- Check Docker is running: `docker info`
- Check port conflicts: `lsof -i :27017,19200,19530,6379`
- View logs: `docker-compose logs -f`
- Restart services: `docker-compose restart`
Problem: Elasticsearch or Milvus crashes due to OOM
Solutions:
- Increase Docker memory limit (Docker Desktop > Preferences > Resources)
- Reduce heap size in docker-compose.yml
- Close other memory-intensive applications
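For example, Elasticsearch's JVM heap can be capped via its standard ES_JAVA_OPTS environment variable. A sketch of the relevant compose fragment follows; the service name and current values are assumptions, so match them to your actual docker-compose.yml:

```yaml
services:
  elasticsearch:          # service name may differ in your compose file
    environment:
      # Cap the JVM heap at 512 MB (images often default to 1 GB or more)
      - ES_JAVA_OPTS=-Xms512m -Xmx512m
```

Keep -Xms and -Xmx equal, and restart the service after changing it: `docker-compose up -d elasticsearch`.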
Problem: uv sync fails with errors
Solutions:
- Update uv: `curl -LsSf https://astral.sh/uv/install.sh | sh`
- Clear the cache: `uv cache clean`
- Try with verbose output: `uv sync -v`
Problem: Server fails to start or crashes
Solutions:
- Check the .env file is configured correctly
- Verify all Docker services are running: `docker-compose ps`
- Check logs for specific errors
- Ensure port 1995 is not in use: `lsof -i :1995`
Problem: Can't connect to MongoDB/Elasticsearch/Milvus
Solutions:
- Verify services are running: `docker-compose ps`
- Check connection strings in .env
- Use host ports (27017, 19200, 19530) not container ports
- Test connections individually:

```bash
# MongoDB
mongosh mongodb://admin:memsys123@localhost:27017

# Elasticsearch
curl http://localhost:19200

# Redis
redis-cli -h localhost -p 6379 ping
```
For more troubleshooting help, see the Docker Setup Guide and the Configuration Guide.
If you prefer not to use Docker, you can install each service manually:

- MongoDB 7.0+
  - See the MongoDB Guide
- Elasticsearch 8.x
  - Download from elastic.co
  - Configure port 9200
- Milvus 2.4+
  - Follow the Milvus installation guide
  - Configure port 19530
- Redis 7.x
  - Install via package manager or from redis.io
  - Configure port 6379
After installing services manually, update connection strings in .env accordingly.
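For example, with manually installed services on the stock default ports above, the connection strings would change roughly as follows. The credentials and hosts shown are the placeholders from this guide, so match them to your actual setup:

```bash
# Manual installation: services on their stock default ports
MONGODB_URI=mongodb://admin:memsys123@localhost:27017
ELASTICSEARCH_URL=http://localhost:9200   # 9200 instead of Docker's 19200
MILVUS_HOST=localhost
MILVUS_PORT=19530
REDIS_URL=redis://localhost:6379
```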
## Next Steps

Now that EverMemOS is installed, you can:
- Try the Demos - Interactive examples showing memory extraction and chat
- Learn the API - Integrate EverMemOS into your application
- Explore Usage Examples - Common usage patterns
- Run Evaluations - Test on benchmark datasets
Related documentation:

- Docker Setup Guide - Detailed Docker configuration
- Configuration Guide - Complete configuration options
- MongoDB Guide - MongoDB installation and setup
- Quick Start (README) - Quick start overview
- Getting Started for Developers - Development setup