Tandem seamlessly integrates with Ollama to let you run powerful LLMs locally on your machine, free of charge and with complete privacy.
**macOS**

- Download Ollama from ollama.com/download/mac.
- Install the application.
- Open your terminal and verify the installation:

  ```shell
  ollama --version
  ```
**Linux**

- Install via the official script:

  ```shell
  curl -fsSL https://ollama.com/install.sh | sh
  ```

- Verify the service is running:

  ```shell
  systemctl status ollama
  ```

  (If you want to run without systemd, follow the manual instructions on their GitHub.)
**Windows**

- Download the installer from ollama.com/download/windows.
- Run the `.exe` file.
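Whichever platform you installed on, you can also confirm that the Ollama server is up programmatically. A minimal sketch in Python (assuming the default local endpoint on port 11434; the helper name is ours, not part of Ollama or Tandem):

```python
import urllib.request

# Default local Ollama endpoint (assumption: no custom OLLAMA_HOST is set)
OLLAMA_URL = "http://localhost:11434"

def ollama_is_running(base_url: str = OLLAMA_URL) -> bool:
    """Return True if the Ollama server responds on its version endpoint."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/version", timeout=2):
            return True
    except OSError:  # connection refused, timeout, DNS failure, etc.
        return False
```

This is the same reachability check a client app can perform before showing local models as available.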
Tandem automatically detects any model you have installed. We recommend GLM-4 or Llama 3 for a good balance of speed and intelligence.
Open your terminal and run:

```shell
# Recommended for most users (Fast & Capable)
ollama pull glm-4.7-flash:latest

# Or try Meta's Llama 3 (8B)
ollama pull llama3
```

Other popular models:

- `mistral`: great general-purpose model.
- `gemma`: Google's open model.
- `codellama`: specialized for coding tasks.
- Open Tandem.
- Click the Model Selector in the bottom chat bar (or go to Settings).
- You will see an Ollama section automatically populated with your installed models.
- Select a model and start chatting!
Note: If you install a new model while Tandem is open, just close and reopen the Model dropdown to refresh the list.
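For the curious: Ollama exposes a small REST API on your machine, and its `/api/tags` endpoint lists the models you have pulled, which is how client apps generally populate a model picker. A minimal sketch (the endpoint and default port are standard Ollama behavior; the helper name is ours):

```python
import json
import urllib.request

# Default local Ollama endpoint (assumption: no custom OLLAMA_HOST is set)
OLLAMA_URL = "http://localhost:11434"

def installed_models(base_url: str = OLLAMA_URL) -> list[str]:
    """Return the names of locally installed models, or [] if Ollama is unreachable."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=2) as resp:
            data = json.load(resp)
        # Each entry in "models" carries a "name" like "llama3:latest"
        return [m["name"] for m in data.get("models", [])]
    except OSError:
        return []
```

Calling `installed_models()` right after an `ollama pull` finishes will include the new model, which is why reopening the dropdown is enough to refresh Tandem's list.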