2. Start Sim with local model support:

```bash
# Start with GPU support (automatically downloads the gemma3:4b model)
docker compose -f docker-compose.ollama.yml --profile setup up -d

# For CPU-only systems:
docker compose -f docker-compose.ollama.yml --profile cpu --profile setup up -d
```
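Before waiting on the model download, it can help to confirm the containers actually came up. A minimal check, assuming the same compose file referenced above:

```shell
# Show the status of the services defined in the Ollama compose file
docker compose -f docker-compose.ollama.yml ps
```

If a service is missing or restarting, `docker compose -f docker-compose.ollama.yml logs` on that service is the next place to look.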
Wait for the model to download, then visit [http://localhost:3000](http://localhost:3000). Add more models with:
```bash
# With NVIDIA GPU support
docker compose --profile local-gpu -f docker-compose.ollama.yml up -d

# Without GPU (CPU only)
docker compose --profile local-cpu -f docker-compose.ollama.yml up -d

# If hosting on a server, set OLLAMA_URL in docker-compose.prod.yml to the
# server's public IP (e.g. http://1.1.1.1:11434), then start again
```
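Once the stack is running, one way to verify that the Ollama service is reachable and see which models it has installed is Ollama's HTTP API. A quick sketch, assuming Ollama's default port 11434 on the local host:

```shell
# List the models currently available to the Ollama server
# (adjust the host/port if you changed OLLAMA_URL)
curl -s http://localhost:11434/api/tags
```

An empty `models` array in the response means the model download has not finished yet.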