Problem: Something isn't working? Let's fix it fast.
Click your problem:
This means your frontend can't reach the API. 99% of the time, this is an API_URL problem.
1. Check if the API is running:

   ```bash
   curl http://localhost:5055/health
   # Should return: {"status": "healthy"} or similar
   ```

   - ❌ Connection refused? → Port 5055 is not exposed. Jump to the port fix below.
   - ✅ Got a response? → The API is running; continue below.
2. Are you accessing from a different machine?

   - Your browser on Computer A
   - Docker running on Computer B (server, Raspberry Pi, NAS, etc.)

   → You MUST set API_URL.
Find your server's IP:

```bash
# On the server running Docker:
hostname -I   # Linux
ipconfig      # Windows
ifconfig      # macOS
```
Set API_URL (replace 192.168.1.100 with YOUR server IP):

**Docker Compose** - Add to your `docker-compose.yml`:

```yaml
environment:
  - OPENAI_API_KEY=your_key
  - API_URL=http://192.168.1.100:5055
```

**Docker Run** - Add this flag:

```bash
-e API_URL=http://192.168.1.100:5055
```
Then restart:

```bash
docker compose down && docker compose up -d
# or for docker run, stop and restart the container
```
3. Still not working?
Check what URL your browser is trying to access:
- Open browser DevTools (F12)
- Go to Network tab
- Refresh the page
- Look for failed requests to `/api/config`
The URL should match `http://YOUR_SERVER_IP:5055/api/config`. If it shows `localhost:5055` or the wrong IP, your API_URL is not set correctly.
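If DevTools is awkward to reach, the same check can be scripted from the client machine's terminal. A minimal sketch: the `interpret_status` helper and its diagnoses are my own additions, while the `/api/config` path and port 5055 come from the steps above.

```shell
# interpret_status: map curl's reported HTTP status code to a likely diagnosis.
# curl's %{http_code} prints 000 when the TCP connection itself fails.
interpret_status() {
  case "$1" in
    200) echo "API reachable" ;;
    000) echo "connection failed: port 5055 not exposed, or wrong IP/API_URL" ;;
    *)   echo "unexpected status $1: check container logs" ;;
  esac
}

# Probe the endpoint the frontend requests (replace the IP with your server's):
#   STATUS=$(curl -s -o /dev/null -w '%{http_code}' --max-time 5 \
#     http://192.168.1.100:5055/api/config)
#   interpret_status "$STATUS"
```

A status of 000 here corresponds to the "Connection refused" case above; anything other than 200 means the browser will fail the same way.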
Check currently exposed ports:

```bash
docker ps
# Look for: 0.0.0.0:5055->5055
```

Not there? Add it:
**Docker Compose** - Update your `docker-compose.yml`:

```yaml
services:
  open_notebook:
    ports:
      - "8502:8502"
      - "5055:5055"  # Add this line!
```

**Docker Run** - Add `-p 5055:5055`:

```bash
docker run -d \
  -p 8502:8502 \
  -p 5055:5055 \
  # ... rest of your command
```

Then restart the container.
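To confirm the mapping from a script rather than by eyeballing `docker ps` output, here is a hedged sketch: `has_port` is a hypothetical helper (not part of Docker) that scans the Ports column text.

```shell
# has_port: report whether a `docker ps` Ports string publishes a given host port.
has_port() {
  # $1 = Ports column text, $2 = host port to look for
  case "$1" in
    *":$2->"*) echo yes ;;
    *)         echo no  ;;
  esac
}

# Feed it the real Ports column:
#   has_port "$(docker ps --filter name=open-notebook --format '{{.Ports}}')" 5055
has_port "0.0.0.0:8502->8502/tcp, 0.0.0.0:5055->5055/tcp" 5055   # yes
has_port "0.0.0.0:8502->8502/tcp" 5055                           # no
```

If it prints `no`, the port fix above has not taken effect; re-check your compose file and restart.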
Check the logs:

```bash
docker logs open-notebook
# or
docker compose logs
```

| Error Message | Fix |
|---|---|
| "Port already in use" | Change the port (`-p 8503:8502`) or stop the conflicting service |
| "Permission denied" | Add your user to the docker group: `sudo usermod -aG docker $USER` (then log out and back in) |
| "Invalid API key" | Check OPENAI_API_KEY in your environment variables |
| "Out of memory" | Increase the Docker memory limit to 2 GB+ in Docker Desktop settings |
| "No such file or directory" | Check that volume paths exist and are accessible |
Quick reset:

```bash
docker compose down -v   # warning: -v also removes volumes (your data)
docker compose up -d
```

Symptom: Open Notebook works when accessed on the server itself (`localhost:8502`) but not from another computer.
This is 100% an API_URL problem.
✅ The Fix:
Your API_URL must match the URL you use to access Open Notebook.
| You access via | Set API_URL to |
|---|---|
| `http://192.168.1.50:8502` | `http://192.168.1.50:5055` |
| `http://myserver:8502` | `http://myserver:5055` |
| `http://10.0.0.5:8502` | `http://10.0.0.5:5055` |
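The rule behind the table is mechanical: keep the host, swap UI port 8502 for API port 5055. A small sketch of that rule; `derive_api_url` is a hypothetical helper name and only handles the plain `:8502` cases shown above.

```shell
# derive_api_url: given the URL you type in the browser, emit the matching
# API_URL by replacing UI port 8502 with API port 5055. Host part is unchanged.
derive_api_url() {
  u="${1%/}"              # tolerate a trailing slash
  echo "${u%:8502}:5055"  # strip :8502 if present, append :5055
}

derive_api_url "http://192.168.1.50:8502"   # http://192.168.1.50:5055
derive_api_url "http://myserver:8502"       # http://myserver:5055
```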
Apply the fix:

1. Edit your `docker-compose.yml`:

   ```yaml
   environment:
     - OPENAI_API_KEY=your_key
     - API_URL=http://YOUR_SERVER_IP_OR_HOSTNAME:5055
   ```

2. Or edit your `docker.env` file:

   ```
   API_URL=http://YOUR_SERVER_IP_OR_HOSTNAME:5055
   ```

3. Restart:

   ```bash
   docker compose down && docker compose up -d
   ```
Common mistakes:

- ❌ Using `localhost` in API_URL when accessing remotely
- ❌ Using your client computer's IP instead of the server's IP
- ❌ Adding `/api` to the end (it's added automatically)
You have password auth enabled. Make sure it's set correctly:

```yaml
environment:
  - OPEN_NOTEBOOK_PASSWORD=your_password
```

Or provide the password when logging into the web interface.
You added /api to API_URL. Remove it:
❌ Wrong: API_URL=http://192.168.1.100:5055/api
✅ Correct: API_URL=http://192.168.1.100:5055
The /api path is added automatically by the application.
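The ❌/✅ pair above amounts to "strip any trailing `/api`". That can be sketched as a tiny helper (`normalize_api_url` is a hypothetical name, not part of the application):

```shell
# normalize_api_url: drop an accidental trailing /api - the app appends it itself.
normalize_api_url() {
  u="${1%/}"         # drop a trailing slash, if any
  echo "${u%/api}"   # drop a trailing /api, if present
}

normalize_api_url "http://192.168.1.100:5055/api"   # http://192.168.1.100:5055
normalize_api_url "http://192.168.1.100:5055"       # http://192.168.1.100:5055
```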
- Check the key format (OpenAI keys start with `sk-`)
- Verify you have credits in your provider account
- Check for spaces around the key in your `.env` file:

  ```bash
  # Wrong - has spaces
  OPENAI_API_KEY = sk-your-key

  # Correct
  OPENAI_API_KEY=sk-your-key
  ```

- Test your key directly:

  ```bash
  curl https://api.openai.com/v1/models \
    -H "Authorization: Bearer YOUR_KEY"
  ```
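The whitespace mistake above is easy to catch mechanically. A hedged sketch: `env_line_ok` is a hypothetical checker for a single `.env` line, not a full dotenv parser.

```shell
# env_line_ok: flag spaces around '=' in a KEY=value line (env-file parsers
# typically fold the spaces into the key or value, so the key never matches).
env_line_ok() {
  case "$1" in
    *" ="*|*"= "*) echo "bad: remove spaces around '='" ;;
    *=*)           echo "ok" ;;
    *)             echo "bad: not a KEY=value line" ;;
  esac
}

env_line_ok "OPENAI_API_KEY = sk-your-key"   # bad: remove spaces around '='
env_line_ok "OPENAI_API_KEY=sk-your-key"     # ok
```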
If you're using Ollama or LM Studio:

```yaml
environment:
  - API_CLIENT_TIMEOUT=600     # 10 minutes
  - ESPERANTO_LLM_TIMEOUT=180  # 3 minutes
```

Recommended timeouts by setup:
- Cloud APIs (OpenAI, Anthropic): Default (300s)
- Local Ollama with GPU: 600s
- Local Ollama with CPU: 1200s
- Remote LM Studio: 900s
- Cloud APIs: OpenAI, Anthropic, Groq (fastest)
- Local models: try smaller models first
  - Fast: `gemma2:2b`, `phi3:mini`
  - Medium: `llama3:8b`, `mistral:7b`
  - Slow: `llama3:70b`, `mixtral:8x7b`

```bash
# This prevents first-run delays
ollama run llama3
# Press Ctrl+D to exit after the model loads
```

```bash
# Container status
docker ps

# Container logs (last 100 lines)
docker logs --tail 100 open-notebook > logs.txt

# Or for docker compose
docker compose logs --tail 100 > logs.txt

# Check resource usage
docker stats --no-stream
```

- Discord - Fastest response from the community
- GitHub Issues - Bug reports and features
- Full Troubleshooting Guide - More detailed solutions
Before asking:

- Include your `docker-compose.yml` (remove API keys!)
- Include relevant logs
- Describe your setup (local vs remote, OS, Docker version)
- Say what you've already tried
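Before pasting logs or a compose file, keys can be scrubbed automatically. A sketch: the `sk-` pattern matches OpenAI-style keys only, so double-check the output by eye if you use other providers.

```shell
# redact_keys: mask OpenAI-style sk-... tokens on stdin before sharing.
redact_keys() {
  sed -E 's/sk-[A-Za-z0-9_-]+/sk-REDACTED/g'
}

# Gather everything requested above in one go (container name assumed):
#   { docker ps; docker logs --tail 100 open-notebook 2>&1; } | redact_keys > diagnostics.txt

echo "OPENAI_API_KEY=sk-abc123" | redact_keys   # OPENAI_API_KEY=sk-REDACTED
```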
| Scenario | API_URL Value | Example |
|---|---|---|
| Local access only | Not needed | Leave unset |
| Remote on same network | `http://SERVER_IP:5055` | `http://192.168.1.100:5055` |
| Remote with hostname | `http://HOSTNAME:5055` | `http://myserver.local:5055` |
| Behind reverse proxy (no SSL) | `http://DOMAIN:5055` | `http://notebook.local:5055` |
| Behind reverse proxy (SSL) | `https://DOMAIN/api` | `https://notebook.example.com/api` |
Remember: The API_URL is from your browser's perspective, not the server's!