Common issues and their fixes.
### Error: Could not locate the bindings file

This happens when using npm instead of pnpm, or when native modules weren't built properly.
Fix:

```bash
# Remove existing node_modules
rm -rf node_modules package-lock.json

# Use pnpm instead
pnpm install
pnpm approve-builds  # Select all packages when prompted
pnpm run dev
```

Make sure you pulled the latest version of the repository, which includes the lockfile:

```bash
git pull origin main
```

Some packages (like `better-sqlite3` and `node-llama-cpp`) require the Xcode Command Line Tools:

```bash
xcode-select --install
```

### Port already in use

If you see `EADDRINUSE` errors, the server tries to automatically kill processes on the port. If that fails:
```bash
# Find what's using the port
lsof -ti:7777

# Kill it
kill -9 <PID>
```

### WhatsApp connection issues

- Make sure no other WhatsApp Web session is interfering
- Try `npm run cli whatsapp logout` first, then `npm run cli whatsapp login`
- Clear the auth folder: `rm -rf ~/.openwhale/whatsapp-auth/`
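The auth-clearing step above can be sketched as a small guard that only deletes the folder when it actually exists and reports what it did. `reset_auth` is a hypothetical helper (not part of the CLI), demonstrated on a throwaway directory rather than the real auth folder:

```shell
# Hypothetical helper: remove a session directory only if present.
reset_auth() {
  dir="$1"
  if [ -d "$dir" ]; then
    rm -rf "$dir"
    echo "removed $dir"
  else
    echo "nothing to remove at $dir"
  fi
}

# Demonstrate on a temp directory, not the real ~/.openwhale path
demo="$(mktemp -d)/whatsapp-auth"
mkdir -p "$demo"
reset_auth "$demo"
```

Point it at `~/.openwhale/whatsapp-auth/` once you're sure you want to force a fresh login.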
### iMessage not working

- macOS only: iMessage is not available on other platforms
- Ensure Full Disk Access is granted to your terminal (System Settings → Privacy & Security → Full Disk Access)
- Ensure the `imsg` CLI is installed: `brew install steipete/tap/imsg`
- Make sure Messages.app is signed in with your Apple ID
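The platform and CLI checks above can be bundled into a quick preflight script. `need` is a hypothetical helper (not part of any CLI here); it only reports whether a command is on `PATH`:

```shell
# Hypothetical preflight helper: report whether a command is installed.
need() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "ok: $1"
  else
    echo "missing: $1 ($2)"
  fi
}

# iMessage is macOS-only, so warn on other platforms
[ "$(uname -s)" = "Darwin" ] || echo "warning: iMessage only works on macOS"
need imsg "install with: brew install steipete/tap/imsg"
```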
### Twitter/X not working

- Verify the `bird` CLI is installed: `bird check`
- Test authentication: `bird whoami`
- Ensure cookies are fresh: log into Twitter/X in your browser
- Check `.env` has `TWITTER_ENABLED=true`
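The `.env` check is easy to script with `grep`. The sketch below uses a throwaway file standing in for the real `.env`, so it is safe to run anywhere:

```shell
# Sketch: verify the Twitter flag in an env file (throwaway stand-in for .env)
envfile="$(mktemp)"
printf 'TWITTER_ENABLED=true\n' > "$envfile"

if grep -q '^TWITTER_ENABLED=true$' "$envfile"; then
  echo "Twitter channel enabled"
else
  echo "set TWITTER_ENABLED=true in .env"
fi

rm -f "$envfile"
```

Run the same `grep` against your actual `.env` in the project root.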
### Ollama not responding

- Make sure Ollama is running: `ollama serve`
- Verify the host in `.env`: `OLLAMA_HOST=http://localhost:11434`
- Pull a model: `ollama pull llama3`
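A quick way to confirm both steps at once is to probe the server over HTTP. `check_ollama` is a hypothetical helper; `/api/tags` is the stock Ollama endpoint that lists pulled models, so a successful response means the server is up and reachable at the configured host:

```shell
# Hypothetical helper: probe the Ollama server configured in OLLAMA_HOST
check_ollama() {
  host="${OLLAMA_HOST:-http://localhost:11434}"
  if curl -fsS "$host/api/tags" >/dev/null 2>&1; then
    echo "ollama reachable at $host"
  else
    echo "ollama not reachable at $host"
  fi
}

check_ollama
```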
### Embedding model download fails

The local embedding model (~300MB) downloads automatically on first use. If it fails:

- Check you have enough disk space
- Ensure `node-llama-cpp` is properly installed: `pnpm approve-builds`
- Fall back to OpenAI or Gemini embeddings by setting the corresponding API key
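The disk-space check above can be done in one line of portable shell. This sketch uses a 400,000 KB threshold as an assumption, leaving some headroom over the ~300MB model:

```shell
# Sketch: check free space in the current directory before the download.
# 400000 KB is an assumed threshold with headroom over the ~300MB model.
avail_kb=$(df -k . | awk 'NR==2 {print $4}')
if [ "$avail_kb" -lt 400000 ]; then
  echo "low disk space: ${avail_kb}KB free"
else
  echo "disk space ok: ${avail_kb}KB free"
fi
```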
### Web UI not loading

- Check the server is running at the correct port
- Try a hard refresh (`Cmd+Shift+R`)
- Clear browser cache for localhost
- Check the terminal for server errors
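The first check can be automated with a quick probe, assuming the default port 7777 from the port-conflict steps earlier (override via `PORT` if yours differs):

```shell
# Sketch: probe the local server; assumes port 7777 unless PORT is set
port="${PORT:-7777}"
if curl -fsS "http://localhost:$port" >/dev/null 2>&1; then
  echo "server responding on port $port"
else
  echo "no response on port $port; check the server logs"
fi
```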