```shell
# Download Ollama from https://ollama.ai/
# Or use winget
winget install Ollama.Ollama

# Start Ollama
ollama serve

# Download the model
ollama pull llama3.2:3b
```

Create `backend/.env`:
```
LOCAL_LLM_ENDPOINT=http://localhost:11434/v1
LOCAL_LLM_MODEL=llama3.2:3b
LOCAL_LLM_TYPE=ollama
PORT=3001
FRONTEND_URL=http://localhost:3002
```

Install and start the backend:

```shell
cd backend
npm install
npm start
```

In another terminal, run the agent test:

```shell
cd backend
npm run test:agent
```

Install and start the landing page:

```shell
cd landing-page
npm install
npm run dev
```

Verify the setup:

- Open the browser at http://localhost:3002
- Click the AI Agent icon (bottom right)
- Check the status: it should show "Connected"
- Send a test message
- Verify the response comes from the LLM
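A rough sketch of how the widget's "Connected" label could be derived from the status endpoint's JSON. The `{ connected, model }` response shape and the `statusLabel` helper are assumptions for illustration, not the backend's documented contract:

```javascript
// Hypothetical helper: turn an /api/agent/status response into the label
// shown in the widget. The response shape is assumed, not documented.
function statusLabel(status) {
  if (!status || typeof status.connected !== "boolean") return "Unknown";
  return status.connected
    ? `Connected (${status.model ?? "unknown model"})`
    : "Disconnected";
}

console.log(statusLabel({ connected: true, model: "llama3.2:3b" }));
// → Connected (llama3.2:3b)
console.log(statusLabel({ connected: false })); // → Disconnected
```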
```shell
# Agent status
curl http://localhost:3001/api/agent/status

# Agent health
curl http://localhost:3001/api/agent/health

# Test before deployment
curl http://localhost:3001/api/agent/test

# Chat (requires the agent to be connected)
curl -X POST http://localhost:3001/api/ai/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello"}'
```

- Agent must be connected to an external LLM/cloud AI service
- No local fallback; a real AI service is required
- The frontend controls the agent connection
- Auto-reconnect every 30 seconds
- Test before deployment: `npm run test:agent`
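The 30-second auto-reconnect described above could be implemented with a simple interval loop. This is a sketch under our own naming (`startReconnect` and the `connect` callback are hypothetical), not the frontend's actual code:

```javascript
// Retry the agent connection on a fixed interval (30 s per the notes
// above) and return a function that stops the loop.
function startReconnect(connect, intervalMs = 30_000) {
  const timer = setInterval(async () => {
    try {
      await connect(); // e.g. re-check http://localhost:3001/api/agent/status
    } catch {
      // still down; try again on the next tick
    }
  }, intervalMs);
  return () => clearInterval(timer);
}

// Usage: start polling, then stop when the widget is closed.
const stop = startReconnect(async () => {
  /* fetch the status endpoint here */
});
stop();
```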
If you don't want to use a local LLM:

```
# Azure OpenAI
AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com
AZURE_OPENAI_KEY=your-api-key

# OR OpenAI Public API
OPENAI_API_KEY=sk-...
```

If the agent won't connect:

- Check that Ollama is running: `ollama list`
- Verify the endpoint: `curl http://localhost:11434/v1/models`
- Check the backend logs
- Run the test: `npm run test:agent`
If chat isn't working:

- Ensure the agent is connected
- Check the backend logs for errors
- Verify the LLM service is responding
- Test the endpoint directly
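The checks above can also be scripted. Below is a sketch that probes both services through an injected fetch-like function, so it can run against the real endpoints or a stub; the `diagnose` helper is hypothetical, but the URLs are the ones used throughout this guide:

```javascript
// Run the troubleshooting checks and collect a short report.
// fetchFn is injected so the helper can be exercised without a live server.
async function diagnose(fetchFn) {
  const checks = [
    ["Ollama models endpoint", "http://localhost:11434/v1/models"],
    ["Backend agent health", "http://localhost:3001/api/agent/health"],
  ];
  const report = [];
  for (const [name, url] of checks) {
    try {
      const res = await fetchFn(url);
      report.push(`${name}: ${res.ok ? "OK" : `HTTP ${res.status}`}`);
    } catch (err) {
      report.push(`${name}: unreachable (${err.message})`);
    }
  }
  return report;
}

// Against a live setup, pass the real fetch: diagnose(fetch).
// Example with a stub that pretends Ollama is down:
diagnose(async (url) => {
  if (url.includes("11434")) throw new Error("ECONNREFUSED");
  return { ok: true, status: 200 };
}).then((report) => report.forEach((line) => console.log(line)));
// → Ollama models endpoint: unreachable (ECONNREFUSED)
// → Backend agent health: OK
```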
Ready to deploy! 🚀