# 2.3.65 Satellite: DeerFlow

Handle: `deerflow`
URL: http://localhost:34681

DeerFlow is a community-driven deep research framework that combines LLMs with web search, web crawling, and multi-agent workflows to generate comprehensive research reports.
Key Features:
- Multiple LLM provider support: Works with Ollama and OpenAI-compatible APIs
- Web search integration: Supports Tavily, InfoQuest, DuckDuckGo, and Brave search engines
- Web crawling tools: Integrates with Jina and InfoQuest for content extraction
- RAG support: Connect to RAGFlow, Qdrant, or Milvus for retrieval-augmented generation
- MCP integration: Model Context Protocol server support for enhanced capabilities
- TTS/STT capabilities: Text-to-speech and speech-to-text functionality
- Multi-agent workflows: Complex research tasks distributed across specialized agents
- Python 3.12+: Modern Python backend with FastAPI
```bash
# Build images from GitHub repository
harbor pull deerflow

# Start DeerFlow and open in browser
harbor up deerflow --open
```

- This is a Satellite service (a specialized web UI + backend that integrates with LLM backends like Ollama)
- Default configuration uses DuckDuckGo for search and Jina for web crawling (no API keys required)
The following options can be set via `harbor config`:
```bash
# Web UI port
HARBOR_DEERFLOW_HOST_PORT            34681

# Backend API port (used by the UI)
HARBOR_DEERFLOW_BACKEND_HOST_PORT    34682

# Workspace directory (persistent data)
HARBOR_DEERFLOW_WORKSPACE            ./deerflow

# Frontend image configuration
HARBOR_DEERFLOW_IMAGE                bytedance/deer-flow-frontend
HARBOR_DEERFLOW_VERSION              latest

# Backend image configuration
HARBOR_DEERFLOW_BACKEND_IMAGE        bytedance/deer-flow-backend
HARBOR_DEERFLOW_BACKEND_VERSION      latest

# Model configuration (used for basic/reasoning/code)
# Note: must use a model with proper tool-calling support (e.g., qwen3:8b, mistral)
HARBOR_DEERFLOW_MODEL                qwen3:8b

# Search and crawler configuration
HARBOR_DEERFLOW_SEARCH_API           duckduckgo
HARBOR_DEERFLOW_CRAWLER_ENGINE       jina

# Additional settings
HARBOR_DEERFLOW_DEBUG                false
HARBOR_DEERFLOW_ALLOWED_ORIGINS      http://localhost:34681
```

Harbor renders the DeerFlow backend configuration at container startup:

- Base fragment: `deerflow/configs/deerflow.config.yaml`
- Optional overrides (for example, Ollama): additional fragments mounted into `/app/configs/*.yaml`
- Rendered output inside the container: `/app/conf.yaml`
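Conceptually, rendering amounts to deep-merging the base fragment with any override fragments, later fragments winning on conflicts. A minimal sketch of that merge step (the fragment contents here are illustrative, not Harbor's actual files):

```python
def deep_merge(base: dict, override: dict) -> dict:
    """Merge two config fragments: override values win,
    nested mappings merge recursively."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# Hypothetical fragments: a base config plus an Ollama override.
base = {"SEARCH_API": "duckduckgo", "BASIC_MODEL": {"model": "qwen3:8b"}}
override = {"BASIC_MODEL": {"base_url": "http://ollama:11434/v1"}}
print(deep_merge(base, override))
# BASIC_MODEL keeps its model and gains the override's base_url
```

Overrides only need to specify the keys they change; everything else falls through from the base fragment.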
If you use the Ollama variant:

```bash
harbor up deerflow ollama --open
```

Harbor will mount an override config fragment that points DeerFlow to Harbor's Ollama endpoint (OpenAI-compatible `/v1`).
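The override fragment might look roughly like this (key names are illustrative, following DeerFlow's `conf.yaml` conventions; check the rendered `/app/conf.yaml` for the actual schema and values):

```yaml
# Illustrative Ollama override fragment; exact keys may differ
BASIC_MODEL:
  base_url: http://ollama:11434/v1
  model: "qwen3:8b"
  api_key: ollama
```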
DeerFlow persists data in:

- `deerflow/data/` - research reports, cache, and generated content
- `deerflow/configs/` - Harbor-managed configuration fragments
- `deerflow/override.env` - optional environment variable overrides
Use the Ollama compose variant for automatic local LLM integration:

```bash
harbor up deerflow ollama --open
```

This automatically configures DeerFlow to use your Harbor Ollama instance.
DeerFlow relies heavily on tool calling (function calling) for its multi-agent workflow. If you see "No tool calls in legacy mode - ending workflow gracefully" in the logs, your model doesn't support tool calling properly.
In testing, qwen3:8b and newer or larger models have worked well.
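If you want to script this check, matching the fallback message quoted above in the backend logs is enough; a small sketch:

```python
# Log line emitted when the model fails to produce tool calls
# (taken from the troubleshooting note above).
FALLBACK = "No tool calls in legacy mode - ending workflow gracefully"

def model_lacks_tool_calls(log_text: str) -> bool:
    """Return True if the backend log contains the tool-calling fallback."""
    return FALLBACK in log_text

print(model_lacks_tool_calls("... " + FALLBACK + " ..."))  # True
```

Equivalently, pipe `harbor logs deerflow-backend` through `grep "No tool calls"`.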
Change the model:

```bash
harbor config set deerflow.model "qwen3:8b"
harbor restart deerflow
```
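Note that `harbor config` keys correspond to the `HARBOR_*` variables listed earlier; a rough sketch of that assumed naming convention:

```python
def config_key_to_env(key: str) -> str:
    # Assumed Harbor convention: prefix with HARBOR_,
    # replace dots with underscores, uppercase the rest.
    return "HARBOR_" + key.replace(".", "_").upper()

print(config_key_to_env("deerflow.model"))      # HARBOR_DEERFLOW_MODEL
print(config_key_to_env("deerflow.host.port"))  # HARBOR_DEERFLOW_HOST_PORT
```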
```bash
# All services
harbor logs deerflow

# Backend only
harbor logs deerflow-backend

# Frontend only
harbor logs deerflow
```

The UI waits for the backend to be healthy before starting. If the UI doesn't start:
```bash
harbor logs deerflow-backend
harbor restart deerflow
```

If DeerFlow fails to start, inspect the rendered config inside the backend container:
```bash
# While DeerFlow is running, check the rendered config
harbor exec deerflow-backend cat /app/conf.yaml
```