This guide helps you debug DeerFlow workflows, view model outputs, and troubleshoot common issues.
- Viewing Model Output
- Debug Logging Configuration
- LangChain Verbose Logging
- LangSmith Tracing
- Docker Compose Debugging
- Common Issues
## Viewing Model Output

When you need to see the complete model output, including tool calls and internal reasoning, you have several options:
Set `DEBUG=True` in your `.env` file or configuration:

```env
DEBUG=True
```

This enables debug-level logging throughout the application, showing detailed information about:
- System prompts sent to LLMs
- Model responses
- Tool calls and results
- Workflow state transitions
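Under the hood this is ordinary Python logging. Here is a minimal sketch of how such a flag is usually wired to a log level; the helper name and the exact parsing are assumptions for illustration, not DeerFlow's actual startup code:

```python
import logging
import os

def resolve_log_level() -> int:
    # Hypothetical helper: map a DEBUG env flag onto a logging level.
    debug = os.getenv("DEBUG", "False").strip().lower() in ("1", "true", "yes")
    return logging.DEBUG if debug else logging.INFO

os.environ["DEBUG"] = "True"
logging.basicConfig(level=resolve_log_level())
```

With `DEBUG` unset or falsy, the same helper falls back to `INFO`, which matches the default behavior described above.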
Add these environment variables to your `.env` file for detailed LangChain output:

```env
# Enable verbose logging for LangChain
LANGCHAIN_VERBOSE=true
LANGCHAIN_DEBUG=true
```

This will show:
- Chain execution steps
- LLM input/output for each call
- Tool invocations
- Intermediate results
For advanced debugging and visualization, configure LangSmith integration:

```env
LANGSMITH_TRACING=true
LANGSMITH_ENDPOINT="https://api.smith.langchain.com"
LANGSMITH_API_KEY="your-api-key"
LANGSMITH_PROJECT="your-project-name"
```

LangSmith provides:
- Visual trace of workflow execution
- Performance metrics
- Token usage statistics
- Error tracking
- Comparison between runs
To get started with LangSmith:

1. Sign up at [smith.langchain.com](https://smith.langchain.com)
2. Create a project
3. Copy your API key
4. Add the configuration to your `.env` file
## Debug Logging Configuration

DeerFlow uses Python's standard logging levels:
- DEBUG: Detailed diagnostic information
- INFO: General informational messages
- WARNING: Warning messages
- ERROR: Error messages
- CRITICAL: Critical errors
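The levels are ordered by increasing severity, so a logger set to `INFO` drops `DEBUG` records but passes everything above. A quick standard-library illustration (the logger name here is just for the demo):

```python
import logging

# Severity values increase from DEBUG (10) up to CRITICAL (50).
assert logging.DEBUG < logging.INFO < logging.WARNING < logging.ERROR < logging.CRITICAL

logger = logging.getLogger("deerflow.demo")  # demo name, not from DeerFlow
logger.setLevel(logging.INFO)

print(logger.isEnabledFor(logging.DEBUG))    # False: below the INFO threshold
print(logger.isEnabledFor(logging.WARNING))  # True: at or above the threshold
```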
Development mode (console):

```bash
uv run main.py
```

Logs will be printed to the console.
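If you also want the console output persisted to disk, a file handler can be attached with the standard library. This is a sketch, not DeerFlow's default behavior, and the filename is arbitrary:

```python
import logging

# Mirror log records to a file in addition to the console (hypothetical setup).
logger = logging.getLogger("deerflow.filedemo")
handler = logging.FileHandler("deerflow-debug.log")
handler.setFormatter(logging.Formatter("%(asctime)s [%(levelname)s] %(message)s"))
logger.addHandler(handler)

logger.warning("this line is appended to deerflow-debug.log")
```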
Docker Compose:

```bash
# View logs from all services
docker compose logs -f

# View logs from backend only
docker compose logs -f backend

# View logs with timestamps
docker compose logs -f --timestamps
```

## LangChain Verbose Logging

When `LANGCHAIN_VERBOSE=true` is enabled, you'll see output like:
```
> Entering new AgentExecutor chain...
Thought: I need to search for information about quantum computing
Action: web_search
Action Input: "quantum computing basics 2024"
Observation: [Search results...]
Thought: I now have enough information to answer
Final Answer: ...
```
```env
# Basic verbose mode
LANGCHAIN_VERBOSE=true

# Full debug mode with internal details
LANGCHAIN_DEBUG=true

# Both (recommended for debugging)
LANGCHAIN_VERBOSE=true
LANGCHAIN_DEBUG=true
```

## LangSmith Tracing

1. Create a LangSmith account: Visit [smith.langchain.com](https://smith.langchain.com)
2. Get your API key: Navigate to Settings → API Keys
3. Configure environment variables:

   ```env
   LANGSMITH_TRACING=true
   LANGSMITH_ENDPOINT="https://api.smith.langchain.com"
   LANGSMITH_API_KEY="lsv2_pt_..."
   LANGSMITH_PROJECT="deerflow-debug"
   ```

4. Restart your application
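Before restarting, you can sanity-check that the variables actually reached the process. A tiny helper sketch; `missing_langsmith_vars` is illustrative, not part of DeerFlow:

```python
import os

REQUIRED = ("LANGSMITH_TRACING", "LANGSMITH_API_KEY", "LANGSMITH_PROJECT")

def missing_langsmith_vars(env=os.environ):
    # Return the names of required LangSmith variables that are unset or empty.
    return [name for name in REQUIRED if not env.get(name)]

if missing_langsmith_vars():
    print("Missing:", ", ".join(missing_langsmith_vars()))
```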
With tracing enabled, you get:

- Visual traces: See the entire workflow execution as a graph
- Performance metrics: Identify slow operations
- Token tracking: Monitor LLM token usage
- Error analysis: Quickly identify failures
- Comparison: Compare different runs side-by-side
To view your traces:

- Run your workflow as normal
- Visit smith.langchain.com
- Select your project
- View traces in the "Traces" tab
## Docker Compose Debugging

Add debug environment variables to your `docker-compose.yml`:
```yaml
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      # Debug settings
      - DEBUG=True
      - LANGCHAIN_VERBOSE=true
      - LANGCHAIN_DEBUG=true
      # LangSmith (optional)
      - LANGSMITH_TRACING=true
      - LANGSMITH_ENDPOINT=https://api.smith.langchain.com
      - LANGSMITH_API_KEY=${LANGSMITH_API_KEY}
      - LANGSMITH_PROJECT=${LANGSMITH_PROJECT}
```

```bash
# Start with verbose output
docker compose up

# Or in detached mode and follow logs
docker compose up -d
docker compose logs -f backend
```

```bash
# View logs from last 100 lines
docker compose logs --tail=100 backend

# View logs with timestamps
docker compose logs -f --timestamps

# Check container status
docker compose ps

# Restart services
docker compose restart backend
```

## Common Issues

**Issue:** You can't see the model's full output.

**Solution:** Enable debug logging as described above:
```env
DEBUG=True
LANGCHAIN_VERBOSE=true
LANGCHAIN_DEBUG=true
```

**Issue:** You need to see the system prompts sent to the LLM.

**Solution:** Debug logging will show system prompts. Look for log entries like:
```
[INFO] System Prompt:
You are DeerFlow, a friendly AI assistant...
```
**Issue:** Model responses are not what you expect.

**Solution:** Enable LangSmith tracing or check model responses in verbose mode:

```env
LANGCHAIN_VERBOSE=true
```

**Issue:** You need to see what a specific workflow node receives and returns.

**Solution:** Add custom logging in specific nodes. For example, in `src/graph/nodes.py`:
```python
import logging

logger = logging.getLogger(__name__)

def my_node(state, config):
    logger.debug(f"Node input: {state}")
    result = ...  # ... your code ...
    logger.debug(f"Node output: {result}")
    return result
```

**Issue:** Logs are too noisy.

**Solution:** Adjust log level for specific modules:
```python
# In your code
import logging

logging.getLogger('langchain').setLevel(logging.WARNING)
logging.getLogger('openai').setLevel(logging.WARNING)
```

**Issue:** The workflow runs slowly.

**Solution:** Enable LangSmith or add timing logs:
```python
import time

start = time.time()
result = some_function()
logger.info(f"Execution time: {time.time() - start:.2f}s")
```

**Issue:** You want to track token usage.

**Solution:** With LangSmith enabled, token usage is automatically tracked. Alternatively, check model responses:
```env
LANGCHAIN_VERBOSE=true
```

Look for output like:
```
Tokens Used: 150
  Prompt Tokens: 100
  Completion Tokens: 50
```
If you're still experiencing issues:
- Check existing GitHub Issues
- Enable debug logging and LangSmith tracing
- Collect relevant log output
- Create a new issue with:
  - Description of the problem
  - Steps to reproduce
  - Log output
  - Configuration (without sensitive data)