A proof-of-concept for generating daily team digests from Slack channels using AI agents.
- Install dependencies:

  ```bash
  poetry install
  ```

- Get a Google AI Studio API key from https://aistudio.google.com/app/apikey

- Set up environment:

  ```bash
  cp .env.example .env   # then edit .env and add your GOOGLE_API_KEY
  ```

- Generate test data and run:

  ```bash
  ./generate_data.sh --days 3 --channels 3
  python -m daily_digest.main --mock
  ```
- Multi-team aggregation: Fetches messages from mechanical, electrical, and software team channels
- AI-powered analysis: Uses specialized agents powered by Google Gemini:
- TeamAnalyzer: Extracts updates, blockers, and decisions
- DependencyLinker: Detects cross-team dependencies
- Feedback System: Learns from user reactions
- Personalization: Ranks content by persona (Lead, IC, PM, Executive)
- Smart distribution: Posts to digest channel, threads details, and DMs leadership
- Mock testing: In-process mock Slack client for development
- Synthetic data generation: Creates realistic multi-day conversations for testing
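The persona-based ranking described above could work roughly like this. This is a minimal sketch, not the project's actual implementation: the `PERSONA_WEIGHTS` values and the `DigestItem` fields are illustrative assumptions.

```python
from dataclasses import dataclass

# Hypothetical relevance weights per persona; the real project may use
# different categories and scoring.
PERSONA_WEIGHTS = {
    "Lead":      {"blocker": 3.0, "decision": 2.0, "update": 1.0},
    "IC":        {"blocker": 2.0, "decision": 1.0, "update": 2.0},
    "PM":        {"blocker": 2.0, "decision": 3.0, "update": 1.0},
    "Executive": {"blocker": 1.0, "decision": 3.0, "update": 0.5},
}

@dataclass
class DigestItem:
    kind: str        # "blocker", "decision", or "update"
    text: str
    base_score: float

def rank_for_persona(items, persona):
    """Order digest items by persona-weighted relevance, highest first."""
    weights = PERSONA_WEIGHTS[persona]
    return sorted(items, key=lambda i: i.base_score * weights[i.kind], reverse=True)
```

With weights like these, an Executive digest leads with decisions while a Lead digest leads with blockers, even for items with equal base scores.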
```bash
# 1. Install dependencies
poetry install

# 2. Set up environment variables
cp .env.example .env
# Edit .env and add your Google AI Studio API key:
# GOOGLE_API_KEY=your-key-here
# CHAT_MODEL=models/gemini-2.5-flash

# 3. Generate synthetic conversation data (for testing)
./generate_data.sh --days 5 --channels 5

# 4. Run digest with mock Slack data + real AI analysis
poetry run python -m daily_digest.main --mock --preview

# 5. View results in terminal or check data/memory/*.json files
```

When you run with `--mock --preview`:
- Real Gemini AI analyzes the generated conversations
- Terminal output shows the formatted digest
- Memory files updated:
  - `data/memory/blockers.json`: tracked blockers
  - `data/memory/decisions.json`: team decisions
  - `data/memory/dependency_graph.json`: cross-team dependencies
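The memory stores are plain JSON files, so they are easy to inspect or reset between runs. A sketch of what one round-trip might look like; the field names here are illustrative, not the project's actual schema:

```python
import json
import os
import tempfile

# Illustrative shape for a tracked blocker; the project's real schema may differ.
blockers = [
    {
        "id": "blk-001",
        "team": "electrical",
        "summary": "Waiting on revised PCB footprint from mechanical",
        "first_seen": "2024-05-01",
        "status": "open",
    }
]

path = os.path.join(tempfile.gettempdir(), "blockers.json")
with open(path, "w") as f:
    json.dump(blockers, f, indent=2)

# Later runs can reload the store and update statuses in place.
with open(path) as f:
    loaded = json.load(f)
print(loaded[0]["status"])
```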
Expected output:
- Agent analysis takes 15-20 seconds (real API calls)
- Extracts 9+ events, action items, dependencies
- Shows formatted digest preview in terminal
Creates realistic multi-day Slack conversations for testing the digest pipeline.
Simple command (works from anywhere):

```bash
/path/to/ThreadPilot/generate_data.sh --days 5 --channels 5
```

From the project directory:

```bash
cd ThreadPilot
poetry run generate-data --days 5 --channels 5 --output data/my_conversations.json
```

Options:
- `--days N`: number of days to generate (default: 5)
- `--channels N`: number of channels to generate (default: 5, max: 5)
- `--output PATH`: output file path (default: data/synthetic_conversations.json)
Generated data includes:
- 16 personas across 5 teams (mechanical, electrical, software, product, QA)
- Story arcs spanning multiple days with dependencies and blockers
- Realistic conversation patterns (standups, bug reports, decisions)
- Thread replies and emoji reactions
- Cross-team dependencies
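A synthetic conversation record can be as simple as a list of message dicts built from persona and template tables. This is a sketch of the idea only; the generator's real output format is defined in scripts/generate_synthetic_data.py and the names below are assumptions:

```python
import random

# Illustrative personas; the real generator defines 16 across 5 teams.
PERSONAS = [
    {"name": "Ava", "team": "mechanical"},
    {"name": "Ben", "team": "electrical"},
    {"name": "Cho", "team": "software"},
]

STANDUP_TEMPLATES = [
    "Yesterday: {done}. Today: {plan}.",
    "Blocked on {blocker}, can someone from {team} take a look?",
]

def make_standup_message(persona, day):
    """Build one synthetic standup-style Slack message."""
    template = random.choice(STANDUP_TEMPLATES)
    text = template.format(
        done="finished test fixtures",
        plan="wiring review",
        blocker="firmware flashing",
        team=persona["team"],
    )
    return {
        "user": persona["name"],
        "team": persona["team"],
        "day": day,
        "text": text,
        "reactions": [],
        "thread_replies": [],
    }

msg = make_standup_message(PERSONAS[0], day=1)
print(msg["user"])
```

Story arcs then come from reusing the same blocker or decision across several days of generated messages.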
Test with mock Slack data + real AI analysis (recommended for testing):

```bash
poetry run python -m daily_digest.main --mock --preview
```

- Uses fixture data from `fixtures/slack_mock.json`
- Real Gemini AI analyzes the conversations
- Shows preview in terminal (doesn't post to Slack)
- Takes 20-30 seconds for AI analysis
With mock data, post results to mock Slack:

```bash
poetry run python -m daily_digest.main --mock
```

With real Slack (production):

```bash
poetry run python -m daily_digest.main
```

Preview mode (generate but don't post):

```bash
python -m daily_digest.main --preview
```

Debug mode:

```bash
python -m daily_digest.main --debug
```

Run the test suite:

```bash
# Run all tests
pytest tests/ -v

# Run with coverage
pytest tests/ --cov=daily_digest
```

Project layout:

```
src/daily_digest/
├── config.py               # Channel and distribution configuration
├── slack_client.py         # Real + mock Slack client wrapper
├── message_aggregator.py   # Fetch and filter messages
├── agents/                 # LangChain agents
│   ├── base.py
│   ├── extractor.py
│   ├── blocker_detector.py
│   ├── decision_tracker.py
│   └── summarizer.py
├── digest_generator.py     # Orchestrates agents
├── formatter.py            # Formats Slack blocks and messages
├── distributor.py          # Posts to Slack + exports DMs
├── state.py                # Last-run tracking
├── observability.py        # Metrics logging
└── main.py                 # CLI entry point

scripts/
├── generate_synthetic_data.py   # Synthetic conversation generator
├── send_dm_bot.py               # Standalone DM sender (reads JSON)
└── demo_personalized_dms.py     # Example: generate + export DMs

docs/
└── PERSONALIZED_DMS.md          # DM bot setup and usage guide

data/
├── synthetic_conversations.json   # Generated test data
├── personalized_dms.json          # Exported DM messages (JSON)
├── memory/                        # Persistent memory stores
│   ├── blockers.json
│   └── decisions.json
└── last_run.json                  # State tracking
```
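slack_client.py wraps a real and a mock client behind one interface, which is what makes `--mock` possible. A minimal sketch of that pattern; the class and method names here are illustrative, not the project's actual API:

```python
class MockSlackClient:
    """In-process stand-in for Slack: serves fixture data and records posts."""

    def __init__(self, fixture_messages):
        # fixture_messages: dict of channel -> list of message dicts,
        # e.g. loaded from fixtures/slack_mock.json
        self.fixture_messages = fixture_messages
        self.posted = []

    def fetch_messages(self, channel):
        return self.fixture_messages.get(channel, [])

    def post_message(self, channel, text):
        # Record instead of sending, so tests can assert on output.
        self.posted.append({"channel": channel, "text": text})
        return {"ok": True}

def get_client(mock, fixtures=None):
    """Return the mock client in --mock mode, else the real Slack client."""
    if mock:
        return MockSlackClient(fixtures or {})
    raise NotImplementedError("real Slack client requires slack_sdk and a bot token")

client = get_client(mock=True, fixtures={"#mech": [{"text": "standup"}]})
print(client.fetch_messages("#mech")[0]["text"])
```

Because the mock exposes the same methods as the real client, the rest of the pipeline never needs to know which one it is talking to.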
```bash
# Step 1: Generate test conversations
cd ThreadPilot
./generate_data.sh --days 3 --channels 3

# Step 2: Run digest with mock Slack + real AI
poetry run python -m daily_digest.main --mock --preview

# Step 3: View results
# - Check terminal output for formatted digest
# - Open data/memory/blockers.json to see extracted blockers
# - Open data/memory/decisions.json to see tracked decisions
```

What to expect:
- Generation takes 2-3 minutes (creates realistic conversations)
- Analysis takes 20-30 seconds (Gemini API calls)
- You'll see HTTP 200 OK logs when Gemini API is working
- Preview shows full digest with extracted events, blockers, decisions
```bash
python -m daily_digest.main --preview
```

- API key required: get a free key from https://aistudio.google.com/app/apikey and add it to `.env`
- Model configuration: use `CHAT_MODEL=models/gemini-2.5-flash` (the `models/` prefix is required)
- Mock mode: the `--mock` flag only mocks the Slack client, NOT the AI agents (real Gemini analysis still happens)
- Rate limits: the free Gemini API tier is rate limited; data generation includes 5-second delays between calls
- Project directory: Poetry commands must be run from the directory containing `pyproject.toml`
- Viewing logs: run with `--preview` to see output in the terminal, or check the `data/memory/*.json` files
- Security: never commit the `.env` file (it is already in `.gitignore`)
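A small startup check can catch the two most common misconfigurations above: a missing API key and a model name without the `models/` prefix. This is a sketch under the assumption that config comes straight from the environment; the project's actual config loading may differ:

```python
import os

def validate_config(env=os.environ):
    """Fail fast on the common .env mistakes noted above."""
    api_key = env.get("GOOGLE_API_KEY")
    if not api_key:
        raise RuntimeError("GOOGLE_API_KEY is not set; add it to .env")

    model = env.get("CHAT_MODEL", "models/gemini-2.5-flash")
    if not model.startswith("models/"):
        raise RuntimeError(f"CHAT_MODEL must include the 'models/' prefix, got: {model}")
    return model

# Example: a model name missing the prefix is rejected before any API call.
try:
    validate_config({"GOOGLE_API_KEY": "x", "CHAT_MODEL": "gemini-2.5-flash"})
except RuntimeError as e:
    print("config error:", e)
```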
```
Slack Channels → Aggregator  →  Agents   → Generator → Formatter →  Distributor
      ↓              ↓             ↓           ↓           ↓             ↓
  mechanical    filter noise    extract     combine     format     #daily-digest
  electrical                    blockers    insights    blocks     leadership DMs
  software                      decisions                          threads
```
```
Digest Pipeline → JSON Export  → Standalone Bot → Rate Limiter → Slack DMs
                       ↓               ↓               ↓             ↓
                  audit log      read messages     1 msg/sec     all users
                  retry-able     personalized      respects      individual
                                 by role/team      limits        delivery
```
Benefits:
- Decoupled: Digest generation separate from message delivery
- Reliable: JSON acts as audit log, easy to retry failures
- Scalable: Bot handles rate limits independently (~1 msg/sec)
- Flexible: Can re-send same digest or customize per user
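The bot's ~1 msg/sec pacing can be as simple as a sleep between sends, with the exported JSON on disk acting as the retryable audit log. A sketch only: the function name and JSON shape are assumptions, and the real implementation lives in scripts/send_dm_bot.py:

```python
import json
import time

def send_dms(dm_file, send_fn, rate_per_sec=1.0, dry_run=False):
    """Read exported DMs and deliver them, pausing to respect Slack rate limits."""
    with open(dm_file) as f:
        dms = json.load(f)  # assumed shape: [{"user": "U123", "text": "..."}, ...]

    sent = 0
    for dm in dms:
        if dry_run:
            print(f"[dry-run] would DM {dm['user']}: {dm['text'][:40]}")
        else:
            send_fn(dm["user"], dm["text"])
            time.sleep(1.0 / rate_per_sec)  # ~1 message per second by default
        sent += 1
    return sent
```

Because the input file survives on disk, a failed or interrupted run can simply be retried with the same JSON, which is the decoupling benefit noted above.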
Usage:

```bash
# Step 1: Generate digest and export personalized DMs
python scripts/demo_personalized_dms.py --export

# Step 2: Send DMs to users
python scripts/send_dm_bot.py --input data/personalized_dms.json --dry-run
python scripts/send_dm_bot.py --input data/personalized_dms.json
```

See docs/PERSONALIZED_DMS.md for detailed setup and usage.